Bias and Prejudice.

Data is not only the fuel for models and algorithms but also our responsibility towards other humans. If someone grants us access to data, we are accountable and responsible for that data - how and by whom it is processed, what outcomes it will produce, and who will be affected by those outcomes. As developers and operators of self-learning systems, we must comply with relevant laws and regulations, including those governing personal data protection, and follow ethical values and principles. Values and principles that are not always clear, not always universal, and not always easy to follow. We must remember, however, that the outcomes of algorithms and models may adversely affect sensitive groups, for example, by discriminating against them or manipulating them.

The recent paper by the European Parliament's experts, "Auditing the quality of datasets used in algorithmic decision-making systems," makes it clear - we must ensure that the datasets we use are of good quality and quantity, are subject to adequate and appropriate good practices for data science, and follow a sound privacy and data protection approach. The document's authors say, "there are good options to identify, avoid, and mitigate biases. To do so, it is utterly important to understand where and how biases can be introduced." To ensure that our algorithms and models are fair and ethical, we must first understand the source of any bias.

The data we use for machine and deep learning applications is usually quite biased, as data is always the result of someone's work. Let's consider the following example. A bank is eager to deploy an "AI" system for customer creditworthiness assessment (CA). The initial identification of the resources at hand is quite promising, as the bank has vast historical data gathered during the manual CA processes performed by credit analysts. The bank developed the model, put it through validation and testing, and then had to take a few steps back. Why? The results were unacceptable, as the model evidently "had" problems treating one gender fairly. It consistently downgraded women, perceiving them as a risk to timely payments. Why?

Historical data means data that was created by someone for a particular task or goal. In our example, the bank hired several analysts with one job - to evaluate the probability of default of different groups and give them an appropriate score. Banks must follow written policies and procedures that include clear indicators (gender, financial standing, marital status, etc.) and an assessment methodology. If all analysts followed such guidance, the world would be almost perfect, at least in terms of financial inclusion. As we live in the real world, however, reality is not that perfect. Many people - as they are only human - have their own biases resulting from their experience and other factors. If the analysts put such biases into their scoring assessments, the model will likely find an "interesting" correlation that may adversely affect the model and its users, namely clients.

As a consequence, certain groups (including racial ones) may be discriminated against and financially excluded. The bank (or other entity) will likely face a lawsuit, and its reputation will suffer. The regulator or supervisor may also conduct ad hoc inspections and, in certain circumstances, impose an administrative fine. The model will also have to be retrained on new datasets and double-checked to mitigate the bias-related risks it may pose. While it is not part of today's column, privacy and data protection concerns may also arise.
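How could the bank have caught this earlier? As a minimal sketch (the column names, the toy values, and the four-fifths threshold are illustrative assumptions, not the bank's actual setup), a simple check of approval rates across groups on the validation set can flag the problem before the model ever reaches a client:

```python
import pandas as pd

# Hypothetical validation results - the column names and the toy values
# are illustrative assumptions, not the bank's actual data.
validation = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,   0,   1,   0,   1,   1,   0,   1],
})

# Approval rate the model produced for each group.
rates = validation.groupby("gender")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate divided by the highest.
# A value well below ~0.8 (the common "four-fifths" rule of thumb)
# is a signal to stop and investigate before deployment.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact on one group.")
```

More elaborate fairness toolkits exist, but even a crude ratio like this gives the validation team a concrete trigger for investigation.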

The EU's proposal for artificial intelligence systems introduces a new category of such applications - high-risk AI systems - that will be subject to specific requirements, including data and data governance (Article 10). The final text of the proposal is still under development. However, the current wording of Article 10 is quite helpful when it comes to "good practices" for data governance (even though it does not include essential provisions regarding internal governance), as it calls for "appropriate data governance and management practices" to be applied to each stage of the AI life cycle, including training, validation, and testing. This also applies to the gathering stage, when the data science team is looking for relevant (fit-for-purpose) data that machine learning specialists will use for training and validation.

We must, however, face the truth - bias may occur even at the preprocessing stage, for example, in the form of labeling bias. Humans are only human. They have their own perceptions and may not see the world as others do. Therefore, even the "cleaned" data may include bias that is not apparent at first sight. That is why the validation and testing phases are essential.
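One hedged way to surface labeling bias during validation is to compare how different annotators label the same items. The sketch below assumes a hypothetical annotation table (the column names and values are made up for illustration) and looks for both outright disagreement and systematic differences between annotators:

```python
import pandas as pd

# Hypothetical annotations: two annotators labelling the same items.
# Column names and values are illustrative assumptions.
annotations = pd.DataFrame({
    "item_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "annotator": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "label":     [1, 1, 0, 1, 1, 1, 0, 0],
})

# Share of items on which annotators disagree: a crude signal that
# individual perception (and possibly bias) is leaking into the labels.
disagreement = annotations.groupby("item_id")["label"].nunique().gt(1).mean()
print(f"Items with conflicting labels: {disagreement:.0%}")

# Per-annotator positive-label rate: a systematic gap between annotators
# is worth a closer look before the "cleaned" data is used for training.
print(annotations.groupby("annotator")["label"].mean())
```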

This pre-execution phase is critical, and a lack of sufficient resources, practices, and processes may result in bias and discrimination and have severe consequences for the entity. To ensure that this stage is "protected" from AI-related risks, we should engage many stakeholders who can help us identify the relevant data, specify the business goal, and eliminate threats (not always apparent) linked to the datasets and the model itself. If the model is based on sensitive data, it is also essential to identify the relevant lawful basis for data processing. If we do not have (at least) one, an administrative fine might hit us.

There are many practices and approaches to data gathering and preprocessing, including those that put ethics at the top. Some organizations are developing tools to screen all the data and find potential pitfalls. However, human judgment should always be part of developing machine and deep learning solutions, as AI systems may not be sensitive to certain biases and may not be able to identify some dangers. Many ethical and responsible AI documents, including the AI Act in Article 14, call for effective human oversight and judgment, even for semi-autonomous applications. Combined with a proper risk management system that accounts for AI-related risks, this will help the company manage the bias and discrimination risks - or at least make them less likely to occur.

That is not all. A good (responsible or ethical) AI model is transparent and explainable. This does not mean that every aspect of the model's operation should be extractable and visible. The information the model provides should be aligned with the specific needs of its stakeholders. For example, data scientists and machine learning experts may need detailed information to re-validate and re-train the model and ensure it works properly. Completely different information may be required by the manager responsible for the line of business that wishes to apply the specific AI system. It is, however, of utmost importance to ensure that the relevant information (triggers) is constantly provided to the right addressees. Only then will the company be able to react swiftly.
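As a rough illustration of serving those two audiences from one model, the sketch below uses scikit-learn's permutation importance on toy data (the dataset, feature names, and model choice are assumptions) to produce a per-feature report for the data science team and a one-line summary for the business owner:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy data standing in for a credit-scoring validation set (an assumption).
feature_names = ["income", "debt_ratio", "tenure_months"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Detailed view for data scientists: per-feature permutation importance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")

# Condensed view for the business owner: the single most influential feature.
top = feature_names[int(np.argmax(result.importances_mean))]
print(f"Summary for the business line: decisions are driven mainly by '{top}'.")
```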

The remaining question is - how should we assess what is and is not ethical? There is no universal answer, as cultural differences, goals, and values exist. Each company should define what "ethical" means for it and what will signify that it has responsible AI. Ensuring that will also require a risk-based approach and applying the proportionality principle in practice. Data and models are imperfect and will never be perfect (accurate, free of errors), just as our world is not. This, however, does not mean that we should not try to make it a better place for humans.