
June 19, 2022 - AI-Driven Healthcare Technologies and Algorithmic Gender Bias: A Need for Data Feminism - An Overview of Pin Lean Lau’s Seminar Talk at TK MILAB

September 6, 2022, 12:26

Pin Lean Lau held the final talk of the Artificial Intelligence National Laboratory (MILAB) online seminar series in 2021. She is a researcher at Brunel University London and holds a PhD in Law from Central European University. She introduced an ongoing project of hers on the regulation of AI bias in healthcare. Her main goal was to explore how AI bias may arise in healthcare technologies, and how such bias may be avoided through better regulation and better data science.

Dr Lau started her talk by outlining the main topic and aims of her ongoing research project. Her primary interest is the emergence and regulation of gender-based algorithmic discrimination in healthcare technologies. Her principal aims are to (1) understand the relationship between AI bias and the quality of healthcare discriminated patients receive, (2) detect the impact of algorithmic bias in healthcare on women’s rights, and (3) build a case for data feminism as a response to the issue.

If used properly, AI technologies can lead to tremendous improvements in the efficiency and quality of healthcare. Dr Lau highlighted two examples of this. First, algorithms have proved very efficient at predicting the risk of, or diagnosing, certain diseases; in the case of breast cancer, for instance, algorithms proved to be 85% more reliable than human doctors. Second, algorithms can help prescribe personalized medicine. By processing large amounts of personal data, they can predict which medication would be safest and most effective for a given patient.

Algorithmic bias occurs when these AI systems treat patients differently without any medical justification, most often in the form of discrimination based on gender, race or socio-economic background. Dr Lau argued that such biases usually arise because the initial data fed to these algorithms is itself biased, for two reasons. First, owing to the long-standing male-dominated nature of medicine, data on the male body is much more accurate and representative than data on the female body. Second, the human labelling of medically relevant data (for instance, who counts as male, female or transgender) is often subjective, paving the way for bias.
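To make the point about skewed initial data concrete, here is a minimal sketch (not taken from the talk) of how one might audit a hypothetical patient dataset for such a representation gap; the column names, group labels and figures are invented for illustration.

```python
# Illustrative sketch, not from Dr Lau's talk: auditing a hypothetical
# training dataset for a representation gap between groups.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str = "sex") -> pd.DataFrame:
    """Report how many records each group contributes and its share of the data."""
    counts = df[group_col].value_counts()
    shares = (counts / len(df)).round(3)
    return pd.DataFrame({"records": counts, "share": shares})

if __name__ == "__main__":
    # Tiny synthetic example in which male patients are over-represented.
    df = pd.DataFrame({
        "sex": ["male"] * 70 + ["female"] * 30,
        "diagnosis": [1, 0] * 35 + [1, 0] * 15,
    })
    print(audit_representation(df))
```

A model trained on data like this "learns" far more about one group than the other, which is one simple way the gap Dr Lau describes can translate into unequal performance.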

In the case of gender-based discrimination, such biases are already detectable in healthcare AI technologies. Dr Lau cited numerous studies showing how AI systems led to mistreatment or misdiagnosis disproportionately more often for female than for male patients. This has devastating impacts. First, it reinforces past inequalities, as these algorithms make future decisions based on data that contain the very biases we aim to eliminate. Second, it unfairly lowers the overall quality of healthcare women receive. Third, it endangers the mental and physical well-being of women.

Dr Lau recommended two remedies for these issues. First, there is a need to better regulate AI systems used in healthcare. She cited attempts to this end by the European Union, the United Nations and the Federal Trade Commission; while she praised the direction these initiatives take, she highlighted ambiguities and difficulties in their enforcement and implementation. Second, she argued that there is a need for data feminism. Feminist data science aims to (1) fill in the gaps in data to make it more representative, (2) eliminate bias in the data we already have, and (3) use data to detect and highlight unfair discrimination (a minimal illustration of this last aim follows below). Ultimately, as elsewhere in this technological field, solutions to social problems lie beyond legal regulation: genuine change is needed in how technology actors conduct themselves and in how they view their responsibility towards society and the individual.
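In the same spirit, the sketch below (again an illustration, not a method presented in the talk) shows how data could be used to surface a potential disparity, here by comparing a model's false-negative rate across genders on hypothetical labelled predictions; all names and numbers are assumptions for the example.

```python
# Illustrative sketch: comparing how often a model misses truly ill patients,
# split by group. Hypothetical columns: sex, has_disease (ground truth),
# predicted (model output).
import pandas as pd

def false_negative_rate_by_group(df: pd.DataFrame,
                                 group_col: str = "sex",
                                 truth_col: str = "has_disease",
                                 pred_col: str = "predicted") -> pd.Series:
    """Share of truly ill patients the model fails to flag, per group."""
    ill = df[df[truth_col] == 1]
    missed = ill[ill[pred_col] == 0]
    return (missed.groupby(group_col).size() / ill.groupby(group_col).size()).fillna(0)

if __name__ == "__main__":
    df = pd.DataFrame({
        "sex":         ["female"] * 5 + ["male"] * 5,
        "has_disease": [1, 1, 1, 1, 0,  1, 1, 1, 1, 0],
        "predicted":   [0, 0, 1, 1, 0,  1, 1, 1, 0, 0],
    })
    # A large gap between groups flags a disparity worth investigating further.
    print(false_negative_rate_by_group(df))
```

Simple audits like this do not fix biased data on their own, but they illustrate how data can be turned back on the systems themselves to make unfair treatment visible.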

During the discussion, some participants argued that while Dr Lau gave a compelling justification for combating AI bias, it was unclear whether this end can actually be achieved. Dr Lau responded that there are at least some obvious steps we can already take: for instance, while it is true that machine-learned biases are hard to regulate, we can start by eliminating the biases contained in the initial data.

__________________________________________________________

This work was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the FK_21 Young Researcher Excellence Program (138965) and the Artificial Intelligence National Laboratory Program.

__________________________________________________________

The views expressed above belong to the author and do not necessarily represent the views of the Centre for Social Sciences.