
AI in Cancer Care: Challenges and Path to Equity

  • Gizem Uluc

  • January 31, 2025

  • 23 min read

Artificial Intelligence (AI) is changing the way cancer is diagnosed and treated, offering new possibilities for early detection, accurate diagnoses, and personalized care. Despite its potential, AI in healthcare faces significant challenges, particularly biases in the data used to train these models. These biases can lead to unequal access to accurate diagnoses and treatments, creating disparities in healthcare outcomes.

This paper uses a company called PAICON and its initiatives as a case study to demonstrate real-world efforts to address and overcome these challenges. PAICON is a digital health company focused on creating the most genetically and technically diverse cancer data lake for developing robust and unbiased AI algorithms. PAICON’s AI tool, SatSightDX, improves the classification of Microsatellite Instability (MSI) or Microsatellite Stability (MSS) in colorectal cancer by using diverse, globally sourced datasets. SatSightDX contributes to making cancer care more equitable by addressing biases and offering faster, more accurate diagnostic outcomes.

The paper also examines the challenges of building unbiased AI tools, including difficulties in collecting diverse data, standardizing datasets, and managing regulatory differences between countries. Finally, it offers practical recommendations, such as establishing global standards for data sharing, expanding access to underrepresented populations, and fostering international collaborations.

Introduction

Artificial Intelligence (AI) is rapidly changing cancer research, and its applications have grown exponentially in recent years as it supports clinical settings by enabling disease screening and early diagnosis, reducing the costs associated with cancer detection techniques, and enhancing medical imaging-based prediction tasks (Corti et al., 2023; Cross, Choma, & Onofrey, 2024). Its potential extends across biomedical discovery, diagnosis, prognosis, treatment, and prevention, enabling precise tumor characterization, personalized cancer treatment strategies, and the development of innovative therapies through the integration of AI with clinical expertise and large-scale data analysis (DeCamp & Lindvall, 2023).

Although AI has made significant advancements in various fields, its adoption in healthcare, especially in cancer research and clinical care, has not progressed at a comparable pace. For AI tools to deliver accurate and reliable diagnostic outcomes, they need to be trained on large volumes of high-quality data (Corti et al., 2023). However, gathering such data comes with its own challenges, especially when aiming to create diagnostic AI tools that work across diverse populations (DeCamp & Lindvall, 2023). For instance, in the field of pathology, a major issue is that most high-quality histology data come from predominantly white populations in developed countries, leading to biased AI models. Celi et al. (2022) highlight this disparity in Fig. 1, which shows that 40.8% of datasets come from the United States, followed by 13.7% from China, while many low- and middle-income countries are barely represented. This imbalance limits the ability of AI models to be truly inclusive and risks worsening existing healthcare inequalities by neglecting underrepresented populations.

Figure 1. (a) Distribution of overall database nationality in AI in medicine. (b) Heatmap of distribution of overall database nationality in AI in medicine (Adapted from Celi et al., 2022)

 

Because of systematic errors, or “bias,” incorporating AI into clinical healthcare systems presents serious ethical and societal issues. In medical AI, bias refers to systematic errors that cause a difference between expected and actual results and that can disadvantage some patient groups (Koçak, Ponsiglione, & Stanzione, 2024). For instance, Obermeyer et al. (2019) found that a machine learning algorithm designed to predict health risks consistently underestimated the needs of Black patients. This happened because the algorithm used healthcare expenses as a proxy for overall health, failing to account for the fact that Black patients tend to have lower healthcare expenditures at the same level of need.
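To make this proxy-label mechanism concrete, the minimal synthetic sketch below (not Obermeyer et al.’s actual model or data) trains a model on cost rather than need. A group that spends less at the same level of illness is flagged less often and must be sicker to be flagged at all:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
need = rng.normal(size=n)                    # true health need (unobserved)
group = rng.integers(0, 2, size=n)           # 1 = group facing access barriers
# Assumption: at equal need, the disadvantaged group incurs lower cost.
cost = need - 0.7 * group + rng.normal(scale=0.3, size=n)
prior_cost = cost + rng.normal(scale=0.3, size=n)  # observable feature

# Train on cost as the label: the proxy, not the quantity we care about.
model = LinearRegression().fit(prior_cost.reshape(-1, 1), cost)
risk = model.predict(prior_cost.reshape(-1, 1))

# Among patients flagged as highest risk, the disadvantaged group had to be
# sicker to clear the bar, and fewer of its members are flagged at all.
flagged = risk > np.quantile(risk, 0.9)
for g in (0, 1):
    sel = group == g
    print(f"group {g}: flagged rate = {flagged[sel].mean():.1%}, "
          f"mean true need of flagged = {need[sel & flagged].mean():.2f}")
```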

To address these disparities and leverage AI’s potential to improve healthcare equity, it is essential to develop and train AI tools through comprehensive, transparent, and inclusive approaches. This paper examines the application of AI in cancer care, highlights the challenges associated with achieving unbiased outcomes, and presents a real-world case study of PAICON, a company dedicated to designing equitable AI diagnostic tools. Furthermore, it provides practical recommendations for leveraging AI and large language models (LLMs) to advance equity in cancer care.

Current Strands of Literature

The integration of artificial intelligence (AI) into cancer care has provided an opportunity to make traditional cancer diagnosis and treatment faster and less expensive, while also sparking significant academic discourse on the ethical, technical, and societal challenges that must be addressed to generate reliable outcomes (Cross, Choma, & Onofrey, 2024). This section reviews the opportunities, challenges, and limitations of AI in cancer care. It addresses critical issues such as technical barriers to AI implementation in healthcare, the prevalence of bias in AI systems, real-world applications of AI in cancer diagnosis, and strategies to improve health equity.

Realizing the Promise of AI in Cancer Care

The use of AI in healthcare has the potential to improve cancer care through innovative applications that facilitate diagnosis, treatment, and access to healthcare. According to Corti et al. (2023), AI-powered cancer diagnostic tools such as DeepPath have demonstrated exceptional accuracy in analyzing hematoxylin & eosin (H&E)-stained whole slide images (WSIs), high-resolution digital scans of entire microscope slides that allow for detailed analysis and AI-driven diagnostics. Recent studies using DeepPath achieved an area under the curve (AUC) of 0.97 in classifying lung tissue as normal, lung adenocarcinoma, or lung squamous cell carcinoma (Corti et al., 2023).

Similarly, Dankwa-Mullan and Weeraratne (2022) emphasized that AI can play a major role in improving personalized cancer care by identifying patients at high risk of cancer through predictive modeling, enabling early interventions and more targeted treatments. However, although AI has great potential to make cancer diagnostics faster, its limitations prevent AI applications from being fully utilized in cancer care, and one of the biggest obstacles is achieving unbiased outcomes.

Addressing Bias in AI Models

One of the most significant challenges in fully realizing the potential of AI in cancer care is the bias embedded in AI training data and model development (Marcus, 2021). Biased outcomes from AI tools in the cancer care sector lead to inequalities in diagnosis, treatment, and overall healthcare outcomes (Marcus, 2021). For instance, Marcus (2021) highlights that AI algorithms for detecting skin cancer, predominantly trained on datasets with images of light-skinned individuals from developed countries, tend to perform less accurately when diagnosing skin cancer in patients with darker skin tones. Such disparities not only exacerbate existing health inequalities but also increase the risk of delayed or missed diagnoses in these underrepresented populations.

Bias in AI outcomes in healthcare can emerge at various stages of the algorithm development process, including data collection and preparation (e.g., ensuring data is anonymized, consistently formatted, and of similar quality), model training, evaluation, and deployment within clinical settings (DeCamp & Lindvall, 2023). According to Norori et al. (2021), bias in AI models for healthcare can be categorized into three types: data-driven bias, which arises from unrepresentative or imbalanced datasets; algorithmic bias, which arises from design and optimization choices that may amplify existing biases; and human bias, which stems from subjective decisions made during data labeling or feature selection (Norori et al., 2021). These biases collectively contribute to disparities in healthcare outcomes, particularly for underrepresented populations.

Figure 2. Sources of Bias in AI Models for Healthcare (Adapted from Norori et al., 2021)

 

Achieving unbiased outcomes in AI systems is inherently challenging, as bias can be introduced at various stages of the model development process. Each stage demands detailed attention, making the task both complex and resource-intensive. Nonetheless, implementing unbiased AI tools in healthcare is crucial to ensure their applicability across diverse ethnic backgrounds. By addressing these biases, healthcare can become not only more equitable but also more inclusive, ultimately leading to improved outcomes worldwide.

Furthermore, as part of efforts to address this issue, the European Parliamentary Research Service (EPRS) published an article examining the applications, risks, and ethical and societal impacts of integrating AI into healthcare. In this article, bias in AI and the perpetuation of existing inequities are identified as one of the seven major risks associated with the use of AI in medicine and healthcare (Lekadir et al., 2022). Lekadir et al. (2022) emphasize that the most common causes of AI biases in healthcare arise from biased and imbalanced datasets, often shaped by structural bias and systemic discrimination. These biases may result from how data is collected or from the interactions between healthcare providers and their patients. Beyond these human-centric biases, disparities in access to quality equipment and digital technologies, as well as a lack of diversity and interdisciplinary collaboration in technological, scientific, clinical, and policymaking domains, further exacerbate the challenges of integrating AI in clinical settings (Lekadir et al., 2022).

The U.S. Food and Drug Administration (FDA) has introduced guiding principles to help reduce bias in AI systems used in medical devices (FDA, 2023). One of the key recommendations is to ensure that clinical study participants and datasets represent the diverse range of people who will use these devices. By including data from a variety of patient groups, AI models can perform more accurately across different populations. This approach not only reduces bias but also helps developers identify situations where the AI might not work well, ensuring the system is fair, reliable, and effective for everyone (FDA, 2023).
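In practice, this principle translates into auditing model performance separately for each patient subgroup rather than reporting a single pooled number. A minimal sketch, with hypothetical labels, scores, and group names:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical predictions from a diagnostic model, with a demographic
# group recorded for each patient.
df = pd.DataFrame({
    "y_true":  [0, 1, 1, 0, 1, 0, 1, 1],
    "y_score": [0.2, 0.8, 0.6, 0.3, 0.9, 0.6, 0.5, 0.7],
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# One pooled AUC can hide a weak subgroup; report each group separately.
for name, sub in df.groupby("group"):
    auc = roc_auc_score(sub["y_true"], sub["y_score"])
    print(f"group {name}: n={len(sub)}, AUC={auc:.2f}")
```

Here group A scores perfectly while group B lags, exactly the situation a pooled metric would mask.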

Building on these principles, companies in the healthcare industry are actively working toward mitigating AI bias. One such company, PAICON, is developing AI algorithms trained on genetically diverse and globally representative histopathology datasets. They have built a data lake of more than 620 terabytes (TB), consisting of genetically diverse histopathology slides and related datasets, with a particular focus on underrepresented populations. Their mission is to develop AI-assisted cancer diagnostic tools that can support oncologists and cancer researchers in delivering equitable care to patients from diverse backgrounds.

The following sections will examine PAICON’s comprehensive approach to reducing bias in AI-powered cancer diagnostics. This includes the use of diverse and globally representative datasets, the development of a foundational model and AI pipeline to ensure high-quality and consistent data, and the application of tools like SatSightDX for efficient and equitable cancer diagnostics. Additionally, these sections will address the challenges faced in developing unbiased AI systems, such as data collection and harmonization, and explore practical solutions to overcome these obstacles and achieve equitable healthcare outcomes.

Addressing Bias in AI-Powered Cancer Care: Real-Life AI Applications

The successful integration of AI into cancer care relies on building models that are not only accurate but also fair and capable of serving diverse populations (Corti et al., 2023). To meet this challenge, efforts are being made to diversify AI training data and create more inclusive healthcare solutions (Nature Medicine, 2023). PAICON is a notable example of a company actively working to address AI bias by developing innovative, data-driven cancer care models (PAICON, personal communication, 2024).

A key part of PAICON’s approach is training models on datasets that are both genetically diverse and globally representative. They achieve this by collecting medical data from multiple countries, including those in hard-to-reach regions, through partnerships with healthcare companies, NGOs, and local medical centers. This extensive collaboration allows PAICON to gather rich, real-world datasets that form the foundation for its AI models (PAICON, personal communication, 2024). By prioritizing diversity and using detailed clinical annotations, PAICON’s models aim to reduce bias, improve inclusivity, and deliver faster and more accurate diagnostic tools to meet clinical needs (PAICON, personal communication, 2024).

In addition to improving diagnostics, PAICON focuses on identifying new therapeutic targets and incorporating AI into clinical practice as Software as a Medical Device (SaMD). Their algorithms are developed to meet ISO 13485 standards and are working toward CE / FDA certification, ensuring that these tools are ready for clinical use and can help bring equitable cancer care to patients worldwide (PAICON, personal communication, 2024).

PAICON’s Foundational Model and AI Pipeline

PAICON’s Foundational Model (still in development and not publicly accessible) represents a critical advancement in addressing bias within AI-driven cancer diagnostics. This model is built on an extensive dataset comprising over 620 TB of medical data, including 240,000 cases and 1.4 million histopathology slides collected from 33 countries. The dataset is notable for its genetic diversity and technical variability, incorporating more than 60 primary cancer types, 180 staining protocols, and data scanned using eight different devices (PAICON, personal communication, 2024). Such diversity is critical for developing AI systems that are accurate and applicable across a wide range of patient populations.

The model development involves a structured process: data is systematically curated to filter out irrelevant or low-quality inputs, annotated by experienced pathologists to ensure reliability, and then used to train the AI model (see Figure 3; PAICON, personal communication, 2024). The foundational model supports key clinical applications, including tissue segmentation, cell classification, molecular feature prediction, diagnostic assistance, and identifying potential therapeutic targets (PAICON, personal communication, 2024).

Figure 3. Workflow of PAICON’s foundational model, detailing stages from data curation to clinical applications. Source: PAICON, personal communication, 2024.

 

PAICON’s AI pipeline is a carefully designed process that transforms clinical data from diverse sources into high-quality datasets for building reliable AI models for cancer diagnostics. It starts with gathering the input data, pathology reports from clinics and practices worldwide, which often arrive in a variety of formats (PAICON, personal communication, 2024). Once the data gathering stage is complete, the data is curated and standardized, making it ready for further analysis. During the data curation phase, automated tools refine and organize the data, removing inconsistencies and duplicate entries. At this stage, the data is also categorized by cancer type, stage, and other key features, creating structured datasets tailored for specific diagnostic and research purposes (PAICON, personal communication, 2024).

After the data curation phase, the data is harmonized, meaning information from different sources is standardized to maintain consistency in format and quality. This step ensures that all the data can be seamlessly integrated into AI training, helping to develop reliable models while reducing errors and biases. So far, PAICON’s pipeline has processed over 35,000 clinical reports, which has significantly improved the organization of patient data into well-defined study cohorts. These cohorts, grouped by specific criteria such as cancer type or genetic profile, are essential for building unbiased AI tools in cancer diagnostics (PAICON, personal communication, 2024).
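The curation and harmonization steps described above can be pictured with a small tabular sketch. The field names and mappings below are hypothetical stand-ins, not PAICON’s actual schema or pipeline code:

```python
import pandas as pd

# Hypothetical report metadata arriving in inconsistent formats.
raw = pd.DataFrame({
    "patient_id":  ["p1", "p1", "p2", "p3"],
    "cancer_type": ["Colorectal", "colorectal", "CRC", "Lung adeno"],
    "stage":       ["II", "II", "iii", "IB"],
})

# Harmonization: map free-text variants onto a controlled vocabulary
# and a consistent format.
cancer_map = {"colorectal": "colorectal", "crc": "colorectal",
              "lung adeno": "lung_adenocarcinoma"}
harmonized = raw.assign(
    cancer_type=raw["cancer_type"].str.lower().map(cancer_map),
    stage=raw["stage"].str.upper(),
)

# Curation: drop duplicates (now detectable after harmonization) and
# rows missing key fields.
curated = harmonized.drop_duplicates().dropna(subset=["cancer_type", "stage"])

# Cohort building: group cases by harmonized criteria.
print(curated.groupby(["cancer_type", "stage"]).size())
```

Note that the two "p1" records only become recognizable duplicates after harmonization, which is why standardization precedes deduplication in this sketch.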

Building on the concept of foundational models and the AI pipeline, PAICON has developed an AI tool named SatSightDX, a diagnostic tool aimed at improving the classification of Microsatellite Instability (MSI) and Microsatellite Stability (MSS) in colorectal cancer (PAICON, personal communication, 2024). The tool aims to assist pathologists in detecting MSI/MSS status more quickly and at lower cost without sacrificing accuracy, addressing a critical need for equality and accessibility in cancer diagnostics and treatment planning.

SatSightDX: PAICON’s Project on Addressing Bias in Cancer Care

MSI/MSS classification is essential for identifying cancer patients who would benefit from immunotherapy or chemotherapy, respectively. MSI occurs due to errors in the DNA mismatch repair system, leading to genetic instability. MSI tumors often respond well to immunotherapy, while MSS tumors typically require chemotherapy (Lin et al., 2015).

Traditionally, diagnosing MSI or MSS involves molecular testing such as polymerase chain reaction (PCR) or immunohistochemistry (IHC), which is costly and time-consuming and can take weeks to deliver results (Lin et al., 2015). These delays can negatively affect timely treatment planning, especially for MSI tumors that are more responsive to immunotherapy (Lin et al., 2015). To address these challenges, SatSightDX employs AI algorithms trained on annotated and genetically diverse histopathology slides, offering a faster and more cost-effective alternative. The tool’s training datasets ensure its applicability across diverse patient populations and help reduce the biases often found in traditional AI diagnostic tools (PAICON, personal communication, 2024).

The diagnostic process of SatSightDX begins with the digitization of histology slides, which are processed through PAICON’s AI pipeline. Once digitized, the system applies pre-processing techniques to standardize image quality and format, ensuring that the data used for training is consistent. The AI model then analyzes cellular and tissue morphology to detect key markers associated with MSI or MSS. Using an end-to-end deep learning architecture, the tool identifies patterns and features within the tissue structure to determine whether the tumor exhibits MSI or MSS characteristics (PAICON, personal communication, 2024).

Finally, the tool outputs an MSI score along with detailed diagnostic insights, including tissue segmentation and a confidence score. These results are designed to assist pathologists in making faster and more informed clinical decisions: SatSightDX achieves highly accurate classification, with an average AUC of 92%, while reducing the diagnostic process to just 90 minutes compared to the weeks required by conventional methods (PAICON, personal communication, 2024).
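As a rough illustration of this kind of end-to-end workflow, the sketch below classifies image tiles with a small convolutional network and aggregates tile probabilities into a slide-level MSI score. The architecture, tile size, and mean aggregation are illustrative assumptions only; PAICON’s actual model is not public:

```python
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Toy tile-level classifier; a stand-in for a real WSI model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: P(MSI) after sigmoid

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TileClassifier().eval()
tiles = torch.rand(64, 3, 224, 224)  # stand-in for preprocessed H&E tiles
with torch.no_grad():
    p_msi = torch.sigmoid(model(tiles)).squeeze(1)

# Slide-level score plus a simple confidence proxy from tile agreement.
score = p_msi.mean().item()
confidence = 1.0 - p_msi.std().item()
print(f"MSI score: {score:.3f}, confidence: {confidence:.3f}")
```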

Figure 4. Comparison of traditional MSI testing methods with PAICON’s AI-powered SatSightDX approach, highlighting differences in cost and time efficiency. Source: PAICON, personal communication, 2024.

 

However, while tools like SatSightDX show great potential to improve cancer diagnostics processes, bringing them into everyday clinical use is not without its challenges. Developing and implementing AI models in healthcare is a complicated process, involving technical obstacles, ethical considerations, and regulatory requirements (Esmaeilzadeh, 2024).

Challenges in Achieving Unbiased AI Models

Developing AI models for cancer diagnostics, such as SatSightDX, involves navigating a range of challenges, particularly when working with technically diverse and globally sourced datasets. These challenges span multiple areas, including data collection, standardization, harmonization, and regulatory compliance, each of which presents unique obstacles (PAICON, personal communication, 2024).

A significant challenge lies in collecting data from underrepresented regions. Countries with limited healthcare infrastructure often lack the necessary resources to digitize and share patient data (PAICON, personal communication, 2024). In some cases, economic, political, logistical or regulatory barriers further restrict access to these datasets. Such constraints result in gaps within the training datasets, making it difficult to ensure adequate representation of certain populations in AI models. Consequently, the risk of bias increases, as models may be disproportionately influenced by data from well-represented regions, thereby reducing their reliability and applicability across diverse patient groups (Norori et al., 2021).

During the data collection process, another complexity emerges in determining the appropriate volume of data from each country or region to ensure the AI algorithm is trained to be unbiased. It is not just about collecting data from diverse sources but also deciding how much data from each region is necessary to achieve fair representation. Over-representing certain countries or under-representing others can lead to skewed outcomes, compromising the AI tool’s generalizability across populations (PAICON, personal communication, 2024).
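One simple way to reason about regional balance is inverse-frequency weighting: each case is weighted by the reciprocal of its region’s case count, so every region contributes equally in expectation. The sketch below uses made-up counts and illustrates a generic technique, not PAICON’s stated method:

```python
from collections import Counter

# Hypothetical per-case region labels in a training set.
regions = ["US"] * 400 + ["China"] * 140 + ["Nigeria"] * 20 + ["Peru"] * 10
counts = Counter(regions)

# Inverse-frequency weight per case; usable with a weighted sampler
# (e.g., torch.utils.data.WeightedRandomSampler).
weights = [1.0 / counts[r] for r in regions]

# Each region's total weight sums to 1.0, i.e., an equal expected share.
per_region = {r: sum(w for w, rr in zip(weights, regions) if rr == r)
              for r in counts}
print(per_region)
```

In practice one would temper such weights (for example, raising frequencies to a power below one) rather than equalize regions outright, since a region contributing ten cases cannot carry the same statistical load as one contributing four hundred.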

Even when data is successfully collected, its technical diversity introduces additional complexities. Datasets often arrive in inconsistent formats and may contain irrelevant information, such as clinical notes or administrative details, instead of histopathology images (PAICON, personal communication, 2024). Annotation inconsistencies are also common, with variations in language, file structure, and classification systems depending on the source. In some instances, datasets are not digitized, requiring physical slides to be scanned and processed for AI training. Such technical inconsistencies demand significant time and resources to resolve, further complicating the development of AI models (DeCamp & Lindvall, 2023).

Harmonizing data from different regions and institutions presents another challenge. Variability in equipment and techniques used for generating histopathology slides – such as staining protocols, imaging resolutions, and scanner types – leads to inconsistencies in data quality (PAICON, personal communication, 2024). Addressing these differences requires advanced preprocessing techniques to standardize the data, to make sure that it can be effectively used for training reliable AI models (Celi et al., 2022). Without proper harmonization, these inconsistencies can compromise the accuracy and generalizability of AI diagnostic tools.
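As a simple illustration of appearance harmonization, the sketch below matches a slide image’s per-channel color distribution to a reference slide using scikit-image. Histogram matching is a basic stand-in here for dedicated stain-normalization methods (e.g., Macenko) and is not necessarily what PAICON uses:

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
# Stand-ins for two H&E tiles scanned under different protocols:
# the second has a compressed intensity range.
reference = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
tile = rng.integers(40, 200, size=(256, 256, 3), dtype=np.uint8)

# Match each color channel of `tile` to the reference distribution.
harmonized = match_histograms(tile, reference, channel_axis=-1)
print(harmonized.min(), harmonized.max())  # now spans the reference range
```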

Finally, regulatory differences between countries add another layer of complexity. Each country has its own regulatory framework for sharing patient data and/or approving medical devices, including AI-based tools (PAICON, personal communication, 2024). For instance, the European Union requires CE certification, while other regions follow entirely different validation and approval processes (FDA, 2023). Variations in data-sharing policies, legal requirements, and ethical standards across regions can complicate the process of accessing histology datasets from multiple countries. Furthermore, these discrepancies mean that AI models must often undergo multiple rounds of compliance testing to meet the standards of each region, increasing the time and cost of implementation of AI tools into clinical settings (PAICON, personal communication, 2024).

Addressing the challenges in developing unbiased AI models for cancer diagnostics requires a coordinated effort from multiple stakeholders, including governments, healthcare providers, researchers, and technology companies. Below are recommendations based on current research and expert insights to overcome these obstacles and advance the development of equitable AI tools for clinical use.

Proposed Solutions and Future Directions in Achieving Unbiased AI Models

Addressing the challenges in developing unbiased AI models for cancer diagnostics requires a comprehensive approach that incorporates technical innovation, collaboration, and policy reform. Below are practical recommendations aimed at overcoming these challenges and achieving unbiased AI diagnostic tools.

  1. Adopting Standardized Data Annotation and Sharing Protocols: Inconsistent data annotation across regions and institutions creates significant challenges when trying to combine datasets effectively. Establishing global standards for annotation, such as common labeling formats and metadata structures (see the sketch after this list), could make it much easier to train AI algorithms on data from different sources. Encouraging countries and institutions to adopt shared protocols would help reduce variability in training datasets and improve the reliability of AI models (Norori et al., 2021). Additionally, international agreements on data sharing, backed by strong ethical protections, could promote fair access to diverse datasets, helping to address gaps in representation (Lekadir et al., 2022).

 

  2. Expanding Access to Diverse Datasets: Expanding data collection to include underrepresented populations is critical to reducing gaps in existing datasets. Collaborative initiatives between governments, research institutions, and healthcare providers can facilitate the collection of data in regions with limited healthcare infrastructure. For example, partnerships with local hospitals and clinics can help access data from populations often excluded from AI model training (Celi et al., 2022). Governments could create funding programs to incentivize data digitization and sharing, particularly in resource-limited countries. Gathering proportionally representative datasets from various regions can help create more balanced and unbiased AI models.

 

  3. Reducing Costs and Increasing Access to Technology: High costs associated with data digitization and advanced scanning equipment often limit data collection efforts, particularly in underdeveloped regions. Governments and international organizations should subsidize the cost of histopathology scanners and related technologies. For example, offering reduced-cost equipment or funding infrastructure development could significantly increase participation from low-resource settings. Training local healthcare professionals in data digitization and annotation practices would also enhance the quality of data contributed to global AI efforts (Lekadir et al., 2022).

 

  4. Streamlining Regulatory Processes Across Countries: Different regulatory rules in various countries make it difficult to develop and use AI models in healthcare globally. Simplifying and aligning these regulations could make it easier to share data and validate AI tools for clinical use. Governments could work together with international organizations to create common rules for data sharing and approval processes. This would help companies and researchers bring their AI models into clinical settings more quickly and expand their use to multiple regions (FDA, 2023).
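The sketch below, referenced in point 1, illustrates what a shared annotation record could look like, using a dataclass serialized to JSON. The field names and vocabularies are hypothetical, shown only to make the idea of a common labeling format and metadata structure concrete:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SlideAnnotation:
    """Hypothetical shared schema for one annotated histopathology slide."""
    slide_id: str
    country: str       # ISO 3166-1 alpha-2 code, e.g. "KE"
    cancer_type: str   # controlled vocabulary, e.g. an ICD-O-3 code
    stain: str         # e.g. "H&E"
    scanner: str       # device identifier
    label: str         # e.g. "MSI" or "MSS"
    annotator_id: str  # pseudonymous pathologist ID

record = SlideAnnotation("sl-0001", "KE", "8140/3", "H&E",
                         "scanner-x", "MSI", "path-17")
print(json.dumps(asdict(record), indent=2))  # uniform, shareable metadata
```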

 

Putting these strategies into practice will require collaboration across healthcare, research, and technology sectors. By working to overcome these challenges and implementing these solutions, we can move closer to developing AI-driven cancer diagnostics that are fair, reliable, and beneficial to diverse populations worldwide.

Conclusion

AI has the potential to improve the process of cancer diagnostics by making it faster, more accurate, and more accessible. However, this paper has shown that achieving unbiased and equitable AI tools is a complicated task. Challenges such as gaps in data collection, technical inconsistencies, and regulatory hurdles must be addressed to avoid perpetuating existing healthcare disparities.

To overcome these challenges, actionable steps like standardizing data practices, improving access to diverse datasets, reducing costs, and harmonizing regulatory frameworks are important. Collaboration between governments, research institutions, and private companies will play a vital role in building unbiased diagnostic AI tools and making these advancements possible.

The work being done by PAICON serves as a good example of how AI can bridge the gap between research and practical healthcare applications. By focusing on diverse datasets and developing tools such as SatSightDX, PAICON highlights the importance of equity and precision in reducing healthcare disparities and improving diagnostic accuracy.

The journey toward achieving unbiased AI models for cancer diagnostics will not be without its challenges. However, the potential rewards, such as improved patient outcomes, greater inclusivity, and a more equitable healthcare system, make this journey not only worthwhile but essential for the future of medicine. With continued collaboration and effort, AI can improve cancer care and offer innovative solutions that improve outcomes and promote equity for patients worldwide.

References

Celi, L. A., Cellini, J., Charpignon, M.-L., Dee, E. C., Dernoncourt, F., Eber, R., et al. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digital Health, 1(3), e0000022. https://doi.org/10.1371/journal.pdig.0000022

Corti, C., Cobanaj, M., Dee, E. C., Criscitiello, C., Tolaney, S. M., Celi, L. A., & Curigliano, G. (2023). Artificial intelligence in cancer research and precision medicine: Applications, limitations, and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treatment Reviews, 112, 102498. https://doi.org/10.1016/j.ctrv.2022.102498

Dankwa-Mullan, I., & Weeraratne, D. (2022). Artificial intelligence and machine learning technologies in cancer care: Addressing disparities, bias, and data diversity. Cancer Discovery, 12(6), 1423–1427. https://doi.org/10.1158/2159-8290.CD-22-0373

Esmaeilzadeh, P. (2024). Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine, 151, 102861. https://doi.org/10.1016/j.artmed.2024.102861

FDA. (2023). Good machine learning practice for medical device development: Guiding principles. U.S. Food and Drug Administration. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

Hernandez-Boussard, T., Bozkurt, S., Ioannidis, J. P. A., & Shah, N. H. (2020). MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care. Journal of the American Medical Informatics Association, 27(12), 2011–2015. https://doi.org/10.1093/jamia/ocaa088

Koçak, B., Ponsiglione, A., & Stanzione, A. (2024). Bias in artificial intelligence for medical imaging: Fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagnostic and Interventional Radiology. https://doi.org/10.4274/dir.2024.242854

Lekadir, K., Quaglio, G., Tselioudis Garmendia, A., & Gallin, C. (2022). Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts. Panel for the Future of Science and Technology (STOA), European Parliamentary Research Service (EPRS). https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf

Lin, E. I., Tseng, L. H., Gocke, C. D., Reil, S., Le, D. T., Azad, N. S., & Eshleman, J. R. (2015). Mutational profiling of colorectal cancers with microsatellite instability. Oncotarget, 6(39), 42334–42344. https://doi.org/10.18632/oncotarget.5997

Marcus, A. (2021). Health care AI systems are biased. Scientific American: Health & Medicine, 3(1). https://doi.org/10.1038/scientificamerican022021-7I562QNmh6t0dduWU1DEnh

DeCamp, M., & Lindvall, C. (2023). Mitigating bias in AI at the point of care. Science, 381, 150–152. https://doi.org/10.1126/science.adh2713

Nature Medicine. (2023). Addressing bias in AI healthcare models through diverse datasets. Nature Medicine, 29(4), 1054–1057. https://www.nature.com/articles/s41591-022-01987-w.pdf

Norori, N., Huerta, R., Beach, M., Nelson, M. R., & Peppin, J. (2021). Addressing bias in big data and AI in healthcare: A call to action. Nature Medicine, 27(7), 1091–1093. https://doi.org/10.1038/s41591-021-01307-

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
