AI Can Be Friend or Foe in Improving Health Equity. Here is How to Ensure it Helps, Not Harms | By The Digital Insider

Healthcare inequities and disparities in care are pervasive across socioeconomic, racial and gender divides. As a society, we have a moral, ethical and economic responsibility to close these gaps and ensure consistent, fair and affordable access to healthcare for everyone.

Artificial Intelligence (AI) can help address these disparities, but it is a double-edged sword. Certainly, AI is already helping to streamline care delivery, enable personalized medicine at scale, and support breakthrough discoveries. However, inherent bias in the data, algorithms, and users could worsen the problem if we’re not careful.

That means those of us who develop and deploy AI-driven healthcare solutions must be careful to prevent AI from unintentionally widening existing gaps, and governing bodies and professional associations must play an active role in establishing guardrails to avoid or mitigate bias.

Here is how leveraging AI can bridge inequity gaps instead of widening them.

Achieve equity in clinical trials

Many new drug and treatment trials have historically been biased in their design, whether intentionally or not. For example, it wasn’t until 1993 that women were required by law to be included in NIH-funded clinical research. More recently, COVID vaccines were never intentionally trialed in pregnant women—it was only because some trial participants were unknowingly pregnant at the time of vaccination that we learned the vaccines were safe for them.

A challenge with research is that we do not know what we do not know. Yet AI can help uncover biased datasets by analyzing population data and flagging disproportionate representation or gaps in demographic coverage. By ensuring diverse representation and training AI models on data that accurately represents targeted populations, AI helps ensure inclusiveness, reduce harm and optimize outcomes.
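The representation check described above can be sketched in a few lines. This is a minimal illustration, not a real auditing tool: the group names, cohort counts, population shares, and the 0.8 under-representation threshold are all hypothetical assumptions.

```python
# Hypothetical sketch: flag demographic groups that are under-represented
# in a trial cohort relative to a population benchmark. All numbers below
# are illustrative, not data from any real study.

def representation_gaps(cohort_counts, population_share, threshold=0.8):
    """Return groups whose share of the cohort falls below
    `threshold` times their share of the target population."""
    total = sum(cohort_counts.values())
    gaps = {}
    for group, expected_share in population_share.items():
        observed_share = cohort_counts.get(group, 0) / total
        ratio = observed_share / expected_share
        if ratio < threshold:
            gaps[group] = round(ratio, 2)
    return gaps

# Example: a 1,000-person cohort measured against census-style shares.
cohort = {"white": 720, "black": 80, "hispanic": 120, "asian": 80}
population = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06}

print(representation_gaps(cohort, population))
# → {'black': 0.62, 'hispanic': 0.63}
```

A real audit would add statistical tests for whether the shortfall could be due to chance, but even this simple ratio surfaces the gaps a reviewer should question.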

Ensure equitable treatments

It’s well established that Black expectant mothers who experience pain and complications during childbirth are often ignored, resulting in a maternal mortality rate 3X higher for Black women than non-Hispanic white women regardless of income or education. The problem is largely perpetuated by inherent bias: there’s a pervasive misconception among medical professionals that Black people have a higher pain tolerance than white people.

Bias in AI algorithms can make the problem worse: Harvard researchers discovered that a common algorithm predicted that Black and Latina women were less likely to have successful vaginal births after a C-section (VBAC), which may have led doctors to perform more C-sections on women of color. Yet researchers found that “the association is not supported by biological plausibility,” suggesting that race is “a proxy for other variables that reflect the effect of racism on health.” The algorithm was subsequently updated to exclude race or ethnicity when calculating risk.

This is a perfect application for AI to root out implicit bias and suggest (with evidence) care pathways that may have previously been overlooked. Instead of continuing to practice “standard care,” we can use AI to determine if those best practices are based on the experience of all women or just white women. AI helps ensure our data foundations include the patients who have the most to gain from advancements in healthcare and technology.

While there may be conditions where race and ethnicity could be impactful factors, we must be careful to know how and when they should be considered and when we’re simply defaulting to historical bias to inform our perceptions and AI algorithms.

Provide equitable prevention strategies

AI solutions can easily overlook certain conditions in marginalized communities without careful consideration for potential bias. For example, the Department of Veterans Affairs (VA) is working on multiple algorithms to predict and detect signs of heart disease and heart attacks. This has tremendous life-saving potential, but the majority of the studies have historically not included many women, for whom cardiovascular disease is the number one cause of death. Therefore, it’s unknown whether these models are as effective for women, who often present with markedly different symptoms than men.

Including a proportionate number of women in this dataset could help prevent some of the 3.2 million heart attacks and half a million cardiac-related deaths annually in women through early detection and intervention. Similarly, new AI tools are removing the race-based algorithms in kidney disease screening, which have historically excluded Black, Hispanic and Native Americans, resulting in care delays and poor clinical outcomes.

Instead of excluding marginalized individuals, AI can actually help to forecast health risks for underserved populations and enable personalized risk assessments to better target interventions. The data may already be there; it’s simply a matter of “tuning” the models to determine how race, gender, and other demographic factors affect outcomes—if they do at all.

Streamline administrative tasks

Aside from directly affecting patient outcomes, AI has incredible potential to accelerate workflows behind the scenes to reduce disparities. For example, companies and providers are already using AI to fill in gaps on claims coding and adjudication, validating diagnosis codes against physician notes, and automating pre-authorization processes for common diagnostic procedures.

By streamlining these functions, we can drastically reduce operating costs, help provider offices run more efficiently and give staff more time to spend with patients, thus making care exponentially more affordable and accessible.

We each have an important role to play

The fact that we have these incredible tools at our disposal makes it even more imperative that we use them to root out and overcome healthcare biases. Unfortunately, there is no certifying body in the US that regulates efforts to use AI to “unbias” healthcare delivery, and even for those organizations that have put forth guidelines, there’s no regulatory incentive to comply with them.

Therefore, the onus is on us as AI practitioners, data scientists, algorithm creators and users to develop a conscious strategy to ensure inclusivity, diversity of data, and equitable use of these tools and insights.

To do that, accurate integration and interoperability are essential. With so many data sources—from wearables and third-party lab and imaging providers to primary care, health information exchanges, and inpatient records—we must integrate all of this data so that key pieces are included, regardless of format or source. The industry needs data normalization, standardization and identity matching to be sure essential patient data is included, even with disparate name spellings or naming conventions based on various cultures and languages.
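To make the identity-matching point concrete, here is a minimal sketch of the string-normalization step that lets variant spellings of the same name compare equal. Real matching engines weigh many fields (date of birth, address, identifiers); this hypothetical helper only shows how accents, case, and punctuation can be neutralized.

```python
# Hypothetical sketch of name normalization for patient identity matching.
# Only the string-normalization step is shown; production systems combine
# many demographic fields with probabilistic matching.
import unicodedata

def normalize_name(name):
    """Lowercase, strip accents, and collapse punctuation/whitespace so
    variant spellings compare equal."""
    # Decompose accented characters, then drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Treat hyphens and apostrophes as spaces, then collapse whitespace.
    cleaned = stripped.lower().replace("-", " ").replace("'", " ")
    return " ".join(cleaned.split())

print(normalize_name("José García-Núñez"))   # → jose garcia nunez
print(normalize_name("JOSE  Garcia Nunez"))  # → jose garcia nunez
```

Without a step like this, the same patient recorded as “García” in one system and “Garcia” in another becomes two incomplete records—exactly the kind of fragmentation that disproportionately affects patients whose names don’t fit Anglophone conventions.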

We must also build diversity assessments into our AI development process and monitor for “drift” in our metrics over time. AI practitioners have a responsibility to test model performance across demographic subgroups, conduct bias audits, and understand how the model makes decisions. We may have to go beyond race-based assumptions to ensure our analysis represents the population we’re building it for. For example, members of the Pima Indian tribe who live in the Gila River Reservation in Arizona have extremely high rates of obesity and Type 2 diabetes, while members of the same tribe who live just across the border in the Sierra Madre mountains of Mexico have starkly lower rates of obesity and diabetes, proving that genetics aren’t the only factor.
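The subgroup testing described above can be sketched as a simple disaggregated audit: compute a performance metric per demographic group and flag groups that fall too far behind the best-performing one. The record structure, group labels, and the 0.10 tolerance here are illustrative assumptions, not a standard.

```python
# Hypothetical sketch of a subgroup bias audit: compare a model's
# true-positive rate across demographic groups and flag outsized gaps.
# Field names and the 0.10 tolerance are illustrative choices.

def true_positive_rate(records):
    """Fraction of actual positives the model correctly flagged."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None
    hits = sum(1 for r in positives if r["pred"] == 1)
    return hits / len(positives)

def audit_by_group(records, group_key="group", tolerance=0.10):
    """Return per-group TPR and the groups falling more than
    `tolerance` below the best-performing group."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    tpr = {g: true_positive_rate(rs) for g, rs in groups.items()}
    best = max(v for v in tpr.values() if v is not None)
    flagged = [g for g, v in tpr.items()
               if v is not None and best - v > tolerance]
    return tpr, flagged

# Toy example: the model catches 3 of 4 positives in group A
# but only 1 of 4 in group B.
records = (
    [{"group": "A", "label": 1, "pred": p} for p in (1, 1, 1, 0)]
    + [{"group": "B", "label": 1, "pred": p} for p in (1, 0, 0, 0)]
)
rates, flagged = audit_by_group(records)
print(rates, flagged)   # → {'A': 0.75, 'B': 0.25} ['B']
```

Running the same audit on each retraining cycle, and comparing the per-group numbers over time, is one straightforward way to operationalize the “drift” monitoring mentioned above.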

Finally, we need organizations like the American Medical Association, the Office of the National Coordinator for Health Information Technology, and specialty organizations like the American College of Obstetrics and Gynecology, American Academy of Pediatrics, American College of Cardiology, and many others to work together to set standards and frameworks for data exchange and acuity to guard against bias.

By standardizing the sharing of health data and expanding on HTI-1 and HTI-2 to require developers to work with accrediting bodies, we help ensure compliance and correct for past errors of inequity. Further, by democratizing access to complete, accurate patient data, we can remove the blinders that have perpetuated bias and use AI to resolve care disparities through more comprehensive, objective insights.


Published on The Digital Insider at https://is.gd/15F5SF.