ChilCast: Healthcare Tech Talks

Bias in AI: Illuminating the Unseen with Dr. Tania Martin-Mercado

April 28, 2022 | Chilmark Research

On the latest episode of ChilCast: Healthcare Tech Talks, we are delighted to feature an interview with Dr. Tania Martin-Mercado, an expert in clinical research, biotech, and public health. She joins Chilmark Senior Analyst Jody Ranck for a discussion on the issue of bias in AI: how it harms patients individually and our culture as a whole, how to add diversity to data teams in meaningful and authentic ways, and what organizations can do to tackle this issue as more algorithms are deployed for care delivery.

Dr. Tania: [00:00:00] And unfortunately, the use of race is still commonplace when designing clinical algorithms — I don't mean Health and Human Services or outreach programs, I mean specific to clinical decision making — race is still a factor, even though it is widely known (this is not new information among scientific and research communities) that race is not a biological issue; it is a social issue. We seem to know this in science and research and in health care, and yet we still continue to create algorithms and mathematical models based on race as a factor, and that causes harm.

 

Jody Ranck: [00:00:49] Welcome to the Chilmark Research podcast. I'm Jody Ranck, and I'll be hosting today's podcast with Dr. Tania Martin-Mercado, who we'll refer to as Dr. Tania. I met her recently at HIMSS22, where she gave an excellent presentation on implicit bias in AI, so I thought it would be great to interview her for the listeners of the Chilmark Research podcast. And so today, the topic of our discussion is bias in AI. Welcome, Dr. Tania; I'm happy to have you here today. Why don't we begin with you providing a bit of an introduction to your background, your clinical research, and how you came to work on bias in AI.

 

Dr. Tania: [00:01:32] Absolutely. It's a pleasure to be here. Thank you so much for inviting me. I have been in clinical research for the past several years now (I kind of stumbled into it), I would say since maybe 2013, 2014, officially. And I got into this area of research because my mom passed away from breast cancer. We don't have a history of breast cancer in my family, and I have taken every genetic test for breast cancer and I don't have the biomarker for it. So it just raised more questions than it answered. And I decided, I wonder if other people have these questions. It turns out several did. And that's also how I uncovered the amount of inequities and lack of inclusion and diversity in clinical research and clinical studies for marginalized populations. So I just kind of stumbled into it organically, if that's a way to say it.

 

Jody Ranck: [00:02:25] And so right now, in your work, as I recall, you are an advisor to Microsoft?

 

Dr. Tania: [00:02:33] Yes, I am a clinical researcher and digital advisor at Microsoft. And I also have my own companies: Phronetik, which is a biotechnology company, and YGEIA, which is a home health care company. So that's what I do in my day job.

 

Jody Ranck: [00:02:49] It's very interesting. So you're able to see multiple uses of AI and algorithms in different types of clinical and non-clinical settings, and have a good perspective on the range of issues we might encounter these days. So why don't we begin with just talking a bit about what implicit bias is? Because we know there are lots of different forms of bias when we do data science and clinical research and so forth. And I thought your talk at HIMSS was quite good at framing this implicit bias and then, from there, setting up some really interesting examples. So maybe we could begin with just talking a bit about what implicit bias is.

 

Dr. Tania: [00:03:34] Implicit bias, by the Webster definition, is the form of bias that occurs automatically. It's not intentional; there's no forethought to it. But it does affect our judgments, our decisions, and our behaviors. And it's immediate. There's no explicit information; it's almost a knee-jerk reaction, a natural instinct. As a matter of fact, it could also be considered a survival instinct, if you want to look at it in terms of nature. If you remember, in my HIMSS talk I used the example of the large cat chasing the deer. It was a tiger. The deer isn't sitting there for a moment wondering whether or not the tiger has good intentions. It just has an alert system that says: run, that's a predator. So that's implicit bias.

 

Jody Ranck: [00:04:21] And how does that play out in clinical medicine or clinical research when it comes to issues around race, gender, and so forth, in the data we might collect or how we interpret that data?

 

Dr. Tania: [00:04:35] I'll separate that between technology and human responses, right? Those are two different approaches to the question. In everyday life we are constantly taking in images, experiences, and perceptions of situations, and that affects implicit bias. We're basically creating a knowledge base of experiences, perceptions, how we react, and how we feel in the moment, and that knowledge base will subconsciously or unconsciously affect our bias, right? So I use the example: think of Walmart, and as soon as I say Walmart, what are the images? Who are the people shopping there? What is being sold there, and so on. Versus when I say Nordstrom: who are the people, the products, the thoughts that come to mind about who may or may not be entering those stores? And those are based on marketing, advertising, and, if you've been to either one of those stores, what you personally observed and how you felt when you were in there. So all of that has to do with the human part of implicit bias: you're collecting this data internally and applying it to everyday situations. Now take that into health care. Maybe you had an encounter with a member of society that is different from you, and you are a physician, and a similar patient walks into your office. You're going to immediately have judgments and behaviors based on your last interaction with whatever was familiar. Whoever walks into that office, if they have similar characteristics, you're going to immediately form an opinion. So that's more the human side of it. Now, the technical side is when we take those human interactions and human perceptions and apply them to code. So let me know if that answers the question.

 

Jody Ranck: [00:06:20] Yeah, that's very important, because I think a lot of people think about bias and they think, oh, there's that overt example of racism or institutional racism that's producing the bias, whether in how the data is collected and so forth, and then that impacts the treatment. But you're pointing to something far more subtle that virtually anyone can have, and, given how algorithms can amplify things, it can end up every bit as insidious, and as harmful, as other forms of bias that we encounter in our everyday work. To get to some of these clinical examples: you provided a list (I don't know whether the word is impressive or really startling) of examples that you've seen just this year of clinical decision support, risk stratification, and different use cases of clinical algorithms currently in use that have substantial bias in one way or another. I wonder if you could give our listeners an overview of some of those, because it was quite shocking when I heard the list, and from such a short, small window of time.

 

Dr. Tania: [00:07:39] So if you are listening, settle back with some tea. I'll start off with cardiology, and keep in mind these are real examples from 2022. This isn't from the early seventies or eighties. This is right now.

 

Dr. Tania: [00:07:54] Today, in the cardiology space, the American Heart Association has a guideline. It's a heart failure risk score, and it predicts in-hospital mortality in patients that have acute heart failure. Clinicians are advised to use this risk stratification to guide their decisions when it comes to initiating any type of medical therapy if you present with any type of acute heart issue. And the use of race in this adds three points to that risk score. So think of this as a mathematical model: it adds three points to the risk score if the patient is identified as non-black, and this addition increases the predicted probability of death automatically. This is done mathematically; this isn't a physician in the office making the calculation manually. This is a tool, and it predicts a higher mortality if you add those three points. The concern here with equity is that you have to show as sicker to meet that threshold and get the same resources. So if you're black, you will appear as a lower risk with the exact same symptoms, with the exact same input variables: your blood pressure, sodium level, your age, your heart rate, any history of COPD. But as soon as they add that black or non-black variable, if you are non-black, you add three points, right? Which means you need to be sicker if you are black to get the same course of treatment as someone who has a little check mark as being non-black. So that's cardiology. Another one is cardiac surgery.
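
To make the mechanics concrete, here is a minimal sketch of how an additive race term shifts a risk score. This is not the AHA's published implementation: the non-race inputs, thresholds, and point weights below are illustrative placeholders, and the only detail taken from the episode is the three points added for a patient recorded as non-black.

```python
# Illustrative sketch of an additive race term in a clinical risk score.
# NOT the AHA's published model; all weights except the race adjustment
# described in the episode are placeholders.

def heart_failure_risk_score(systolic_bp: float, serum_sodium: float,
                             age: int, heart_rate: int, has_copd: bool,
                             is_black: bool) -> int:
    score = 0
    # Placeholder clinical contributions.
    if systolic_bp < 110: score += 5
    if serum_sodium < 135: score += 3
    if age >= 75: score += 4
    if heart_rate > 100: score += 2
    if has_copd: score += 2
    # The adjustment described in the episode: +3 points if non-black,
    # so a black patient with identical inputs appears lower risk.
    if not is_black:
        score += 3
    return score

# Two patients with identical clinical presentations:
same_inputs = dict(systolic_bp=100, serum_sodium=130, age=78,
                   heart_rate=105, has_copd=True)
print(heart_failure_risk_score(**same_inputs, is_black=False))  # higher score
print(heart_failure_risk_score(**same_inputs, is_black=True))   # 3 points lower
```

The point the sketch makes visible: nothing clinical changes between the two calls, yet the threshold a black patient must cross to receive the same therapy is effectively three points higher.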

 

Dr. Tania: [00:09:38] This is a short-term risk calculator that determines the risk of complications and death with common cardiac surgeries. It's for operative mortality: will you or will you not have a higher likelihood of complications or death under the knife if you're going in for cardiac surgery? If the patient is identified as black, you have a higher risk score, in some cases by as much as 20%. By the way, in this tool the default setting is white, so you need to explicitly select black, African American, Hispanic, Latino, Asian, Pacific Islander, and so on. And it will add an immediate 20% when you are an ethnic minority. It's the same situation: you have to be sicker, and you'll be deemed as higher risk if you present as black or any other ethnic minority, which gives you a higher risk score than someone who is non-black. So the clinician may guide you away from a surgical procedure and recommend a different course of treatment, which may or may not be in the best interest of that patient if all they're going on is race. So those are the two heart-related examples I have. When we move on to nephrology: I think we've all, if you're in this space, heard of the race correction for the eGFR in the kidneys, which measures the filtration rate on the basis of serum creatinine, right? And this equation yields an eGFR higher by a factor of 21% if the patient is identified as black. And it's the same thing: you have to present as sicker to get the same course of treatment as someone who identifies as non-black.
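
The cardiac surgery example follows a multiplicative pattern rather than an additive one. A sketch under the same caveat: the actual surgical risk models are proprietary regressions, not this simple formula; this only illustrates how a race multiplier of up to 20%, applied on top of a white default, changes an operative-mortality estimate.

```python
# Illustrative only: real surgical risk calculators are fitted regression
# models. This shows the multiplicative pattern described above, where
# selecting a non-white race/ethnicity inflates the estimate by up to ~20%.

def operative_mortality_estimate(baseline_risk: float,
                                 race: str = "white") -> float:
    """baseline_risk: predicted mortality from clinical inputs alone."""
    # Default is white (no adjustment), mirroring the tool's default setting.
    race_multiplier = 1.0 if race == "white" else 1.20
    return baseline_risk * race_multiplier

risk = 0.05  # 5% predicted operative mortality from clinical inputs
print(operative_mortality_estimate(risk))                # 0.05
print(operative_mortality_estimate(risk, race="black"))  # 0.06: deemed higher risk
```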

 

Jody Ranck: [00:11:28] Sorry, isn't it also used in the queue for kidney transplants as well? So you could be pushed down the waitlist even further if you're black?

 

Dr. Tania: [00:11:39] Yes.

 

Jody Ranck: [00:11:40] Because of that.

 

Dr. Tania: [00:11:41] There is an example of a gentleman who's biracial. His name is Jordan Crowley; you can Google his name, because there have been several articles on this. He's biracial: he has one black grandparent and three white ones. The eGFR for him depends on whether or not you select him as white, in which case his eGFR would be 17, low enough to give him a spot on the organ transplant list. But if you select him as black, it gives him an eGFR of 21, and he is not eligible to be on the organ transplant list. And his physicians decided that he was black, which means he doesn't qualify, so now he has to wait longer to be on that organ transplant list. That's the example you're referring to. And again, this is right now; this isn't a long time ago.
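
For the arithmetic behind the Crowley case: the episode doesn't name the exact equation his clinicians used, but the published MDRD study equation carries a 1.212 multiplier for patients recorded as black, which matches the roughly 21% adjustment described above. A minimal sketch:

```python
# Sketch of the race "correction" in eGFR. Assumption: these are the
# published MDRD study coefficients; the episode does not name the
# specific equation used in Jordan Crowley's case.

def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: int,
              is_female: bool, recorded_as_black: bool) -> float:
    egfr = 175 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if is_female:
        egfr *= 0.742
    if recorded_as_black:
        egfr *= 1.212  # the race multiplier at issue
    return egfr

# The Crowley numbers from the episode: identical labs, different checkbox.
egfr_as_white = 17.0
egfr_as_black = egfr_as_white * 1.212
print(round(egfr_as_black, 1))  # 20.6 -- reported as 21, above the listing cutoff
```

Nothing about the patient's kidneys changes between the two numbers; only the race checkbox moves him from transplant-eligible to ineligible.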

 

Jody Ranck: [00:12:35] And I'll be providing links to some of the articles in the blog post that accompanies the podcast. I believe these were listed in a New England Journal of Medicine piece about two years ago, so I'll be providing the links to some articles on all of these. And speaking of that article, one of its more fascinating aspects, and I think what you're speaking to, which maybe we should address directly, is the biological versus the social construction of race. It's sort of behind the thinking, and even the clinical research, like the study behind the eGFR algorithm from 1999. Do you want to talk a bit about this distinction between the social construction of race on the one hand, and genomics and difference on the other, and how people are deploying these concepts quite erroneously, quite often?

 

Dr. Tania: [00:13:31] And clinically, yes. It is extremely important to understand, regardless of what industry you're in, to be honest, that race is a social construct. There is no black gene, there is no white gene, there is no Asian gene. When it comes to race, we differ by less than half a percent between the races. Less than half a percent. So there's no gene to indicate race; that is purely a social construct. And it's also not a reliable measure of genetic differences. So when you start to think about race and you apply it to disease progression, you have created a problem that doesn't need to be there. These are two completely separate conversations, and a lot of what I focus on in the talk is this: when you're making clinical decisions about courses of treatment, specific to treating the symptoms, the conditions, the issues that the patient is presenting with in your office, and you apply race to that, you're applying a social construct to a disease, not the specifics of the disease. Those two things need to stop being done in parallel. And unfortunately, the use of race is still commonplace when designing clinical algorithms. I don't mean Health and Human Services or outreach programs; I mean specific to clinical decision making. Race is still a factor, even though it is widely known (this is not new information among scientific and research communities) that race is not a biological issue; it is a social issue. We seem to know this in science and research and in health care, and yet we still continue to create algorithms and mathematical models based on race as a factor, and that causes harm. At the end of the day, that is what you are doing and nothing else: you are causing harm to the patient in the long term. And one of the best examples of that is, unfortunately, Jordan Crowley; again, I mention that because it's a widely referenced instance of this.

 

Jody Ranck: [00:15:37] And I think this is one of the fears about the use of AI and clinical decision support: these things can get built into a model or an algorithm. Sometimes we're seeing varying degrees of transparency, going back to the whole explainability in AI issue, which we could talk about some other time. But the scaling up of usage of these things does have the potential to cause great harm if we don't leave out the bias. And I think that's why in the last year or so, especially as AI has started to get adopted a bit more in health care, it's raising all sorts of alarms, and there are a lot of issues around trust. This is one of the subthemes under trust that really needs to be addressed. And so, I think when people hear this, they often are overwhelmed: what do we do, and so forth. But we do know there are a lot of things we can do to address this issue and modify algorithms and clinical decision support tools or models and so forth. So I thought, let's begin talking about what can be done about the problem. And we have several different levels at which we can attack it: there's the human level of thinking critically about the clinical research, the framing of it, the collecting of data and so forth, and then institutions and teams and how they work and so forth. So where would you like to begin? I think you had a nice process during your HIMSS talk, so you can kind of decide where to begin. Maybe we could walk through a couple of the stages or levels of critical thinking and work that needs to be done, and then at the end talk about what we do as a broader industry, as the health care sector or digital health. But to begin with clinical research practice: if you're building a clinical decision support tool or some sort of AI model, what should we do as a team, as a developer, to begin to address this?

 

Dr. Tania: [00:17:51] You bring up some really good points. So I'm going to point out one thing you said: people say it's such a big problem, and what can I do as an individual or a clinician or health care provider in any capacity? It can be overwhelming. So to that point: look within your organizations. As you begin to say, we want to use a new tool, or we want to develop a new tool: who's going to look at the data? Hiring a diverse data team is incredibly important, and I think anyone listening today can take a moment and look at not just your IT department but, if you have a data science team, the diverse makeup of that data science team. Are there people on that team that can personally connect to that data, that may have a differing opinion because they're literally different from a homogeneous environment, right? You want to have individuals on your team that resonate with the data being collected and analyzed. I can't express that enough. That is an extremely important way to be very intentional about problem solving for the community you want to solve for, and to have members of that community be involved in the process. That's the first thing I would say. The next thing is: there are tools out there that can be used to reduce bias, and I have a list of those tools. I'll mention a few of them now. There's the What-If Tool, which was released by Google as part of their People + AI initiative.

 

Dr. Tania: [00:19:20] There's also AI Fairness 360. That's an open source toolkit of several metrics that will look for unwanted bias in a particular data set or machine learning model. There's another one by Oracle called Skater; that's a Python library. These look into a black-box model to detect bias and understand how the tool is making a prediction: how it got to that conclusion, and whether there was bias in that process. There are many more, and this is not an exhaustive list, but you have to be intentional about searching for those tools; it has to be a very focused effort. Another thing I like to focus on, alongside hiring diverse data teams: take that diverse data team and, as you're thinking about algorithmic design and the problem you want to solve for, use a pre-mortem template. So actually decide to conduct a pre-mortem as part of your project plan, and then look to see whether the algorithms you are creating are going to adversely impact the population you're trying to help, right? And there's an actual template that Brookings came up with that is available and can be used. If you have a data team, maybe they can digitize that template; that would be outstanding. It really helps these teams ask certain questions, such as: who is the audience for the algorithm? Who are you building the algorithm for, and who will be most affected by it?

 

Dr. Tania: [00:20:53] How will potential bias be detected? Who is going to do the testing? And then, this is where having a diverse data team becomes more important: who are the targets for the testing? Do you have a diverse population to test among? And then, where is the threshold for correction? Are we addressing marginalized groups, and how are we measuring any correction in the bias that is found? And who is looking at that audit, so to speak? So this is just a very quick overview of a sample pre-mortem and the discussion around it. And remove the angst from having these discussions if possible. You want to have a culture of inclusion, so that people can bring up bias without feeling like they're going to face backlash or be negatively viewed on the team they're a part of. You want to create a culture that says: yes, we're going to be inclusive; yes, we want to make sure everyone has a voice; we're not just taking the mic and speaking for them, but we can pass the mic and let people speak and be heard. Again, these are just a few; I don't want to overwhelm the audience. But these are a few ways, when you're designing an algorithm or a tool that leverages artificial intelligence and machine learning, to make sure you have a data team that represents the population you are targeting.
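
Of the toolkits Dr. Tania mentions above, AI Fairness 360 lends itself to a short example. A minimal sketch of a dataset audit, assuming `pip install aif360`: the column names and the tiny synthetic table here are hypothetical, while the dataset and metric classes are AIF360's documented API.

```python
# Minimal audit sketch with AI Fairness 360. The "treated"/"race" columns
# and the synthetic data are hypothetical stand-ins for a real audit table.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical table: 1 = received the intervention; race encoded 0/1,
# with 1 treated as the privileged group for this audit.
df = pd.DataFrame({
    "treated": [1, 1, 0, 1, 0, 0, 1, 0],
    "race":    [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["treated"], protected_attribute_names=["race"],
    favorable_label=1, unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)
# Ratio of favorable-outcome rates (1.0 = parity; < 0.8 is a common red flag).
print("disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates (0.0 = parity).
print("statistical parity difference:", metric.statistical_parity_difference())
```

The same metric classes can be pointed at a model's predictions rather than the raw labels, which is how a pre-mortem's "how will potential bias be detected, and who tests it" questions get an operational answer.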

 

Jody Ranck: [00:22:15] I remember in one of our earlier discussions, preparing for today's podcast, when you were talking about the team, you also made what I thought was a really good point around not necessarily data governance but AI governance: within the organization itself, who has the responsibility to really call things out and say, stop, we have to start from scratch again because of what we've seen? Really empowering someone to oversee this. And I think you mentioned how it shouldn't be the HR department, but then where should it be? Can you maybe talk a bit more about that as well? Because I thought that was also an important discussion. There's one aspect of having diversity on the team, but then there's also power within the organization, right?

 

Dr. Tania: [00:23:08] And power and authority, right? You don't just want someone that has the title but doesn't have the authority to actually initiate change in the organization. And, no offense to anyone that works in HR, but if diversity and inclusion falls under HR (again, I don't know everything about HR, but it's a very legalese type of department, in my experience), you want someone to step outside of that so they can have authentic conversations, right? They need to have the power and the authority to say we need to change X, Y, and Z, and this is how we're going to do it, and it can actually get done. Same thing on your data team. If you are not empowering the people that are creating the tools, the people protecting patients, protecting populations, getting people involved in clinical studies; if you're not empowering these people to make change and giving them the authority to drive action, then it's performative. You just have a checkbox that says we have a D&I, a diversity and inclusion, department, and we have someone who looks diverse leading it, but they actually have no power to change anything. But we have checked the box, right? And so we want to move away from performative actions that just say, well, we had someone on the team, but we're actually not doing anything to change.

 

Dr. Tania: [00:24:28] You want action. And I'm trying to have these conversations in a way that says: yes, we're identifying a problem, but here are some actions that can be taken. And give those actions to individuals and organizations that have the power to initiate that change, not just talk about it, not just create a great PowerPoint presentation, because then nothing gets done. And I think that's why we've seen these same aims for health equity, reducing gaps in care, and so on and so forth; we've seen these themes repeated since the early 2000s, if not earlier. And there's still an even larger gap in care in the United States today. There's still an even larger amount of health inequity in this country today, despite awareness, despite people knowing it's there, despite people understanding in some ways how it got there and what we need to do. But the process of change is intentional, and it's not free. It's amazing to me how many individuals in organizations think that you can just say it and all of a sudden it's done. It's going to cost money to redo your organization, make that cultural change, change the systems, empower people.

 

Dr. Tania: [00:25:44] Maybe you need to switch up your leadership team. If your board is comprised of mostly white men, maybe you need to think about that. If your computer science department or your IT department is comprised of mostly white men and you are in a health care environment or a clinical research environment, you need to consider why that is, who it is benefiting, and whether that is part of your mission. If your mission states that we are going to be more inclusive, more diverse, more intentional: how are you doing that? What are the action plans? What is the pay? Is it really equitable? And I think a lot of people find that this type of work, particularly in IT, is not attractive, because it is time consuming and it is hard. But if you want to change patient outcomes, if you want to seriously move the needle in reducing gaps in care and removing health inequity, it's going to take work. It's going to take time. It's going to take policy change. It's going to take conversations that may be uncomfortable, and you need to lean into that instead of pushing away from it because it's hard.

 

Jody Ranck: [00:26:44] Putting the accountability really into accountable care.

 

Dr. Tania: [00:26:48] Exactly. Exactly.

 

Jody Ranck: [00:26:50] And I think the one major point you made there, about how we've been talking about it for so long but really haven't seen enough impact and results from it in outcomes and so forth, is really raising the issue of ethics-washing, which we see a lot of, and not just in health care. We see a lot of especially younger data scientists, maybe in PhD programs now or just out; if you watch data science Twitter, there are very robust conversations these days on ethics-washing, how those that came before haven't really achieved that much in their minds, and what needs to be done and so forth. I don't know what you think about some of that talk, but I think it's really important to have that conversation, because you can see a lot of people coming to this issue from maybe a compliance mindset, where we're going to check some boxes to do the right thing. But if you do it the wrong way, your reputation and trust in your products could take a serious hit, and it should take a serious hit. As the market matures and we begin seeing things like liability insurance, you may not be able to get that kind of insurance at a rate that makes your models economical for end users and so forth. So I think we should even be talking about this in terms of creating good AI that has an impact. We're in the era of value-based care, right?

 

Dr. Tania: [00:28:25] Right.

 

Jody Ranck: [00:28:26] It has to have impact. So moving beyond compliance, beyond just checklisting compliance talk, to: this is how you do good data science that translates into products, right?

 

Dr. Tania: [00:28:39] Yes. I like the way you just said the younger generation is becoming more aware. I am so excited for these new healthcare professionals and clinical researchers that are currently in school today, who will essentially be the changemakers of tomorrow. I am so excited, because they are more aware, they're more culturally sensitive, they're not afraid. I think fear has a lot to do with the lack of movement; people become paralyzed by it. There's just less of the taboo around saying, you know, I don't think that's right; I don't think we're doing it right. I've seen more of those conversations become acceptable in younger generations than in older ones. And by generation, I mean generations of health care providers, generations of people in power: someone who's been doing this sixty-something years versus someone that may have just graduated yesterday and is just starting out.

 

Jody Ranck: [00:29:36] Yeah. And there's exposure to that type of thinking, this sort of nexus of social science and maybe the humanities, injected into how you do data science or epidemiology and so forth.

 

Dr. Tania: [00:29:50] And actually, that's a good point. We may not realize that a lot of this bias is taught. I mean, it was taught, it was acceptable. And I'm not trying to say that people in the past were intentionally doing wrong; I think the intent was to be helpful. But now we know more. Now we have access to more information. Now we know more about the human body. Now we know more about several different areas. We know how fast technology can move and change and update our information. And you should be able to adapt and change when new information is presented, without getting emotionally attached to the outcome. That is a personal opinion, but I think with some of that, people get very tied to their beliefs: well, I believe that I was doing the right thing, and I believe that if you're black, you really shouldn't be doing X, Y, and Z. They genuinely believe this. I've had these conversations, and when new information is presented, like, no, scientifically that makes no sense, it is scary for some people to recognize that and deal with it. So I think we need to accept the fact that there is a portion of the population, in power, that fears change, that fears having new information and maybe having to reevaluate their belief system, or maybe having to reevaluate what they thought they were doing when it came to prescribing treatments. That is something I think about when we're talking about creating teams and hiring for diversity. I also mean diversity of age; I don't just mean colors of the rainbow. I mean age, gender, immigration status, socioeconomic status, cultural affiliation, religious affiliation, you name it. Diverse in all aspects.

 

Jody Ranck: [00:31:39] Disciplinary as well, right?

 

Dr. Tania: [00:31:40] Yes. Yes.

 

Jody Ranck: [00:31:41] People from the so-called soft sciences. I'm one of those.

 

Dr. Tania: [00:31:45] Yes, exactly.

 

Jody Ranck: [00:31:48] Anthropologists are often the last ones included in these things, and they have a lot to say about race and gender and all of these things. In my mind, they should also be a valued part of data science teams.

 

Dr. Tania: [00:32:01] You're absolutely right. You're absolutely right. And, you know, maybe you call these people in at certain parts of your project: hey, how did you feel when you experienced this? We're trying to improve patient engagement; how did this make you feel when you did this particular part of this digital process? Those things matter: those small focus groups, pulling people in at different parts of the project.

 

Jody Ranck: [00:32:24] Yes, the small data versus big data question in data science. And I think that's an incredibly important point. I'd like to now extend some of our conversation: we've discussed what goes on within the boundaries or borders of the firm, but we know we're out in communities as well, talking more about social determinants of health and data collection and so forth. And if you go back to data science Twitter and so forth, you see a lot of discussion there about data colonialism on one hand and data poverty on the other. And this one I find tricky when you talk to health IT vendors and so forth that may be working on social determinants. Because on one hand, you have the missing people in the data, where the data set itself can be biased around inclusivity and representation of certain groups within the data set. That's the data poverty side of the equation. But then, once you implement and you're rolling this out into a community, you have the critique that, for example, Ruha Benjamin has, where you can implement this science and intervention on a community, and the way you go about doing that can actually stigmatize the community. Let's say you have a behavioral health or mental health intervention that's driven by some sort of data science or AI intervention, and you're rolling that out into a black or Latino community, or a transgender community, and so forth, that experiences a lot of stigma. That's a tricky tightrope to walk, because you have these folks, from, let's say, the health IT vendor perspective, that really want to make a difference in that community, and they've done their work around creating an appropriate, responsible AI algorithm. Then you implement, and there are things that can go wrong there, too, right?

 

Dr. Tania: [00:34:37] Yes, absolutely.

 

Jody Ranck: [00:34:39] So any thoughts on how to help people think their way through that, and also sort of the post-implementation phase, thinking about the fairness of an algorithm?

 

Dr. Tania: [00:34:52] Right.

 

Jody Ranck: [00:34:53] Like, what do they need to think about to do it right? That's what I'm driving at, I guess.

 

Dr. Tania: [00:34:57] No, I completely understand. And again, excellent point. Since we're talking about stigma, I think mental health is a very good example of a negative stigma that's been around such a long time. I almost feel like that negative stigma is ingrained in our society. Even if your psychiatrist has a psychiatrist, even that's frowned upon. It's very ridiculous. But it's an interesting topic to discuss when we're talking about how to avoid that. I loved that at HIMSS, Michael Phelps gave the last presentation, and he spoke about mental health. He brought awareness. And I see that more often: these high-profile individuals bringing mental health to light, how it's impacted them, how they live with it day to day. And I would like to say it's reducing the stigma. I've seen more positive responses than negative, though I will have to say, maybe that's my own bias: I'm looking for positive responses; the negative just seems to pop up anyway. But I have seen more intentional effort to bring awareness, even at an organizational level. Take a mental health day, no explanation needed. Take time for yourself. Take a wellness day. Whatever you want to call it.

 

Dr. Tania: [00:36:22] I've seen organizations start to roll out these programs, policies, and benefits relating to mental health, acknowledging that this is something that is healthy to pay attention to. It's not a negative; it's not something to be ridiculed. And we can apply that sensitivity, and I think it does start with leadership. If your leadership is comprised of people that have a bullying tendency (and I think organizations need to be very deliberate in looking at themselves in the mirror), then how can you possibly have creative data science teams and creative programming to reach out to populations in the community with an effort to help them? If your organization itself has kind of a toxic culture, I don't know how that works.

 

Dr. Tania: [00:37:10] How do you get that done? So I think it starts with people in positions of power maybe just being made aware: we're going to initiate this program, and we want to be sensitive. And people coming up with ideas that way, starting to include that awareness early on in a project or early on in discussions, whether it's policy change, organizational change, or let's create a new algorithm, right? Start bringing these people into conversations very early, and get that stakeholder buy-in from a very positive, inclusive, upbeat place. Those things matter. And people are keen; people can smell B.S., most of us. So it needs to be authentic as well. And again, I'm saying that after seeing this very positive change, whether you're looking at LinkedIn or Twitter, or you're coming to an organizational event like HIMSS, where that focus on mental health is starting to open up and be more acceptable to talk about. And not only to talk about, but to treat; to say, yes,

 

Dr. Tania: [00:38:14] I also have experienced this. How can I help you? What should I be doing in my organization? Where those conversations are taking place at the highest levels, it's my belief that that is one of the best ways to go about creating a new program, project, mathematical model, you name it: be inclusive right out of the gate. Remove stigma as much as you can. There's always going to be somebody negative; those people exist. And those people should not prevent you from doing the right thing just because, you know, there's going to be some negative kickback at some point. That's fine. Let it come. Don't let that stop you from trying to initiate the change, the policy, the algorithm, the project, whatever it might be, in a way that is most beneficial to the population you're targeting. And that goes back to that pre-mortem I discussed. But removing that stigma, removing perceived stigma, and again pointing back to having a diverse data team: I can't tell you what the perceived stigma might be of being transgender, because I'm not transgender. So don't ask me; why don't we find someone who identifies with that population, without making them a token? You know, let's be genuine and authentic in saying we really want to hear from you. The people that have the energy, emotionally and otherwise, to be a part of it will step up. Most people want to help, but they need to know that their help is truly wanted and that they're coming to a safe space.

 

Jody Ranck: [00:39:44] So before we go, I thought one final question could be around this: you provided just a sampling of some of the existing biases. If we think about this as an industry: if you were Joe Biden's health equity czar and you were tasked with allocating funding around AI and bias, for example, what do you think you would do to try to amp up the efforts to rid these models and algorithms of bias, cause a lot less harm, create more trust in the health system, hopefully, and make it deliver on the promises?

 

Dr. Tania: [00:40:27] One of the first things I would do, after I bought an outfit appropriate for that title, is have a call to action. I would want people in positions of power from wildly diverse backgrounds to join me, because that is not a problem I could tackle on my own, nor should I, right? If I want to tackle the problem in the right way, I need to have people from diverse backgrounds, diverse experiences, and diverse perspectives (again, like I said, all aspects of diversity, from age on, you name it) join me in that effort. And I would give it a significant timeline. And I would say: let's start with policy. I feel like a lot of things have not been done because health care policy is lacking. The lack of continuity between states; even the lack of continuity in the way data, health care data specifically, is collected between health care and hospital systems in the same region. It's ridiculous. So I would want to have a very strong team from diverse backgrounds and different parts of the country to tackle the problem. I would not do it alone.

 

Jody Ranck: [00:41:35] Great. Well, I hope someday we get to see you take on something like that, because I've really enjoyed our conversations, and I think there's a lot here for our listeners to learn from. I want to thank you for your time, and I want our listeners to know that anything we raised in this episode of our podcast (any articles, thinkers, and so forth) I'll put in the blog post, along with some contact info if you want to reach out directly to Dr. Tania as well. And I just wanted to thank you so much for taking some time today to talk to us.

 

Dr. Tania: [00:42:15] This has been my pleasure, and I love these conversations. Thank you for having me.

 

Chapter Markers

Introduction
Implicit Bias
Biased Algorithms Currently in Use
Social Construction of Race
What Can Be Done?
AI Governance and Responsibility
"Ethics-Washing"
Preventing Stigmatizing Marginalized Communities
If You Were The Health Equity Czar...
