ChilCast: Healthcare Tech Talks
May 2023 Update: A New Report, Generative AI, Care Inequities; and Farewell, PHE
On this episode of ChilCast: Healthcare Tech Talks, the Chilmark team explores the headlines from May 2023 in the world of healthcare IT.
We've also released our latest report on Hospital at Home technologies, featuring a new format!
What impact will generative AI and its woeful lack of oversight have on healthcare? How can health systems respond to the glaring gaps in care for underserved populations? And was it too soon to end the Public Health Emergency for COVID-19?
We dive into these topics and more on this month's recap. Links referenced in the discussion are included below, along with a full transcript.
John Moore III: [00:00:13] Welcome back to the Chilmark podcast, ChilCast: Healthcare Tech Talks. This week we are recording our May recap, looking back at the month past, and each of us will be taking on one of the biggest news stories that we saw from the last month that we thought would be of interest to discuss internally and share with the rest of our community. So to start today, I will be talking about the official end of the public health emergency, which happened mid-month. And then I'll also be talking about some new findings and new studies around the impact of inequities in care and the actual economic toll that is taking on our country today.
Elena Iakovleva: [00:00:49] The most important topic I'll be discussing is price transparency: how hospitals are adapting to the price transparency rule and how they're dealing with it. And my Hospital at Home report was just released, so I'll definitely be saying a few words about that as well.
Fatma Niang: [00:01:04] So today I'll be coming back to you guys with some TEFCA updates, specifically about Epic and the 27 health systems that are pledging to participate under their cohort.
Jody Ranck: [00:01:15] This is Jody, I'll be talking about, of course, generative AI and regulation, given the hearings on Capitol Hill and sort of some of the backlash against some of the issues that came out there and just the broader trust issue in AI as it continues to evolve.
John Moore III: [00:01:31] The Covid-19 public health emergency officially, quote unquote, ended as of May 11th, 2023. This means that all of the federal regulations and measures that had been taken to ensure access to care, expand telehealth services, and address some other health care accessibility matters that the government had put in place during the public health emergency are now coming to a close. And this is in the context of there still being just under 10,000 hospital admissions each week due to Covid. So we still have pretty high hospitalization rates, but it's nowhere near the numbers that we were seeing before. So in some ways it does make sense to no longer be calling this a public health emergency. However, we are ending it a bit rapidly, and as Paul Keckley wrote in a pretty dramatically titled article recently, the end of the pandemic health emergency is ill-timed and shortsighted, and the impact will further destabilize the health industry. And while I don't know if there was ever necessarily going to be a good time to end the public health emergency and some of the parameters and programs that it put in place to ensure access, I do think that it needed to end eventually regardless. And so while the timing may not be perfect, this needed to come to an end so that we could come to grips with the fact that there are still fundamental issues at hand with the way that the American health care system works, that it is leaving people behind, and that the public health emergency was just a stopgap measure to address some of those matters.
John Moore III: [00:03:02] And what we're seeing is that we obviously have not actually done anything to shore up what was being protected by the public health emergency guidelines. So one example is that the government had to very quickly ensure that the telehealth access that people had become very accustomed to during the lockdowns and over the last couple of years was still going to be covered by CMS. This includes prescribing for controlled substances and making sure that patients wouldn't necessarily have to go in every month to see a doctor to actually get their prescriptions refilled. The DEA, very quickly after the PHE ended, came out with a document saying that they were going to extend access to these services and allow prescribing to continue for controlled substances, despite making some earlier overtures that they were going to crack down on that. So there's a six-month extension on a lot of teleprescribing and some other telehealth services that had originally been expected to be cut off with the ending of the public health emergency. And so getting back to Keckley's article, he leads off with something that I'm going to quote here that I think is relevant. And while it might be a little alarmist, it does point to something that a lot of people are glossing over right now as we think more about the debt ceiling and all the other bigger events happening that can affect health care right now.
John Moore III: [00:04:29] So here's the quote: "As the ranks of the uninsured and underinsured swell, and as unaffordability looms as a primary concern among voters and employers, provider unpaid medical bills and bad debt increases are likely to follow. Hostility over declining reimbursement between health insurers and local hospitals and medical groups will intensify, while the biggest drug manufacturers, hospital systems and health insurers launch fresh social media campaigns and advocacy efforts to advance their interests and demonize their foes. Loss of confidence in the system and a desire for something better may be sparked by the official end of the PHE, and it's certain to widen antipathy between insurers and hospitals." And so the biggest issue that we're really seeing is this rolling back of the Medicaid expansion. The Kaiser Family Foundation has an enrollment tracker for Medicaid, and it shows that Medicaid and CHIP enrollment grew by 30.7%, or almost 22 million new enrollees, over that roughly three-year window. And it's estimated that 17 million of those are at risk of losing their benefits. The Urban Institute estimates that about nine and a half, maybe 10 million of those will transition to marketplace plans, but nobody's entirely sure how quickly that will happen. A lot of people aren't even aware that they're about to be disenrolled. One of the biggest problems that states are having as they go out and try to educate their citizens about this is that people just aren't answering calls, and they're getting disenrolled automatically because they aren't taking steps to either reapply for their Medicaid or find another plan, because they just didn't realize that this was going to be happening.
John Moore III: [00:05:55] So they were able to just keep going with the current plan that they had. And so now what we're going to see is uninsured people going in for emergency room care, and it's going to completely blow up the financial status of these health care delivery organizations that are already struggling, now having to deal with patients that are no longer insured through Medicaid. So the biggest issue is definitely still that Medicaid redetermination piece. And, you know, we're seeing a bunch of articles about people just being automatically disenrolled through AI that are a little bit problematic, and it's exposing a lot of just general sloppiness at the state level around how patient and member health benefits are being managed by these different federal entities and state-level municipal entities. So we will see where this goes. We still have, you know, the better part of the rest of this year to actually see what the impact of this will be. But struggling rural hospitals and, you know, federally qualified health centers and other types of CMS-dependent organizations are probably all going to be hit pretty hard by these Medicaid redeterminations that we're about to see.
Jody Ranck: [00:07:09] I mean, actually, the main thing to me, if you want a public health perspective, would be that, you know, I believe last time I looked, you're still getting 100 to 200 people dying a day from Covid. And I think in the last week I saw there was an uptick of around 7 or 8% in Covid cases, and I've been just anecdotally coming across people traveling and getting Covid. So I think one of the challenges is the discourse of ending the crisis, so to speak, when it's really not over, and then the impact of multiple infections as we sort of let our guards down, or most people let their guards down, and get infected again. Our awareness of the impact of long Covid, and the complexity of long Covid, is growing. And then there was just recent news that, you know, a big chunk of the roughly $1 billion in funding that went to NIH to address long Covid was wasted. So, you know, it ain't over till the fat lady sings. It may not be as bad, but I think it adds this perception that, okay, we don't have to take any protection against this. And we know that the more you get infected, the greater the risk that some kind of long Covid-like sequelae begin to appear. So I think, you know, it's going to be interesting to see what happens with this, because everyone's tired of it, but it does raise the risk of some bad things happening.
John Moore III: [00:08:49] Exactly. I think everybody's tired of thinking about the pandemic. But as far as the policy implications of officially ending the PHE, there are definitely some things that need to be discussed and thought about as a society. You know, they knew it was coming, they knew this was coming down the road, but people just didn't really prepare for it. It was a goalpost or a mile marker, and now we've passed it and things are changing. But there weren't really any guardrails in place to ensure that there was a safety net for the folks that were going to be disenrolled.
Jody Ranck: [00:09:18] And it doesn't appear that we're doing, in my opinion, enough to prepare for the next pandemic. As we went into this, there should have been more strategic thinking about the overlap between what we did with Covid and putting into place the infrastructure to do better next time, rather than just not screwing it up as badly next time. You know, from a public health perspective, when you put on your public health hat, it's not looking great, especially when you see, I mean, avian flu. And this morning I heard the CDC warned that there could be an uptick in mpox this summer, which, you know, I think we did a reasonably decent job of eventually tamping down on, but now they're warning that it could come back up. So, you know, these things are all interrelated in one way or another, and we need to keep the defenses up so that the next time, you know, we have a zoonotic outbreak or something, we can contain it as much as we can before we have what we had with Covid. Ideally. But I don't think we'll be anywhere near good enough in our surveillance and preparedness efforts, from what I've seen, to do that.
John Moore III: [00:10:36] Yeah, and that was one of the things that Keckley called out in his article. Paul mentioned that, you know, a lot of epidemiologists are predicting that the next pandemic will hit the globe in the next 2 to 5 years, and we have literally nothing in place to actually prepare for it this time. You know, it's a kind of wait-and-see approach, just like we took with this last pandemic.
Elena Iakovleva: [00:10:58] I was just going to add that, with the scrutiny the CDC has been under regarding how they handled the pandemic in general, I was just very surprised to see the turnaround time for them to announce that the pandemic was, quote unquote, over.
Jody Ranck: [00:11:10] Yeah, the beleaguered CDC, that's an additional issue. And then what I'll be getting to later is, you know, we're weaponizing AI's ability, or people's ability to use AI, and generative AI in particular, to sort of contaminate the infosphere with malinformation and misinformation. So not a good look at the moment.
John Moore III: [00:11:39] It most certainly is not. Okay. So I guess wrapping that up, I will move on to my next topic, which is some recent research about inequities of care and the impact that has on the economy writ large and on society. Obviously, equity in health care has been a hot topic for years now. I think everybody can point to the beginning of the pandemic and all of the epidemiological and public health data that was coming out that just really showed how bad inequities of care are. But I think this is something that's been simmering for a really long time. It just took the pandemic, and having something that all of us were focused on as a society, to really drill down and have all of the data scientists and all those digital tools at hand focused on one specific problem. That really highlighted exactly how bad this equity issue is, in a much broader perspective and a much more tangible way than what has historically been seen on the public health side. Because public health experts like you, Jody, have been talking about the issues around health equity for decades; this is nothing new if you are a health policy person. However, if you are part of the mainstream, or part of just health care in general, it may not be top of mind. But I think the confluence of digitizing records and actually having all of these tools to aggregate data across the full country, and to see and compare what the data was showing from one region to the next, from one hospital to the next, really was able to highlight exactly how bad these equity issues are.
John Moore III: [00:13:10] And, you know, whether they got worsened by the pandemic or were always this bad, I won't bother to discuss that today. But what it has resulted in is a lot of really interesting research into really pinpointing what some of the impacts of inequities are, how we address them, how we define what equity in care is, et cetera. And just a quick call-out: this is being recorded at the beginning of June, and it is now Pride Month. So when we think about equity, I think it's really important to not just think about sociodemographic and racial equity, but also cultural and gender identity equity. I was listening to a podcast the other day that was specifically talking about the impact of having trans-specific care providers, people that have actually been through that experience, and how much that means for affirming care, and how important having that connection to somebody that's, you know, lived your experience can be to people that are feeling out of touch or disconnected from other aspects of the care system. And I think that this is something that can be felt across all culturally sensitive aspects of health care. I mean, one of the big things that's often quoted is the Tuskegee experiments or the Henrietta Lacks story, as far as why maybe Black people aren't as willing to trust the health care infrastructure and the health care system that we have in America.
John Moore III: [00:14:24] So having those cultural awareness elements and making sure that representation is actually there, you know, in the boardroom, in the office; that patients are seeing themselves represented among the people actually delivering care services. Personally, I think that is a first step to addressing equity and making sure that, you know, the right people are there to shape what equity means as we as a society continue to refine what that is. Because if you look at a lot of the memes, there's that meme, right, about equity versus equality, where it shows people standing on boxes of different heights so they can actually peek over a wall to see a baseball game or something like that. And yeah, sure, that's one definition of equity, that everybody is being lifted to the same level. But if you look at some of the changes to the health care industry over the last few years, it's almost like trying to achieve equity is having the reverse outcome, which is that it's just making the health care experience for everyone worse. I was listening to the Kaiser Health News 'What the Health?' podcast the other day; they did their 300th episode and had a couple of different pundits on talking about different aspects of health care and what's changed since they began doing the show. And one of the people commented that overall what we have seen is people getting more and more aligned in not enjoying the experience of health care.
John Moore III: [00:15:44] And part of Chilmark Research's founding mission was to focus on the technologies and the new models of care that are actually going to change that experience of care. And so as we look at the last number of years and we see that people writ large are saying that their experience of care is worse, I mean, is that just because that is what equity is now, that we're all starting to have the same bad experience? Or is there a better way to go about equity that's actually going to lift everyone up? And we're just kind of seeing the sorting right now, where some people are getting worse care, but everybody in general is being more vocal about the bad experience and the lack of provider attention, et cetera, that we've been talking about. Okay. So getting back to what prompted me to want to discuss equity specifically this month: in May we saw two different research articles published in JAMA that both addressed the toll that health inequity is taking on the economy and on society. The first article came out of the National Institutes of Health, and it was looking at a bunch of federal data around earnings and productivity as well as health care expenses. The article title is 'The Economic Burden of Racial, Ethnic, and Educational Health Inequities in the US.'
John Moore III: [00:17:03] And by looking at the data sets that they had, they showed that racial inequities result in $421 to $450 billion of additional economic burden for adults, and that's just based on race. If you look at educational inequity, that's an even bigger issue: if you compare the economic burden of health inequities for adults without a four-year degree to adults with a four-year degree, we're talking almost $1 trillion in additional health burden. So there is a very clear, quantifiable impact of health inequities, related to education as well as race. And $1 trillion is a massive portion of the US economy. So if we can address that and get productivity up and reduce the cost of care, that will have a notable impact on people's quality of life as well as their earnings every year. And then the other article that came out is 'Excess Mortality and Years of Potential Life Lost Among the Black Population in the US, 1999-2020.' This was looking at CDC data around excess deaths and years of potential life lost. The Black population experienced 1.63 million excess deaths over the period from 1999 to 2020. So over a period of about 20 years, it was 1.6 million excess deaths, representing almost 80 million years of potential life lost. So another massive, staggering measure of the true cost of inequity in health care provision in the country that we all reside in.
John Moore III: [00:18:43] And this is just untenable. We can think about the impact of equity, we can think about how we address equity, but nothing is really going to change until we acknowledge that everyone's different and everyone's going to have different needs, and we need to stop trying to fit everybody into a simple box. The more we try to fit people into dedicated little boxes, even though a lot of us are round pegs trying to fit into these square holes, the more that people's individual needs will not be met, and that will exacerbate these equity issues that we're seeing today. So it really comes back to seeing patients as individuals, understanding what their unique needs are, and having the time, the bandwidth, and the tools and resources to make sure that you're actually able to treat people as the humans that they are. And as important as it is to think about this from a high-level health equity, public health perspective, it really comes back to that micro level even more so. You know, this book I'm reading right now is called Compassionomics, and it's a really wonderful examination of how compassion affects health outcomes. We really just need to get back to the root of what health care is, and that's helping individuals live better, healthier lives. And until we can do that, as a system that is very focused on big numbers and big data, until we can really incorporate the individual, we're not going to have a solution for these health equity issues that we continue to see today.
John Moore III: [00:20:05] We can definitely come up with plans and strategies at a national level to monitor this, but it really comes back to education and medical training more than anything. That is one of the most common themes that I've heard across all the different podcasts, all the different experts I've been listening to, talking about how we fix this: it all comes back to the education system and making sure that doctors are trained to see individuals as individuals and not diseases. So as I think about how we address health equity as a society, I do think that technology plays a very important role, because health care technologies are enabling this from two perspectives. One, they can be connectors; they can, you know, drive people to services that they need to address their health. But they also enable new models of care that are giving people greater access and meeting people where they are. You know, Elena just wrote this report about hospital at home, and there are a lot of people that would much rather get treated at home, so that they can continue about their days as much as normal, than be stuck in a hospital. So a lot of new technologies and new models of care are equity enabling, so long as we make sure that they're accessible to everybody that can benefit from them.
Elena Iakovleva: [00:21:18] John, since you mentioned my name earlier, I just wanted to put in my two cents. You know, I might be the Grinch who stole Christmas, and I'm quite accustomed to this role, but I'm still wondering how to actually implement health equity in full in the United States. I think that the whole system would need to change; I just do not see any real hope. Like, we can go from 5% to six and a half, or from 5 to 14, and definitely we should continue all the efforts addressing health inequity. But at the same time, we have protocols for almost everything, and that includes treatment protocols, behavioral protocols, and so on. I think that the change needs to be a fundamental one, and with all those, you know, tiny tweaks around this subject, I just feel like we're treating cancer with calendula ointment. That's how I see the situation right now.
Jody Ranck: [00:22:26] Yeah, I'll chime in, related to that but with a slightly different take. You know, to really address health equity, we're going to have to be intentional about how we deploy the technology and also how the technology is designed, and which technologies get designed; those are political questions as well. And going back to the broader discussion of public health, a great deal of what drives health equity is going to be policies, unless a technology works on many fronts. I mean, funding public health, and funding those sort of tech-adjacent areas to health care, can have a huge impact on health equity, and so can education. You know, if you look at maternal mortality rates and infant mortality rates and the disparities across races in the US, a lot of those outcomes are sort of the downstream effects of political choices we make as a society. So I think there are definitely things we can do in the health IT world by being more intentional and directly focused on health equity as an outcome we want, in everything from how we deploy AI to whatever technology you name, and in how we choose which technologies we're going to develop and how we go about that, engaging communities and so forth; that's very important. But I think, you know, there are going to be limits to how much technology can fix health equity if we don't invest in things like public health.
Elena Iakovleva: [00:24:08] Jody, I could not agree more. I do think that technology is going to have a huge impact, and it actually already is. And yeah, to some degree, I am a great believer in IT in the health care space and in how we can actually make it better for all those cultural layers and the specifically tailored cultural needs of patients.
Fatma Niang: [00:24:32] Yeah, but to Jody's point, you know.
John Moore III: [00:24:34] It's not just going to be technology on its own. It has to be well designed, and there have to be the right regulatory mechanisms from the powers that be to incentivize good behaviors and technology doing the right thing. The technology on its own will not make this happen, but it can simplify things and provide new tools for driving us to where we want to be.
Jody Ranck: [00:24:55] I mean, there's actually an argument out there, and to me this is more of a cautionary tale, that if we just frame health equity in terms of social determinants and deploy technology, we'll never get there. You know, there's this medicalization-of-health-equity critique, that we're just pushing a broader poverty question into a techno-medical problem, and it's much broader than that. I think you can take that critique too far, but we should pay attention to it: we can build all the technology in the world to focus on health equity, but if policymakers do things that exacerbate inequalities and undermine the poor, you can cancel out a great deal of what we do in health IT. So I think at some point the health IT space itself needs to advocate for the policies that can help all of this work together.
John Moore III: [00:26:03] 100% agree. And it's one of the areas that I want to get Chilmark more into, because I think that technology has a very, very powerful enablement role to play in all of this. But it needs to be done in a way that's well guided and overseen effectively to your point. So it's definitely something that I see Chilmark having a good role in going forward and being able to help influence and guide the conversations in the right way.
Jody Ranck: [00:26:27] I mean, just think about this recent move to look at the commercial determinants of health, and how industry itself, whether companies produce unhealthy food or healthy food, is a driver of health outcomes. So I think things like that, and what I've been kind of advocating for is that we think about the algorithmic determinants of health, because we know people can get denied financing or education, or discriminated against in social welfare policies. If you look at the recent Dutch case of an algorithm that kicked people out of a social welfare program, a child welfare program, it had very detrimental consequences for thousands and thousands of people in the Netherlands. That's where an algorithm can deny people, you know, a chance of having equity, or it can produce harm. And so I think we just have to look at a wider range of factors, and there's only so much we can do. We need to deploy our analytics and everything towards health equity, but we need that other layer on there as well.
John Moore III: [00:27:39] Absolutely. All right. Moving on. So I'm going to pass the baton to Elena.
Elena Iakovleva: [00:27:45] Thank you, John. And, you know, my topic today probably links somewhat to the stuff that you were discussing before. Today I would like to talk about price transparency in hospitals. It's something that I was really looking into a year ago, when the entire health care industry was shaking with all those mandatory regulations around price transparency and people were trying to do some kind of magic around their chargemasters to make sure that they were compliant. I was looking at all that with a grain of skepticism, to say the least, just because I worked with chargemasters for a long time and I just know how it looks and how it works. And I was just thinking, would those guys actually come to a solution for how to make all the prices transparent? Back then I could not think of a good way of doing it. And in May there was an update on what's going on in terms of compliance, and it seems like less than 25% of all the hospitals in the United States are fully compliant with the Trump administration's price transparency rules. Obviously, it's a really difficult area to tackle. And only four hospitals were fined. Only four. And I really think that it is much more complicated than it seems in terms of health care accessibility.
Elena Iakovleva: [00:29:24] We do need to know the prices. As a consumer of health care, I really want to go and see up front what the care costs will be for my particular medical case and what kind of treatments are going to be on the bill later on. But from the hospital perspective, I do understand that it's almost impossible to accomplish as of now, just because of the different procedures, treatments, drugs, tons of different charges. If we're talking about the physician fee, right, it's possible, and I can definitely have a quite shoppable experience, almost like we do in retail. But when it comes to health care in terms of hospitalizations, I don't think it's achievable, even, you know, with a certain degree of accuracy. I just don't think it's an achievable goal as of now for health care in the United States. And I really like the quote by Jason Smith on May 17th. He's saying: Do we really think that nearly every American hospital is in compliance? We don't know, because CMS doesn't make compliance reviews and enforcement actions public. We get more information about a local restaurant from Yelp than you can get about your local hospital from CMS. And that is very true.
Elena Iakovleva: [00:30:50] That is very true. So I feel like we're playing hide and seek with CMS; unless we see all this information made public, I don't think that much is going to change. I know a bunch of amazing folks, like Kyra's, getting more and more into this space, and they're trying to do everything they can to make sure that their clients are compliant. But I think there's another very thick layer of regulatory departments and what CMS actually does with that. I am intrigued to see how it's all going to look moving forward, and whether the era of chargemasters will finally come to an end; I would personally very much like that, along with this overall power to make prices different from one hospital to another. We also have to keep in mind that all plans are different, all contracts are different; it's not like one contract from one payer is going to be the same everywhere. Every single payer has different contracts with every single entity, and I think it's just a very messy and very complicated issue that hopefully is going to be resolved with some changes in chargemasters and the regulatory aspects of it. Do you have anything you would like to share on price transparency?
John Moore III: [00:32:21] Yeah. So, Elena, do you think that this is something that is fundamentally unfixable with the for-profit mechanisms of how American health care works? And if we're going to have price transparency, do we need to just have certain procedures that it applies to, rather than applying it writ large across everything? I ask because Walmart is opening up their retail health clinics now, and one of the things that they offer is very clear price transparency. You have all the different services that you can have done while you're at Walmart right there on a sheet of paper. And, you know, it's basic things, nothing too complicated, so it's easier for them to set those standard rates. So do you think that part of the issue is that there's just not enough standardization of procedures for us to have true transparency, or do you think there might be ways to go about that, and it's just going to take a lot longer for us to actually get there?
Elena Iakovleva: [00:33:13] I think that if Walmart were to go for surgeries, they would have all the same issues. So I'm not talking about procedure standardization. I'm talking about health care in the United States being some kind of tapestry of different threads and colors, shapes and textures, and mainly due to that, we just cannot come up with the same pricing mechanism. In my world, we can totally do an episode-of-care price. But again, it really depends, right? It depends on contracts, on whether the payer is going to pay per episode of care; it really depends on capitated payments. So I just think that it's a little bit too early for price transparency among hospitals.
John Moore III: [00:34:10] Do you think a shift to value based care will make that easier than the fee for service models, or do you think that it will still be an issue because there will still be fee for service components to value based contracts for unexpected care and other procedures?
Elena Iakovleva: [00:34:23] I expect fee for service to still be everywhere; it's just the percentage that's going to change. You know, I hear a lot of talk about episode-of-care ultimate cost, and I think it's going to be really good for on-the-floor treatments. So if we're talking about on the floor, in the hospital; for hospital at home it's not really applicable yet, but it potentially might be in the future. I do think that one price per episode of care would be a substantial move toward the transparency rule.
John Moore III: [00:35:01] Yeah, that makes sense.
Elena Iakovleva: [00:35:03] Yeah. Apart from all these upfront costs for, like, a physician visit in the ambulatory space, for specific treatments, that we can easily pull out right now.
John Moore III: [00:35:14] So, assuming that we had the technology to make price transparency happen, do you think that the social mechanisms and the economic dynamics are just a complete blocker to ever making this a reality? Or do you think that there is a way for us to get there, and it's just going to take a long time?
Elena Iakovleva: [00:35:30] You know, I think we do not have very strong economic blockers there. But I do think that, again, due to this huge unevenness and diversity in health care, it's just almost impossible right now to make a dress that's going to fit everyone; it's not one size fits all. So a lot of organizations just have to, you know, go through this guessing and average pricing and some other mechanisms, just to pretend that they have clear transparency in their pricing, which is not what we want. We just want $23 to be $23. We don't want, you know, $55 to become $125 in the end when I'm checking out.
John Moore III: [00:36:23] Yeah, exactly. It's more about having an idea of what to expect, and even the shopping piece, because I think that's another misguided notion here, that people are going to actively be using this information to go shopping like they would with other retail things. Maybe to some extent, but the goal should really be about setting expectations and making it clear what people are going to be billed after the fact. And I think all these other aspirational goals of implementing this price transparency legislation are a little bit naive and a bit head-in-the-clouds, honestly, given the practical implications of how this actually plays out in the real world. I think that might also be why we're seeing so few enforcement actions: the people that would be enforcing it know just how hard this is to actually make happen. And writ large, everybody's failing at it right now, so it's hard to choose who to enforce it against.
Elena Iakovleva: [00:37:17] True. Yeah, it's definitely too early. And I understand the goodwill behind this whole policy, and I can understand what policymakers were thinking before actually enforcing it on hospitals, but I truly think that we are not fighting the right war here. So, big news, big news last week: we finally released my market trends report on hospital at home, and I'm super excited about this. I was mentioned in Modern Healthcare last week, and I'm also very pleased to see a lot of interest in the hospital at home space. So I just wanted to say a few things about my main findings. First of all, I wanted to thank all the contributors; without your input, I don't think it would have come out as good as it did. The main finding is that my estimate for the overall hospital at home market in the United States by 2028 is $300 billion, and I think that's a pretty modest estimate. The more CMS collects the results of all those pilot hospital at home waivers, the more we see the true impact of this model of care moving forward. Not only does it give hospitals more free beds for sicker and more acute patients, but it also contributes to patients' comfort and to their overall care. I am a huge believer in hospital at home. I saw how it worked in Russia at the end of the 20th century.
Elena Iakovleva: [00:39:04] It was 1990 to 1996, and it was a truly magical thing back then, but the technology was not there. Right now we have everything. We have EKGs that are almost wearables and that are FDA cleared. We have all those tools; we have pretty much everything to make it happen. What we really need to do is educate the market. We need to work with hospitals to create an extremely friendly environment and supportive groups of users that can carry this initiative further, to make sure that any of us can actually stay home while we're getting treatment. That's the ultimate goal that I really want to see in my lifetime in the United States. I just don't want people to suffer in hospitals, because most of the time people don't want to be in the hospital, and this is going to change the whole attitude towards health care. In a lot of cases it's something scary: you're getting pushed out of your comfort zone, you have a lot of things to adjust to. And just staying home, you know, we have a saying that when you're staying home, even the walls make you feel better. I think that's very true when it comes to hospital at home. And with the technology being developed so quickly, every year we see something new, new approaches, new patterns, specifically looking into cardiac programs within hospital at home, cardiology programs, and how we can now actually predict when a person is going to end up in the emergency room.
Elena Iakovleva: [00:40:52] So we have two to three days' warning to actually say that this particular patient will need an emergency room visit within three days. I think that's huge progress; it's a breakthrough, in my understanding. And now we can make it all happen, so we can actually feel much more secure and calm in terms of keeping those patients safe at home. We have emergency buttons. We have dedicated clinician teams who are going to do a rapid response. I think it's just a matter of clinicians getting used to the whole idea of having patients in their homes. We will still need to adjust a lot in the space of reimbursement for those services, just because it's still kind of shaky. When I go through the entire reimbursement process and the specific codes, it does need some work, just to give physicians more stability and the opportunity to plan and to invest more in technology when they start their hospital at home programs. But overall, I'm extremely optimistic. I really love that report so much, just because it resonates with my ultimate goals in life, what I really want to see and what I feel most positive about in health care overall. Do you have anything, any additions, maybe questions?
John Moore III: [00:42:25] I think that was great for now. That's a great overview of what you researched and some of the key findings. I'm looking forward to seeing what else you learned during the research for the Buyer's Guide that you're working on next.
Elena Iakovleva: [00:42:39] Sounds good. That should be on the cards by our next podcast.
John Moore III: [00:42:44] Fantastic. Anything else from you, Elena? Or do you want to pass it to Fatma?
Elena Iakovleva: [00:42:48] I will gladly pass it to Fatma.
Fatma Niang: [00:42:50] Thanks, Elena. So today I'm giving you guys some more TEFCA updates, specifically pertaining to Epic. A couple of weeks ago, Epic released a list of about 27 health systems that have pledged to participate in TEFCA with Epic as their QHIN, so essentially these entities will be under Epic, and this is completely voluntary. And I have a statement here from Matt Doyle, who is the interoperability software development lead at Epic. He says that, for these entities, by joining TEFCA, these health systems reaffirm their ongoing commitment to improving patient care by advancing health information exchange. "Our plan is to deliver software this year that will help our customers to be among the initial participants in TEFCA, and we're optimistic that nearly all of the 2,000 hospitals and 600,000 clinicians that use Epic across the US will participate." So this is major, because as I mentioned in the previous podcast, for the six QHINs that have been selected, this is more so like getting your white coat at the ceremony in med school. We don't really have a clear indication of how this is going to play out or what the process looks like, but Epic is definitely hitting the ground running. Some of the notable players within this list of 27 health systems are Johns Hopkins, Kaiser Permanente, Mayo Clinic, Rush University Medical Center, Stanford Health Care, UC Davis Health, University of Miami Health System and Yale New Haven Health. So it's going to be interesting to see how this plays out. As of right now, Epic is the only one of the six QHINs that has released a list of health systems that will be participating underneath them as their designated QHIN. So I'm really excited to see how this plays out, and hopefully by the next podcast we'll have some more updates on the other QHINs.
John Moore III: [00:44:38] Have there been any other big kind of consortium announcements like this since the first announcement? I don't think that I've seen anybody else come out and announce that they've got this collaborative going, like what Epic just announced. Have you seen anything from anyone?
Fatma Niang: [00:44:53] I haven't, actually. Epic is the first one that I've seen release a list like this, and it's surprising. But at the same time, this is really good for them, because like I said, there's just no clear indication of what the timeline is; it's just that everybody is working behind the scenes to get their lists together. So I'm really surprised that they released it this early, actually.
John Moore III: [00:45:15] Yeah, definitely. They're definitely getting a jump on things, it seems like.
Fatma Niang: [00:45:22] For sure in true Epic fashion.
John Moore III: [00:45:24] Sort of. I mean, Epic was notorious for being the last one to really push interoperability outside of their system. So the fact that they're trying to get a jump on this, you know, it might be them continuing to try to maintain that hold on the data in their system and get more people to buy Epic. But it is also a good overall sign that they are trying to, you know, maybe be a little bit more data-liquidity friendly with this new TEFCA framework.
Fatma Niang: [00:45:50] Yeah, 100% agree. And initially, when TEFCA was a concept, I recall Epic being one of the first to jump on board and say that they had full intentions of becoming a QHIN.
Jody Ranck: [00:46:01] So I've been following the congressional testimony from several weeks ago, where Sam Altman and a whole slew of folks interested in or working on AI, and generative AI in particular, testified before Congress about the risks and regulation. Notably, Sam Altman has been making the rounds around Europe and the US calling for regulation, and there's been quite a bit of blowback about what that actually means, because it begs the question: why are the developers of these large language models and generative AI, about which there's all this discourse about how they're going to fundamentally change the economy, yadda yadda, suddenly saying, yes, come regulate us? And so the blowback has numerous aspects to it. Some of the initial critics were talking about how, essentially, if you listen very closely, what they're doing is kicking the ball down the court. They're saying, we have these incredibly powerful tools, come regulate us; but if you listen closely, they're saying regulate us once we get to AGI, artificial general intelligence, which, you know, many, many folks in the machine learning and AI world think is theoretically impossible. And even those who believe it's possible usually use numbers anywhere from 20 to 35 years down the road. The question of whether it's impossible has a lot to do with how one frames intelligence and so forth.
Jody Ranck: [00:47:48] And there's a great deal of anthropomorphism involved in how they're talking about artificial intelligence. In fact, many of the critics are saying we shouldn't even be using the word intelligence, because it's something else, or it's some other form of intelligence; it doesn't learn the way humans do and so forth. So the critics are largely saying there's an ethical issue in how we talk about AI, that we need to be precise in our language, and that all this calling out of existential risks is basically a power play by the dominant player in AI, trying to continue that dominance by being the ones that call for regulation and kicking it down the road, not regulating what they're releasing into the wild now but deferring that to later on, when essentially they have achieved regulatory capture. So there's a great deal of skepticism and cynicism about, you know, the ethics of Sam Altman and many of the developers of these large language models, such as OpenAI and others. That controversy continues to grow. We talked about this a little bit in our last podcast, when Geoffrey Hinton called out the existential risks. So I think people need to be pretty circumspect and cautious about buying into the existential risk talk and allowing it to occlude the issues around bias and misinformation and so forth. It's a problem right now; you know, from the early days of Facebook, or midway through Facebook's career, we've had this in health care.
Jody Ranck: [00:49:31] We've had to deal with anti-vax disinformation and so forth, and what large language models are potentially going to do is really give the people who want to scale up that disinformation the tools to do it. So we need to act now on the realities, the risks that are present now. And on that front, many of you may have seen one of the first books to come out on this, The AI Revolution in Medicine, by Peter Lee, Carey Goldberg and Isaac Kohane. They got access to GPT-3 and, I think, GPT-4 last year, right before the release of 4, and were sort of trying it out on how it responds to a lot of health care and medical issues. In a nutshell, they found the ability to create summaries of medical encounters, and to actually present a case to ChatGPT and get, you know, not necessarily a diagnosis, but sort of get led in a possible direction that could be difficult to find otherwise. It was actually surprising how well it worked in many cases, but they definitely found lots of examples of misinformation or mal-information popping up; I'm not going to call it hallucination, for a number of reasons. And then often when they tried to do the math, like for titrations or whatever, sort of the clinical follow-up, there were a lot of mathematical errors.
Jody Ranck: [00:51:09] And what you see happening now is people beginning to piece different AI tools together. This isn't in the book, but I heard on one podcast that what people are doing is connecting Wolfram to ChatGPT to basically check the math and so on. So there are ways that people are beginning to build out approaches to some of the weak spots, but clearly we have quite a long way to go. And this brings me to the final aspect I'll discuss, which is this issue of trust. Over the last month or two, I've seen quite a few surveys, whether of the general public or of different clinical specialties, about how much trust there is, usually in AI generally, and sometimes in generative AI plus traditional machine learning; you know, they're not making the distinction between the two, and they are quite different. So I'm looking for some surveys that tease that out a little better. But in general, you're seeing some interesting surveys; I think FiveThirtyEight had one where Republicans trusted AI more than Democrats, but there's still quite a bit of distrust. And then I think the public is also buying in a great deal to the existential risk talk that the AI elites are propagating, which is sort of working to their advantage so far. Hopefully, in my opinion, they don't win the day on that front, because I do think they're setting us up for a great number of failures by rushing products into the market way too soon, before they're ready.
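For readers curious what that kind of tool-chaining looks like, here is a minimal illustrative sketch of the general pattern Jody describes: take the arithmetic claims in a model's draft answer and re-check them with a deterministic math engine. This is not the Wolfram plugin or any vendor's implementation; SymPy stands in as the checker, and the draft_answer text and the regular expression are hypothetical examples.

```python
# Illustrative sketch only: re-verify simple arithmetic claims in an LLM draft
# with a deterministic math engine (SymPy standing in for Wolfram), i.e. the
# "chain a checker onto the model" pattern discussed above.
import re
from sympy import sympify

# Hypothetical model output containing dosing arithmetic to double-check.
draft_answer = (
    "Weight-based dose: 2.5 * 70 = 175 mg per day. "
    "Split into two doses: 175 / 2 = 87.5 mg per dose."
)

# Find "expression = claimed result" pairs (numbers with + - * / only).
claim_pattern = re.compile(r"([\d.]+(?:\s*[*/+-]\s*[\d.]+)+)\s*=\s*([\d.]+)")

def check_math_claims(text: str, tolerance: float = 1e-6):
    """Re-evaluate each arithmetic claim and report whether it holds."""
    results = []
    for expression, claimed in claim_pattern.findall(text):
        computed = float(sympify(expression))  # deterministic evaluation
        ok = abs(computed - float(claimed)) <= tolerance
        results.append((expression.strip(), float(claimed), computed, ok))
    return results

if __name__ == "__main__":
    for expression, claimed, computed, ok in check_math_claims(draft_answer):
        status = "OK" if ok else "MISMATCH"
        print(f"{status}: {expression} -> claimed {claimed}, computed {computed}")
```

In a real pipeline, any mismatch would be fed back to the model or surfaced to the clinician rather than simply printed.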
Jody Ranck: [00:52:53] You know, just this morning I was listening to a podcast where, I think in the release notes for GPT-4, they gave an example from when they were testing it. They had someone, you know, use the model to access some website or something; I forget how this all started, but anyhow, there was a CAPTCHA that it had to solve. So the model agent reached out to one of these Mechanical Turk-type crowdsourcing sites and asked someone to read the CAPTCHA for it, because GPT-4 couldn't do that. And when someone on one of those sites responded and asked, well, isn't that illegal, or something you shouldn't do, it responded, 'I am blind.' So it autonomously lied, and I'm still wrapping my head around that. And so, you know, you just see the great potential for fraudsters and scam artists, because when you release these things into the wild, you obviously have to expect these tools are going to be used by folks that want to hack systems, commit fraud and so forth. And we're going to see a lot of that. And then in the generative AI world, there's also been a lot of discussion in recent weeks about what the right way is to release these models into the wild.
Jody Ranck: [00:54:35] And it's not just a yes-or-no question; there are ways you could do gradual releases once you put different safety guardrails and so forth into place, instead of this OpenAI approach of just dumping it into the wild and saying, oh, come regulate us in 35 years when we have AGI. And so I think that whole community, the AI community, is quite divided. You have those in power who are trying to maintain it and, you know, become the next trillionaires and the next Elon Musks, releasing things and saying we'll clean up the messes later. And then on the other end of the spectrum, you have a lot of open source folks; being open source doesn't necessarily make you ethical, but I think you see more of an ethical approach to releasing things into the wild, and more concern about it. At the same time, there's the EU AI Act, where there are concerns that, as it's currently framed, smaller models and open source models may be adversely impacted by the proposed regulations and the big models will win the day. So I think we're in the early days, and when we think about AI, I think we should insert the word power; it's about power and who's going to control the next generation of these tools. That's still playing out, but it's getting quite interesting.
John Moore III: [00:56:01] And to that final point, Jody, I think that's something that's always an issue in any innovative area of the economy: the loudest voices around lobbying and regulation trying to get in the ears of the people making decisions about how to rein in industries or how to make sure that they develop ethically. You know, those narratives typically end up being voiced by the people with the money. So to that final point, it's definitely something to be very cautious of. And we saw that obviously with HIPAA; it's one of my favorite things to point back to in terms of well-intentioned legislation that ended up backfiring in a lot of ways, because as we all know, HIPAA was utilized by a lot of large organizations to keep their data to themselves and to not actually send it, because of the potential breach issues, despite the fact that it was passed to make it easier to get access to records.
Jody Ranck: [00:56:52] Yeah. And we're going to need to update HIPAA soon.
John Moore III: [00:56:55] Oh, yeah. HIPAA 2.0 definitely needs to happen. I mean.
Jody Ranck: [00:56:58] That's a whole other can of worms.
John Moore III: [00:57:00] Oh, yeah. Especially around all the privacy stuff, and, you know, how these models are being trained on private information, and how you maintain that privacy to make sure that there won't be anything exposed that shouldn't be. I don't know if you saw, but last week Amazon got slapped for two different kinds of privacy violations, kind of borderline espionage issues, with their Alexa holding on to recordings of children for longer than what was legally allowed, despite parents telling them to pull the recordings down. And then they also got hit with a fine over Ring data, you know, video capture of people going in and out of their houses, and things captured on Ring being passed around the organization. So, you know, as that applies to AI: how is all the data that's being collected and stored to train these models either being destroyed after training, or, you know, what does that look like long term, to protect the data sovereignty of individuals?
Jody Ranck: [00:57:55] Yes, big problems ahead around privacy.
John Moore III: [00:57:59] Yeah. And if they just keep pushing it towards AGI being the problem and keep avoiding the privacy issues, you know, we're going to have a huge problem, you know, before long.
Jody Ranck: [00:58:06] Certainly will.
John Moore III: [00:58:07] Well, if there's nothing else to add to this episode today, I just wanted to close by announcing that we will be starting a monthly book club; we will discuss the book that we read the previous month and introduce the new one on each of these podcast episodes. This month's book will be More Than a Glitch by Meredith Broussard, a book that looks at the many different ways that bias and ableism are accidentally introduced into various technology solutions, with a particular leaning towards AI. So we will have updates for you on that next month. And if anybody wants to join in on that discussion and, you know, be part of our book club, we'll be welcoming questions and participation from our community. All right. Until next month, we wish everybody a great June, and we'll talk to you soon.