
Medicare-funded medical residencies falling short of goals: Report

Interview transcript:

Terry Gerton I suspect that most people don’t know that Medicare funds graduate medical education. This is a topic you’ve looked into quite a bit at GAO. Give us an overview of this situation first.

Leslie Gordon Medicare is the biggest funder of graduate medical education. There are a number of federal programs that fund graduate medical education, but Medicare is the primary funder, providing about $22 billion in 2023. We’ve been looking at it over the course of several years, because that’s a big chunk of change, and we want to understand whether it’s being spent effectively. Over the course of our work in 2017 and 2018, we noticed that there’s a misalignment, an unevenness in how medical residency is distributed across the country and where those graduate medical education dollars go.

Terry Gerton How’s it supposed to work?

Leslie Gordon Well, the impetus behind funding for graduate medical education is to ensure we have a well-trained workforce. And indeed, it should be distributed across the country so that people have access to services. And Medicare cares about having trained providers to care for all the Medicare beneficiaries across the country.

Terry Gerton So the Consolidated Appropriations Act of 2021, if we can take our minds back that far, put in some provisions maybe to try to address this misallocation of graduate medical education, and you’ve just published a report on that. Tell us about what you found in terms of the first three years of this program.

Leslie Gordon Our report is an interim report. The Consolidated Appropriations Act of ’21 funded 1,000 new residency positions. That’s against a backdrop of over 163,000 medical residents nationwide. So it’s not a lot of positions, but it’s designed to alleviate situations where hospitals would like to expand, or smaller hospitals would like to open new medical residency programs, and to direct some attention to underserved communities.

Terry Gerton And as you looked at the distribution of those thousand positions, what did you find?

Leslie Gordon We found that the hospitals that were awarded positions were very similar to those that applied but were not awarded. We did that comparison to see whether there were any biases in how CMS was distributing these positions, or whether it followed the rules and categories set out in the CAA. And generally, the awardees and the non-awardees were similar in nature. There was an effort and an impetus to focus on underserved areas, particularly rural areas, as we noted earlier. And while a majority of positions did not go to rural areas, there was an emphasis on, and success in, funding the rural hospitals that applied: Ten rural hospitals applied and nine were awarded, compared with about 50% of the urban hospitals that applied.

Terry Gerton Sounds like this really didn’t get to the crux of putting more residents in rural hospitals.

Leslie Gordon It didn’t necessarily put more residents in rural hospitals; it put more residents everywhere, let me say that. It emphasized and allowed for more residency positions and programs in rural and underserved areas. But it’s a big pile to redistribute, and there were only a thousand pieces with which to do that.

Terry Gerton I’m speaking with Leslie Gordon. She’s director for Medicare at GAO. Are there particular issues that the smaller or rural hospitals faced in terms of applying for this program or really making the best use of the resources that could potentially be available?

Leslie Gordon In the course of our work, we talked to representatives from hospitals in all kinds of areas, including rural hospitals, as well as hospital associations. We heard that CMS’s use of the health professional shortage area (HPSA) score as the primary criterion for prioritizing awards got in the way for some smaller communities, and for hospitals training residents in rural areas or serving rural residents who travel outside their local area to seek care. The way in which it didn’t quite land for rural areas is that the HPSA score is based on a population-to-provider ratio. If you add one new doctor to a population of a thousand people, that can change the score a lot, as opposed to adding one new doctor to a population of 200,000 people. In that way, it wasn’t quite aligned with the goal of focusing on rural areas.
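
To make the ratio effect Gordon describes concrete, here is a minimal sketch in Python. The population and provider counts are illustrative assumptions, and the simple ratio is only a stand-in for the actual HPSA scoring formula, which weighs several additional factors.

```python
# Illustrative only: HPSA scoring uses a population-to-provider ratio as a
# core input (among other factors); these numbers are assumed, not real data.

def population_to_provider_ratio(population: int, providers: int) -> float:
    """People served per provider, the core input Gordon refers to."""
    return population / providers

# Small rural community: one added doctor halves the ratio.
rural_before = population_to_provider_ratio(1_000, 1)      # 1000.0
rural_after = population_to_provider_ratio(1_000, 2)       #  500.0

# Large urban area: one added doctor barely moves it.
urban_before = population_to_provider_ratio(200_000, 100)  # 2000.0
urban_after = population_to_provider_ratio(200_000, 101)   # ~1980.2

print(f"Rural: {rural_before:.0f} -> {rural_after:.0f} people per provider")
print(f"Urban: {urban_before:.0f} -> {urban_after:.1f} people per provider")
```

One new doctor in the small community cuts its ratio in half, which can move it out of shortage-area priority entirely, while the same addition in a large urban area changes the ratio by about 1%.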

Terry Gerton And so, are there changes that CMS could make to this distribution model in the last couple of years of this program to help address those shortcomings?

Leslie Gordon We provided this and other feedback to CMS in the course of our work. They have two more rounds to distribute. One of the things we also learned is that hospitals needed other funding to make good use of these additional residency spots. I think greater awareness of the upfront costs, the need to maintain accreditation, and the other challenges we highlight in our report will help CMS, and perhaps those who apply.

Terry Gerton If CMS does adjust the criteria or support, are there metrics that they should track to make sure that the changes are working?

Leslie Gordon CMS is tracking the metrics that were set out in the Consolidated Appropriations Act, and we will be looking at it and reporting again in 2027.

Terry Gerton So when you think about this over the next couple of years, are there things that Congress, or educators, or other folks should watch to gauge whether or not these new slots are meeting workforce goals? Are they helping advance the accreditation or the certification of young medical students?

Leslie Gordon I think the experience of awarding these positions helps highlight that funding alone won’t solve the problem of how medical residency is distributed. There’s a support infrastructure that needs to be there in terms of staffing, in terms of equipment, in terms of the types of experiences that medical residents need to have to be fully trained. We cover all of these things in our report, and I think this experience with the allocation of the thousand positions helps highlight all the infrastructure that’s needed to support medical residency training.

Terry Gerton Are there companion programs designed to address those infrastructure shortcomings?

Leslie Gordon The federal government actually has 72 health care workforce programs. Yes, that’s a lot of programs. And we have open recommendations from our prior work that HHS needs to examine the gaps in the workforce, take action to address those gaps, and communicate about them. We have other open recommendations noting that HHS doesn’t have the information necessary to identify and evaluate the cost-effectiveness of those 72 programs. So we do have open work, not directly related to this report, but our open recommendations focus on the need for better information and a truly comprehensive evaluation of the effectiveness of all of the workforce programs.

Terry Gerton And do you have a sense that CMS and HHS are taking on that task to sort of harmonize all of these programs so that they make sense and they are optimized for best outcomes?

Leslie Gordon They are making progress on our recommendations and we will continue to follow up to see how they progress.


U.S. health data is disappearing—with potentially serious consequences

Interview transcript:

Joel Gurin The work that we’re doing now is part of an effort led by the Robert Wood Johnson Foundation, which has become really concerned about the potential for some real disruptions to what you can think of as the public health data infrastructure. This is the data on all kinds of things: on disease rates, on social determinants of health, on demographic variables, data that’s really critical to understanding health in this country and working to improve it for all Americans. And we’ve seen a lot of changes over the last year that are very troubling. There have been attempts to make some of this data unavailable to the public. Some major research studies have been discontinued. There have been deep cuts to the federal staff responsible for collecting some of this data, and cuts to research funding overall, for example from the NIH. So it really adds up to a cross-cutting risk to the infrastructure of public health data that we’ve relied on for decades.

Terry Gerton Talk to us about why this data is so important, why it’s the government’s responsibility, maybe to keep it up to speed, and whether it’s a policy shift that’s driving this, or is it just individual actions?

Joel Gurin From what we can tell, it’s a number of policy decisions that are all related to how the Trump administration sees the president’s priorities and how it wants to implement them. So it’s not that we’ve seen a wholesale destruction of data, but we’ve seen a lot of targeted changes. Anything related to DEI, to diversity issues, to looking at health inequity: that’s at risk. Any kind of data related to environmental justice or climate justice: that’s at risk. Data related to the health of LGBTQ people, particularly trans individuals: that’s at risk. So we’re seeing the administration’s policy priorities playing out in how they relate to the collection of public health data. And this data is critical because, number one, some of these data collections are expensive to do and only the government can afford them. And federal data also has a credibility, as a kind of centralized source of information, that other studies don’t have. For example, the administration recently discontinued the USDA’s study of food insecurity, which is critical to tracking hunger in America. And it’s going to be especially important as SNAP benefits are cut back. There are other organizations and institutions that study hunger in America. The University of Michigan has a study, NORC has a study. But the federal study is the benchmark, and losing those benchmarks is what’s troubling.

Terry Gerton One of the recommendations, just to skip ahead, is that more states, localities and nonprofits collect this data if the federal government is not going to. But what does that mean for trust in the data? You mentioned that federal data is usually the gold standard. If we have to rely on a dispersed group of interested organizations to collect it, what happens to the reliability of the data and the trust in it?

Joel Gurin It’s a great question, and it’s one that we and a lot of other organizations are looking at now. One of the things that’s important to remember is that a lot of what we see as federal data actually begins with the states. It’s data that’s collected by the states and then fed up to federal agencies that aggregate it, interpret it and so on. So one of the questions people have now is, could we take some of that state data that already exists and collect it, aggregate it and study it in different ways, if the federal government is going to abdicate that role? There was some very interesting work during COVID, for example, when the Bloomberg Center for Government Excellence at Johns Hopkins pulled together data from all over the country on COVID rates, at a time when the CDC was not really doing that effectively, and their website really became the go-to source. So we have seen places where it’s possible to pull state data together in ways that have a lot of credibility and a lot of impact. Some of the issues are what the states really need to make that data collection effective. Regardless of what the federal government does with its own data, states need federal mandates to collect the data, or it won’t be collected. They need funding: About 80% of the CDC’s budget actually goes to state and local governments, and a lot of that is for data collection, so they need that funding stream to do the work. And they also need networks, which are starting to develop now, where they can share expertise and insights to make data work on a regional level.

Terry Gerton I’m speaking with Joel Gurin. He’s the president and founder of the Center for Open Data Enterprise. Well, Joel, then let’s back up a little bit and talk about the round table and the research that led into this paper. How did you do it and what were the key insights?

Joel Gurin So one of the things that our organization, the Center for Open Data Enterprise, or CODE, does is hold roundtables with experts who have different kinds of perspectives on data. And that’s what we did here with Robert Wood Johnson Foundation support. We pulled together a group of almost 80 experts in Washington last summer, and we led them through a highly facilitated, orchestrated set of breakout discussions. We also did a survey in advance, did some individual interviews with people, and did a lot of our own desk research. The result is a paper that we’ve just recently published on ensuring the future of essential health data for all Americans. You can find it on our website, odenterprise.org: If you go to our publications page and select the health section in the drop-down, you’ll find it right there, along with a lot of other op-eds and things we’ve published related to it. Putting out this paper was really the result of pulling together a lot of information from literally hundreds of pages of notes from those breakout discussions, as well as our own research and tracking everything we could see in the news. But one of the things I want to really emphasize, in addition to the analysis we’ve done of what’s happening and what some of the solutions could be (that’s a fairly lengthy paper, and hopefully a useful one), is that we’ve also put together an online resource hub of what we think are the 70 or so most important public health data sets. And I want to really stress this because we think it’s actually a model for how to look at some of the issues affecting federal data in a lot of areas. We found, by working with these 80 or so experts and doing additional research and surveying them and talking to them, that there’s a lot of commonality and common agreement on what kinds of data are really, really critical to public health and what those sources are. Once you know that, it becomes possible for advocates to argue for why we need to keep this data and how it needs to be applied. And it’s also possible to ask questions like, for this particular kind of data, could somebody other than the federal government collect it? And could we develop supplemental or even alternative sources? So we really feel that that kind of analysis, we hope, is a step forward in figuring out how to address these issues in a practical way.

Terry Gerton That’s really helpful and also a great prototype for, as you say, data in other areas across the federal government that may or may not be getting the visibility that they used to get. What were the key recommendations that come out of the paper?

Joel Gurin Well, we had recommendations on a couple of different levels. We had recommendations, as we talked about before, to really look at state and local governments as important sources of data. They are already, but could more be done with them? This includes, for example, not just government data collections the way they’re done now, but using community-based organizations to help collect data from the community in a way that ultimately serves communities. We’re also very interested in the potential of what are being called non-traditional data sources, like the analysis of social media data and other kinds of things that can give insights into health. But I think probably the single most important recommendations at the federal level are to continue funding for these critical data sources, to recognize how important they are, and to really recognize the principle that there’s an obligation to understand and improve health for all Americans, which means looking at data that you can disaggregate by demographic variables and so on. I want to say we have had some really positive signs, I think, from Congress, particularly on the overall issue of supporting health research. And when we talk about NIH research, remember some of that is lab medical research, but a lot of it is research on public health, on social factors, on behavioral factors, all of this kind of critical work. The president’s budget actually recommended a 40% cut in NIH funding, which is draconian. The Senate Appropriations Committee over the summer said, we actually do not want to do that, and in fact, we want to increase the NIH budget by a small amount. So I think what we’re seeing is a lot of bipartisan support in Congress for protecting research funding, which ultimately is the source of a lot of the data we need. Some of this is because it’s a shared value, and some of it is because those research dollars go to research institutions in congressional districts that representatives and senators want to see continue to be funded. So that basic fear a lot of us had a few months ago, that research was simply going to be defunded, may not come to pass. And I would hope that Congress continues that support, not only for research funding but for agencies like the National Center for Health Statistics or the Agency for Healthcare Research and Quality, which have been under threat, and really recognizes their importance and sustains them.

Terry Gerton One of the challenges we might face, even if Congress does appropriate back at the prior levels, is that much of the infrastructure has been reduced or eliminated, and that’s people and that’s ongoing projects. How long do you think it will take to kind of rebuild back up to the data collection level that we had before, if we do see appropriation levels back to what they were?

Joel Gurin I think that’s a really critical question. You know, early in the administration, 10,000 jobs at HHS were cut, about a quarter of those from the CDC. But there has been some pushback. There was an attempt during the shutdown to do massive layoffs in HHS and CDC. The courts ruled against that. So I’m hoping that we can prevent more of that kind of brain drain. It will take a while to restaff and really get everything up to speed, but we think it’s doable and we hope we can get on that path.


Inside the Biggest U.S. Civilian Agency’s Pentesting Strategy

By: Synack

The U.S. Department of Health and Human Services (HHS) draws on Synack’s trusted security researchers and smart pentesting platform to stay nimble in the face of fast-moving cyberthreats. 

With 84,000 federal employees, HHS is the largest U.S. civilian agency by spending, and its sheer size poses challenges when it comes to addressing the cyber talent gap and pentesting its most critical networks.

“We have an enormous footprint on the internet,” said Matthew Shallbetter, director of security design and innovation at HHS, during a webinar Wednesday hosted by Synack. “Across the board, HHS is both vast and well-known – and so a good target for troublemakers and hackers.” 

He cited constant cyberthreats to the National Institutes of Health, HealthCare.gov and the Centers for Disease Control and Prevention – some of the most recognizable federal research centers and government services. All those resources fall under HHS’s purview.

So how does the agency hire for mission-critical cybersecurity roles, stay on top of shifting zero-trust requirements and satisfy the need for continuous security testing?

Shallbetter shared his insights with Synack’s Scott Ormiston, a federal solutions architect who’s no stranger to the challenges facing public sector organizations globally.

With an estimated 2.72 million unfilled cybersecurity jobs worldwide, government agencies are struggling more than ever to meet diverse infosec hiring needs.  

“Attackers are responding so much faster today than they were even five years ago,” Ormiston pointed out. “In the time that a vulnerability is released to the public, within minutes of that release, attackers are out scanning your systems. If you don’t have enough skilled personnel to run a continuous testing program and to continuously be looking at your assets, how do you address that challenge?”

Here are a few themes and highlights from the webinar:

Continuous pentesting is a must

It can take weeks to spin up a traditional pentest to find and fix urgent software bugs. Meanwhile, bad actors almost immediately start scanning to exploit those same vulnerabilities, whether they’re blockbuster flaws like Log4j or lesser-known CVEs.

Against that backdrop, traditional pentesting clearly falls short. But is continuous pentesting realistic?

“The short answer is yes, because your adversaries are doing it every day: They’re continuously testing your environment,” Ormiston said.
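
In practice, the loop Ormiston describes is simple: re-test every asset on a schedule and surface only what is new since the last pass. Here is a minimal sketch in Python under that assumption; the scan() function is a hypothetical stand-in for whatever scanner or pentesting platform an agency actually uses, not Synack’s API.

```python
# Minimal continuous-testing loop (illustrative sketch, not a product API).
import time

def scan(asset: str) -> set[str]:
    """Hypothetical stand-in: return the set of finding IDs for an asset.
    A real implementation would invoke a scanner or pentest platform."""
    return set()

def continuous_test(assets: list[str], interval_seconds: int = 3600) -> None:
    """Re-scan every asset on a fixed interval and alert only on NEW
    findings, so known issues don't drown out fresh ones."""
    known: dict[str, set[str]] = {asset: set() for asset in assets}
    while True:
        for asset in assets:
            findings = scan(asset)
            new = findings - known[asset]
            if new:
                print(f"[ALERT] {asset}: {len(new)} new finding(s): {sorted(new)}")
            known[asset] = findings
        time.sleep(interval_seconds)
```

The design point is the diff against known findings: a continuous program is only useful if it separates newly exposed vulnerabilities from the backlog the team is already tracking.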

Shallbetter noted that HHS has its own set of pentesting teams that are centrally located and focus on high-value assets. But there isn’t enough in-house talent to keep up with regular testing, scanning and patching.

“If we could focus on what’s really, really important and test those [assets], we might have enough bodies,” he said. “But it’s really a challenge to try to patch vulnerabilities… The footprint never shrinks; it’s always expanding.” 

To augment his own agency’s workforce capabilities, Shallbetter pulls from Synack’s community of world-class researchers. The diverse members of the Synack Red Team (SRT) allow HHS security testing to keep up with rapid software development cycles and the unrelenting pace of digital transformation.

HHS led 196 assessments using Synack’s platform, adding up to over 45,000 hours of testing on its perimeter services as part of an established vulnerability disclosure process.

There’s no match for human insight

That adds up to a lot of actionable data.

“We really couldn’t have done the VDP [vulnerability disclosure program] the way we did… without using a centralized platform like Synack,” Shallbetter said. “The human insight was key.”

He pointed out that HHS has automated tools across the board to help developers weed out vulnerabilities and drive down risk.  

But over and over, SRT members would find more.

Shallbetter said his favorite examples are when a system owner engages the Synack Platform to validate that HHS has really fixed a vulnerability. “They ask for a retest and the researcher says, ‘Oh, I did X, Y, and Z, but I did it again…’ And the system owner says, ‘Wow, that’s really cool.’”

Those exchanges also build trust between the SRT community and HHS developers who appreciate researchers’ ability to find the vulnerabilities that matter, cutting through the background noise of automation. An average of 30 SRT members contribute their expertise to each HHS assessment, according to Shallbetter.

“When you put a bunch of humans on a target, even if it’s been scanned and pentested by an automated tool, you will find new problems and new issues,” he said.

Zero trust is no longer just a buzzword

The White House early this year unveiled its highly anticipated zero trust strategy, M-22-09, which set federal agencies on a path to achieve a slate of zero-trust principles.

Those five security pillars are identity, devices, networks, applications and workloads, and data.

“It’s great to have this architecture,” Ormiston said of M-22-09. “But this also means additional stress on a cyber workforce that’s under pressure.”

Zero trust is a “hot topic” at HHS, as Shallbetter noted.

“It doesn’t feel like a marketing term; people are really beginning to understand what it means and how to implement it in certain ways,” he said.

And pentesting has emerged as “a significant part” of meeting HHS’s zero trust goals. 

“I do think the scope and scale of technology now means the real vision for zero trust is possible,” he said. “For HHS, penetration testing has been an important part of speeding our deployment processes.”

Agencies have until the end of fiscal 2024 to meet the zero trust goals described in the White House memo.

In the meantime, Synack will continue working as a trusted partner with HHS, delivering on-demand security expertise and a premier pentesting experience.

“I love being able to sort of toss the schedule over the fence and say, ‘hey, Synack, we need four more [assessments], what are we going to do?’—and have it happen,” Shallbetter said.

Access the recording of the webinar here. To learn more about why the public sector deserves a better way to pentest, click here or schedule a demo with Synack here.
