
Social Media Addiction Lawsuit Moves Forward After Snap Reaches Settlement

22 January 2026 at 04:24

Snap settled before jury selection in a landmark social media addiction case, while Meta, ByteDance, and YouTube still face trial over “addictive design.”

The post Social Media Addiction Lawsuit Moves Forward After Snap Reaches Settlement appeared first on TechRepublic.

US senators demand answers from X, Meta, Alphabet, and others on sexualized deepfakes

15 January 2026 at 10:00
In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, several U.S. senators are demanding that the companies provide proof that they have "robust protections and policies" in place and explain how they plan to curb the rise of sexualized deepfakes on their platforms.

U.S. health data is disappearing—with potentially serious consequences

13 January 2026 at 16:32

Interview transcript:

Joel Gurin The work that we're doing now is part of an effort being led by the Robert Wood Johnson Foundation, which has become really concerned about the potential for some real disruptions to what you can think of as the public health data infrastructure. This is the data on all kinds of things: on disease rates, on social determinants of health, on demographic variables. It's data that's really critical to understanding health in this country and working to improve it for all Americans. And we've seen a lot of changes over the last year that are very troubling. There are attempts to make some of this data unavailable to the public. Some major research studies have been discontinued. There've been deep cuts to the federal staff responsible for collecting some of this data. And there have been cuts to research funding overall, for example from the NIH. So it really adds up to a cross-cutting risk to the infrastructure of public health data that we've relied on for decades.

Terry Gerton Talk to us about why this data is so important, why it's perhaps the government's responsibility to keep it up to speed, and whether it's a policy shift that's driving this or just individual actions.

Joel Gurin From what we can tell, it's, I would say, a number of policy decisions that are all related to how the Trump administration sees the president's priorities and how they want to implement those. So it's not that we've seen a wholesale destruction of data, but we have seen a lot of targeted changes. Anything related to DEI, to diversity issues, to looking at health inequity? That's at risk. Any kinds of data related to environmental justice or climate justice? That's at risk. Data related to the health of LGBTQ people, particularly trans individuals? That's at risk. So we're seeing these policy priorities of the administration playing out in how they relate to the collection of public health data. And this data is critical because, number one, some of these data collections are expensive to do and only the government can afford them. And also, federal data has a kind of credibility, as a kind of centralized source of information, that other studies don't have. For example, the administration recently discontinued the USDA's study of food insecurity, which is critical to tracking hunger in America. And it's going to be especially important as SNAP benefits are cut back. There are other organizations and institutions that study hunger in America. The University of Michigan has a study, NORC has a study. But the federal study is the benchmark. And losing those benchmarks is what's troubling.

Terry Gerton One of the recommendations, just to skip ahead, is that more states and localities and nonprofits collect this data if the federal government is not going to. But what does that mean for trust in the data? You mentioned that federal data is usually the gold standard. If we have to rely on a dispersed group of interested organizations to collect it, what happens both to the reliability of the data and to trust in the data?

Joel Gurin It's a great question, and it's one that we and a lot of other organizations are looking at now. One of the things that's important to remember is that a lot of what we see as federal data actually begins with the states. It's data that's collected by the states and then fed up to federal agencies that then aggregate it, interpret it and so on. So one of the questions people have now is, could we take some of that state data that already exists and collect it and aggregate it and study it in different ways, if the federal government is going to abdicate that role? There was some very interesting work during COVID, for example, when Johns Hopkins' Bloomberg Center for Government Excellence pulled together data from all over the country on COVID rates, at a time when the CDC was not really doing that effectively, and their website really became the go-to source. So we have seen places where it's possible to pull state data together in ways that have a lot of credibility and a lot of impact. Some of the issues are what the states really need to make that data collection effective. Regardless of what the federal government does with its own data, the states need mandates from the federal government to collect it, or it won't be collected. They need funding. About 80% of the CDC's budget actually goes to state and local governments, and a lot of that is for data collection, so they need that funding stream to do the work. And they also need networks, which are starting to develop now, where they can share expertise and share insights to make data work on a regional level.

Terry Gerton I’m speaking with Joel Gurin. He’s the president and founder of the Center for Open Data Enterprise. Well, Joel, then let’s back up a little bit and talk about the round table and the research that led into this paper. How did you do it and what were the key insights?

Joel Gurin So one of the things that our organization, the Center for Open Data Enterprise, or CODE, does is hold roundtables with experts who have different kinds of perspectives on data. And that's what we did here with Robert Wood Johnson Foundation support. We pulled together a group of almost 80 experts in Washington last summer, and we led them through a highly facilitated, orchestrated set of breakout discussions. We also did a survey in advance. We did some individual interviews with people. We did a lot of our own desk research. The result is a paper that we've just recently published on ensuring the future of essential health data for all Americans. You can find it on our website, odenterprise.org. If you go to our publications page and select the health section in the drop-down, you'll find it right there, along with a lot of other op-eds and things we've published related to it. Putting out this paper was really the result of pulling together a lot of information from literally hundreds of pages of notes from those breakout discussions, as well as our own research, as well as tracking everything that we could see in the news. But one of the things that I want to really emphasize, in addition to the analysis we've done of what's happening and what some of the solutions could be, which is a fairly lengthy paper and hopefully useful, is that we've also put together an online resource hub of what we think are the 70 or so most important public health data sets. And I want to really stress this because we think it's actually a model for how to look at some of the issues affecting federal data in a lot of areas. We found that by working with these 80 or so experts and doing additional research and surveying them and talking to them, there's a lot of commonality and common agreement on what kinds of data are really, really critical to public health and what those sources are. Once you know that, it becomes possible for advocates to argue for why we need to keep this data and how it needs to be applied. And it's also possible to ask questions like, for this particular kind of data, could somebody other than the federal government collect it? And could we develop supplemental or even alternative sources? So we really feel that that kind of analysis, we hope, is a step forward in figuring out how to address these issues in a practical way.

Terry Gerton That’s really helpful and also a great prototype for, as you say, data in other areas across the federal government that may or may not be getting the visibility that they used to get. What were the key recommendations that come out of the paper?

Joel Gurin Well, we had recommendations on a couple of different levels. We had recommendations, as we talked about before, to really look at state and local governments as important sources of data. They are already, but could more be done with those? This includes, for example, not just government data collections the way they're done now, but using community-based organizations to help collect data from the community in a way that ultimately serves communities. We're also very interested in the potential of what are being called non-traditional data sources, like the analysis of social media data and other kinds of things that can give insights into health. But I think probably the single most important recommendations at the federal level are to continue funding for these critical data sources, to recognize how important they are, and to really recognize the principle that there's an obligation to understand health and improve health for all Americans, which means looking at data that you can disaggregate by demographic variables and so on. I want to say we have had some really positive signs, I think, from Congress, particularly on the overall issue of supporting health research. And when we talk about NIH research, remember some of that is really lab medical research, but a lot of it is research on public health, research on social factors, research on behavioral factors, all of this kind of critical work. The president's budget actually recommended a 40% cut in NIH funding, which is draconian. The Senate Appropriations Committee over the summer said, we actually do not want to do that, and in fact, we want to increase the NIH budget by a small amount. So I think what we're seeing is there's a lot of support, bipartisan support, in Congress for protecting research funding that ultimately is the source of a lot of the data we need. Some of this is just because it's a shared value, and some of it is because those research dollars go to research institutions in congressional districts that representatives and senators want to see continue to be funded. So I think that basic fear that a lot of us had a few months ago, that research was simply going to be defunded, may not come to pass. And I would hope that Congress continues both the funding and the support not only for this research funding, but for agencies like the National Center for Health Statistics, or the Agency for Healthcare Research and Quality, which have been under threat, to really recognize their importance and sustain them.

Terry Gerton One of the challenges we might face, even if Congress does appropriate funding back at the prior levels, is that much of the infrastructure has been reduced or eliminated, and that's people and ongoing projects. How long do you think it will take to rebuild back up to the data collection level we had before, if we do see appropriation levels return to what they were?

Joel Gurin I think that’s a really critical question. You know, early in the administration, 10,000 jobs at HHS were cut, about a quarter of those from the CDC. But there has been some pushback. There was an attempt during the shutdown to do massive layoffs in HHS and CDC. The courts ruled against that. So I’m hoping that we can prevent more of that kind of brain drain. It will take a while to restaff and really get everything up to speed, but we think it’s doable and we hope we can get on that path.

The post U.S. health data is disappearing—with potentially serious consequences first appeared on Federal News Network.


The SNAP program is under pressure, and states are drowning in paper as new mandates kick in

Interview transcript: 

Terry Gerton We're going to talk about a very important program today: SNAP, the Supplemental Nutrition Assistance Program. We got a glimpse of just how important it is during the government shutdown, when those benefits were paused. You've worked very closely with the states' administration of this program. Tell us about some of the biggest challenges states have in administering SNAP.

Andrew Joiner Well, certainly. Look, it's the largest anti-hunger program in the U.S. Approximately 42 million Americans rely on this benefit to help put food on the table. And look, the most recent requirements are that you submit an application establishing your eligibility for this program, and the states administer that eligibility on behalf of the federal government. It's about a $100 billion entitlement program funded by taxpayers, so it's quite a large program serving quite a large number of Americans, and it does put food on the table. It's paid out on a monthly basis, and eligibility is administered by the states, which essentially have to go applicant by applicant, household by household, to ensure the right amount of benefit is paid to those individuals.

Terry Gerton The application process itself is pretty backward, maybe. It still runs on paper. And the One Big Beautiful Bill Act that passed last summer added additional application requirements. What do you see in terms of the strain that puts on the state agencies administering the program?

Andrew Joiner Well, we like to call it the big beautiful bottleneck. I think everyone wants to get this assistance out to the families that need it. But at the heart of this application eligibility process are documents supporting your eligibility. First, you have to submit proof of your identity, that you're a citizen and a resident of the state. You also have to submit your income, and your income eligibility determines the amount of benefit you get. So those are things like pay stubs and bank statements. If you're a head of household, you may have to submit utility bills. So it's quite a lot of paperwork that has to get processed and adjudicated to get the benefits. We've typically tried to use caseworkers along with what we call system integrators, large consulting companies that try to help administer this benefit. The reality is 44 states are failing what's called the payment error rate, that is, the percentage of errors you can make in your determination of the benefit. And the second part is that you need to pay the benefit within 30 days, and the average eligibility determination takes about 26 days. So it's taking quite a long time to get to payment. And 44 states are above the error rate threshold, which is 6%. The problem with that is the government can withhold, dollar for dollar, anything above 6% that you're paying incorrectly; they can withhold that from payments to the states. So it's quite a stressful environment, in that you've got a time period in which to pay this assistance and you need to pay it accurately, and both of those are stressful to the states.
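To make the withholding rule concrete, here is a minimal sketch of the arithmetic Joiner describes. The 6% threshold and the dollar-for-dollar withholding above it come from the interview; the function name and the sample figures are hypothetical, for illustration only.

```python
# Hypothetical sketch of the SNAP payment-error withholding rule described
# above: the federal government can withhold, dollar for dollar, erroneous
# payments above the 6% error-rate threshold. Figures are illustrative only.

ERROR_RATE_THRESHOLD = 0.06  # 6% allowable payment error rate

def estimated_withholding(total_benefits_paid: float, error_rate: float) -> float:
    """Dollars the federal government could withhold from a state."""
    excess_rate = max(0.0, error_rate - ERROR_RATE_THRESHOLD)
    return total_benefits_paid * excess_rate

# Example: a state that paid $2B in benefits with an 8% error rate could
# see the 2% excess of $2B, i.e. $40M, withheld.
print(f"${estimated_withholding(2_000_000_000, 0.08):,.0f}")  # $40,000,000
```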

Terry Gerton You've said that the problem is not the portal, it's the paper. The fact that this application process still happens on paper, with all the challenges that poses, seems like a tailor-made environment for an AI solution. How could we bring more technology into the process to streamline the evaluation of these applications?

Andrew Joiner Well, it's a great question, and that's what Hyperscience, the company that I run, is really focused on trying to do on behalf of the states. There's quite a huge human impact here. The CIOs, the folks at the HHS organizations, are essentially stressed, because twice the amount of paperwork is now flowing in. And if you're using caseworkers and system integrators to process that, you've essentially doubled your bill; that's how you scale labor. So AI can help reduce the administrative burden that the states are going to have to take on to handle twice the paperwork. But then you move to the caseworkers themselves, who are trying to review this paperwork and calculate the accurate payment of these benefits. They spend more than 80% of their time on essentially manual efforts and paperwork: correcting papers that get scanned in upside down, torn, or unclear, or that have messy handwriting. The pay stubs themselves are complicated tables, so calculating payroll deductions, all of that is complicated. Really, humans were not built to do this at scale, but AI, especially something like Hyperscience, which was purpose-built to read human-friendly information at scale, can really help reduce the errors humans make when they have to process 42 million of these on a monthly basis. So we can read all of the documents and make sure everything's in good order, that you've submitted the right identity documents, the right pay support information or head-of-household information, before the application gets sent back for corrections, which is typically a big delay in the process. Then we can accurately extract the information so the caseworkers aren't having to sift through messy handwriting or multiple languages; the AI can handle all that. And then what the caseworker can really focus on is helping the applicants understand if they're missing documentation, if there are gaps in their documentation, and what they need to do to meet the policy and get the payment more quickly. They can focus on those human aspects. And I think that's a win-win at the end of the day. At the end of the day, the reason there's twice the paperwork is that one side of the spectrum is trying to reduce fraud and make sure the benefit is going to the folks who are eligible for it. That's one side of the spectrum. The other side of the spectrum wants to make sure that an accurate amount of benefit is paid and is broadly accessible. And at the heart of this is: let's just get through the paperwork as efficiently as possible. This is an area where AI scales naturally, and it can really help the caseworkers, it can help the CIOs who are stressed, and it can also help the applicants who really want this benefit quickly.
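As a rough sketch of the kind of intake flow Joiner is describing, consider the outline below. This is not Hyperscience's actual API; every name, field, and threshold here is a hypothetical stand-in for the classify-extract-triage pattern he outlines.

```python
# Hypothetical sketch of an AI-assisted document intake flow: classify each
# scanned page, extract structured fields, and route incomplete or
# low-confidence cases to a human caseworker. Illustrative only.
from dataclasses import dataclass, field

REQUIRED_DOC_TYPES = {"identity", "pay_stub", "utility_bill"}

@dataclass
class Extraction:
    doc_type: str       # e.g. "pay_stub"
    fields: dict        # extracted key/value pairs (income, employer, ...)
    confidence: float   # model confidence in the extraction, 0..1

@dataclass
class CaseFile:
    applicant_id: str
    extractions: list = field(default_factory=list)

def classify_and_extract(page_image: bytes) -> Extraction:
    """Placeholder for the model-specific step: identify the document type
    and pull out structured fields, even from skewed or handwritten pages."""
    raise NotImplementedError

def triage(case: CaseFile) -> dict:
    """Route complete, high-confidence cases to benefit calculation;
    flag everything else for a human caseworker, listing what's missing."""
    present = {e.doc_type for e in case.extractions}
    missing = REQUIRED_DOC_TYPES - present
    low_conf = [e.doc_type for e in case.extractions if e.confidence < 0.9]
    if missing or low_conf:
        return {"route": "caseworker", "missing": sorted(missing),
                "needs_review": low_conf}
    return {"route": "auto_calculate", "missing": [], "needs_review": []}
```

The point of a flow like this is the one Joiner makes: the model absorbs the mechanical reading, and the caseworker only sees the cases, and the specific gaps, that need human judgment.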

Terry Gerton I’m speaking with Andrew Joiner. He’s the CEO of Hyperscience. You mentioned earlier that 44 states are failing to meet the performance requirements embedded in the SNAP program. Why aren’t more of the states picking up on some of these technology solutions to help make the processing easier, reduce their backlog?

Andrew Joiner Well, there hasn't been a technology like Hyperscience that works across such a broad spectrum of documents, so the best way to administer this to applicants has historically been through caseworkers. It just happens to be a high-turnover job because it's quite stressful; there are time pressures, and attrition rates among caseworkers run about 30 to 40% nationwide. So it's quite high stress, and that has really been the only way historically. There hasn't been a good set of technologies that allowed states to administer this quickly and at scale. AI advancements have come so quickly that now, no matter what types of documents are being submitted, whether it's the application form, the identity documents that validate what was on the form, or the income documentation that's also specified on the form, we can do this kind of cross-document comparison at scale and with high accuracy because of the power of AI. And so I think most of the states, within the next 18 months, will have no choice but to adopt a technology like AI to assist with the administration of these programs. We've already done this at the Social Security Administration, in one of the largest document programs run by the government; we do it for over 250 million Americans to make sure that assistance is paid out when you submit your Social Security claims. And we also do it for the Department of Veterans Affairs. There are over 11 million veterans submitting complex documents to get their claims reimbursed for their medical care, a very important constituency. They used to wait three to six months to get their claims adjudicated. With the help of AI, we've now gotten it down to less than three days. So these are the types of things that I think most states will start adopting, because the results are measurable and there's quite a big human impact we can produce on the other side of it.
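As a toy illustration of the cross-document comparison described above, a validator might check that the income declared on the application form agrees with what was extracted from the pay stubs. The function, the field names, and the 5% tolerance are hypothetical, not Hyperscience's actual logic.

```python
# Hypothetical cross-document check: does income declared on the SNAP
# application form agree with income extracted from submitted pay stubs?
# The 5% tolerance below is an illustrative choice, not a real policy value.

def incomes_consistent(declared_monthly: float,
                       pay_stub_amounts: list[float],
                       tolerance: float = 0.05) -> bool:
    """Compare the form's declared monthly income against the sum of
    extracted pay-stub amounts, allowing a small relative tolerance."""
    extracted_total = sum(pay_stub_amounts)
    if declared_monthly == 0:
        return extracted_total == 0
    return abs(extracted_total - declared_monthly) / declared_monthly <= tolerance

# Example: two extracted biweekly pay stubs vs. a declared $2,100/month income.
print(incomes_consistent(2100.0, [1040.0, 1040.0]))  # True (within 5%)
```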

Terry Gerton You mentioned some other examples of casework in programs across the federal government, VA, Social Security, other sorts of benefit programs. If AI is deployed to really improve this caseworker customer interaction and streamline the process, what needs to come next? Are there new policies, new oversight functions, new governance mechanisms to make sure that we keep private information private and that information flows smoothly and that customers receive the benefits they’re entitled to?

Andrew Joiner It's a great question. The federal government is really leading the way in the ethical use of private information in delivering government services. So what we're able to provide the states is their own state instance, so that for the citizens and residents of that state who benefit from AI, their information never leaves the jurisdiction of the state; it runs locally. And we have safeguards in place where, if information is extracted from the program, we can redact it, and we can use synthetic information to help train the models. So there's actually quite a leadership position that the U.S. government and the states are able to take in the handling of information to help adjudicate some of these government processes. We've used it for logistics, for warfare, for a number of different purposes throughout the U.S. government. The U.S. government has some of the strictest safeguards in terms of security and governance of information, with the FedRAMP and now StateRAMP programs. And we've gone through that process with Hyperscience, as an example, to ensure that the leaders and information professionals who run these states know that the information, its handling, and the use of AI will meet the most stringent safeguards they've put in place.

The post The SNAP program is under pressure, and states are drowning in paper as new mandates kick in first appeared on Federal News Network.

FILE: A SNAP EBT information sign is displayed at a gas station in Riverwoods, Ill., Saturday, Nov. 1, 2025. (AP Photo/Nam Y. Huh, file)