Federal employees received high marks for their work. At the same time, the public wants more from them, and from federal agencies more broadly, especially around technology.
These are among the top findings of a survey of a thousand likely voters from last August by the Center for Accountability, Modernization and Innovation (CAMI).
Stan Soloway, the chairman of the board for CAMI, said the findings demonstrate at least two significant issues for federal executives to consider.
“It was very clear to us from the survey that the public actually has faith, to a certain extent, in public employees. The public also fully recognizes that the system itself is not serving them well,” Soloway said on Ask the CIO. “We found well over half of the folks that were surveyed said that they didn’t believe that government services are efficient. We found just under half of respondents had a favorable impression of government workers. And I think this is very much ‘I respect my local civil servant because I know what they do, but I have a lot of skepticism about government writ large.’”
CAMI, a non-partisan think tank, found that when it comes to government workers:
47% favorable vs 38% unfavorable toward government workers (+9% net)
Self-identified very conservative voters showed strong support (+30% net)
African Americans showed the highest favorability (+31% net)
Self-identified independents are the exception, showing negative views (-14% net)
At the same time, when it comes to government services, CAMI found 54% of the respondents believe agencies aren’t as efficient or as timely as they should be.
John Faso, a former Republican congressman from New York and a senior advisor for CAMI, said the call for more efficiencies and timeliness from citizens echoes a long-time goal of bringing federal agencies closer to the private sector.
“People, and we see this in the survey, look at what government provides and how they provide it, and then compare it to what they’re maybe accustomed to in the private sector economy,” Faso said. “Amazon is a prime example. You can sit home and order something, a food product, an item of clothing or something else you want for your house or your family, and oftentimes it’s there within a day or two. People are accustomed to getting that kind of service. People have an expectation that the government can do that. I think government is lagging, obviously, but it’s catching up, and it needs to catch up fast.”
Faso said it’s clear that a solid share of the blame for government inefficiency comes back to Congress. But at the same time, the CAMI survey demonstrated that there are things federal executives could do to address many of these long-standing challenges.
CAMI says respondents supported several changes to improve timely and efficient delivery of benefits:
40% preferred hiring more government workers
34% preferred partnering with outside organizations
Those self-identified as very liberal voters strongly favored more workers (+32% net)
Self-identified somewhat conservative voters preferred outside partnerships (-20% net)
Older voters (55+) preferred outside partnerships
“Whether it’s the Supplemental Nutrition Assistance Program (SNAP) or Medicaid and Medicare, the feds set all the rules for the administration and governance of the programs. So the first question you have to ask is, what is the federal role?” Soloway said. “Even though we have now shifted administrative responsibility for many programs to the states and, in some cases, the counties, and reduced by 50% the financial support for administration of these programs, the states have a lot to figure out and are somewhat panicked about it, because it’s a huge lift. The feds can’t just walk away. This is where we have issues of policy changes that are needed at the federal level, and we can talk about some of the ones that are desperately needed to give the states the flexibility to innovate.”
Soloway added this also means agencies have to break down long-established silos both around data and processes.
The Trump administration, for example, has prioritized data sharing across the government, especially to combat concerns around fraud. The Office of Management and Budget said in July it was supercharging the Do Not Pay list by removing the barriers to governmentwide data sharing.
Soloway said this is a prime example of where the private sector has figured out how to get different parts of their organization to talk to each other and where the government is lagging.
“What is the federal role in helping to break down the silos and integrate applications, and to a certain extent help with the administration of programs with like beneficiaries? The data is pretty clear that there’s a lot of commonality across multiple programs, and when you think about the number of different departments and the bureaucracy that actually control those programs, there’s got to be leadership at the federal level, both on technology and to expand process transformation, otherwise you’re not going to solve the problem,” he said. “The second thing is when we talk about issues like program integrity, there are ways you can combat fraud and also protect the beneficiaries. But too often, the conversations are either/or: any effort to combat fraud is seen as an effort to take eligible people off the rolls, and every effort to protect eligible people on the rolls is seen as just feeding into that. So that’s where the federal leadership comes in. Some of that is in technology, some of it’s in policy. Some of it’s going to be in resources, because it requires investments in technology across the board, state and federal.”
Respondents say technology can play a bigger role in improving the delivery of federal services.
CAMI says respondents offered strong support for using AI to improve government service delivery:
48% support vs 29% oppose using AI tools (net +19%)
Self-identified Republicans show stronger support than Democrats (+36% vs +7% net)
Men are significantly more supportive than women (+35% vs +3% net)
Support is strongest among middle-aged voters (30-44: +40% net)
Soloway said CAMI is sharing its survey findings with both Congress and the executive branch.
“We’re trying to get the conversations going and get the information to the right people. When we do that, we find, by and large, on both sides, there’s a lot of support to do stuff. The question is going to really be, where’s the leadership going to come from that will have enough credibility on both sides to push this ball forward?” Soloway said.
Faso added state governments also must play a big role in improving program delivery.
“You have cost sharing between the federal and state governments, and you have cost sharing in terms of the administrative burden to implement these programs. I think a lot of governors, frankly, are now really looking at themselves and saying, ‘How am I going to implement this?’” he said. “How do I collaborate with the federal government to make sure that we’re all rowing in the same direction in terms of implementing these requirements?”
Terry Gerton We’re going to talk about one of everybody’s favorite topics, FedRAMP. It’s been around for years, but agencies are still struggling to get modern tools. So from your perspective, why is the process so hard for software and service companies to get through?
Irina Denisenko It’s a great question. Why is it so hard to get through FedRAMP? It is so hard to get through FedRAMP because, at the end of the day, what is FedRAMP really here to do? It’s here to secure cloud software, to secure government data sitting in cloud software. You have to remember this all came together almost 15 years ago, which, if you remember 15 years ago, 20 years ago, was kind of the early days of all of us interacting with the internet. And we were still, in some cases, scared to enter our credit card details onto an online website. Fast forward to today, we pay with our face when we get on our phone. We’ve come a long way. But the reality is cloud security hasn’t always been the “of course, it’s secure.” In fact, it has been the opposite: of course it’s unsecure, and it’s the internet, and that’s where you go to lose all your data and all your information. And so, long story short, you have to understand that’s where the government is coming from. We need to lock everything down in order to make sure that whether it’s VA patient data, IRS data on our taxpayers, obviously anything in the DoW, any sort of information data there, all of that stays secure. And so that’s why there are hundreds of controls that are applied to cloud environments in order to make sure and double sure and triple sure that that data is secure.
Terry Gerton You lived the challenge first-hand with your own company. What most surprised you about the certification process when you tackled it yourself?
Irina Denisenko What surprised me most when we tackled FedRAMP ourselves for the first time was that even if you have the resources, and specifically if you have $3 million to spend (you know, $3 million burning a hole in your pocket doesn’t happen often), and you have staff on U.S. soil and you have the willingness to invest all of that in a three-year process to get certified, that is still not enough. What you need on top of that is an agency to say yes to sponsoring you. And when they say yes to sponsoring you, what they are saying yes to is taking on your cyber risk. And specifically what they’re saying yes to is spending half a million dollars of taxpayer money, of agency budget, typically using contractors, to do an initial security review of your application, and then to basically get married to you and do something called continuous monitoring, which is a monthly meeting that they’re going to have with you forever. That agency is going to be your accountability partner and ultimately the risk bearer of you, the software provider, to make sure you are burning down all of the vulnerabilities, all of these CVEs, every finding in your cloud environment on the timeline that you’re supposed to do that. And that ends up costing an agency about $250,000 a year, again, in the form of contractors, tooling, etc. That was the most surprising to me: that even as a cloud service provider who’s already doing business with JPMorgan Chase, you know, healthcare systems, you name it, even that’s not enough. You need an agency sponsor, because at the end of the day, it’s the agency’s data and they have to protect it. And so they have to do that triple assurance of, yes, you said you’re doing the security stuff, but let us confirm that you’re doing the security stuff. That was the most surprising to me, and why, really, ultimately, we started Knox Systems, because what we do at Knox is we enable the inheritance model. So we are doing all of that with our sponsoring agencies, of which we have 15. Knox runs the largest FedRAMP managed cloud. And what that means is we host the production environment of our customers inside of our FedRAMP environment across AWS, Azure and GCP. And our customers inherit our sponsors. So they inherit the authorization from the Treasury, from the VA, from the Marines, etc., which means that the Marines, the Treasury, the VA didn’t have to spend an extra half a million upfront and $250k ongoing with every new application that was authorized. They are able to get huge bang for their buck by just investing that authorization, that sponsorship, into the Knox boundary. And then Knox does the hard work to ensure the security and ongoing authorization and compliance of all of the applications that we bring into our environment.
Terry Gerton I’m speaking with Irina Denisenko. She’s the CEO of Knox Systems. So it sounds like you found a way through the maze that was shorter, simpler, less expensive. Is FedRAMP 20X helping to normalize that kind of approach? How do you see it playing out?
Irina Denisenko Great question. FedRAMP 20X is a phenomenal initiative coming out of OMB and GSA. And really the crux of it is all about machine-readable and continuous authorization. Today, when I talked about continuous monitoring, that’s a monthly meeting that happens. And I kid you not, we, as a cloud service provider (again, we secure Adobe’s environment and many others), come with a spreadsheet, an actual spreadsheet, that has all of the vulnerabilities listed from all the scans we’ve done over the last month, and anything that is still open from prior months. And we review that spreadsheet, that actual Excel document, in that meeting with our agencies, and then, after that meeting, we upload that spreadsheet into a system called USDA on the FedCiv side, and into eMASS on the DoW side, the DISA side. And then they, on their side, download that spreadsheet and put it into other systems. And I mean, that’s the process. I think no one is confused, or no one would argue that surely there’s a better way. And a better way would be a machine-readable way, whether that’s over an API, using a standard language like OSCAL. There are lots of ways to standardize, but it doesn’t have to be basically the equivalent of a clipboard and a pencil. And that’s what FedRAMP 20X is doing. It’s automating that information flow so that not only is it bringing down the amount of human labor that needs to be done to do all this tracking, but more importantly, this is cloud security. Just because you’re secure one second doesn’t mean you’re secure five seconds from now, right? You need to be actively monitoring this, actively reporting this. And if it’s taking you 30 days to let an agency know that you have a critical vulnerability, that’s crazy. You’ve got to tell them, you know, five minutes after you find out, or, to put a respectable buffer, a responsible buffer that allows you to mitigate and remediate before you notify more parties, maybe it’s a four-day buffer, but it’s certainly not 30 days. That’s what FedRAMP 20X is doing. We’re super excited about it. We are very supportive of it and have been actively involved in phase I and all subsequent phases.
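For a sense of what “machine-readable” could mean in practice, here is a minimal Python sketch of a continuous-monitoring finding expressed as structured data. The field names and structure are hypothetical, loosely inspired by OSCAL-style plan-of-action-and-milestones entries, and are not the actual FedRAMP 20X schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical, simplified continuous-monitoring finding. Field names are
# illustrative only; a real submission would follow the OSCAL POA&M schema
# and whatever transport FedRAMP 20X ultimately standardizes on.
finding = {
    "csp": "Example Cloud Service",
    "reported_at": datetime.now(timezone.utc).isoformat(),
    "vulnerabilities": [
        {
            "cve_id": "CVE-2025-0001",          # placeholder identifier
            "severity": "critical",
            "asset": "web-frontend-prod",
            "detected_at": "2025-12-01T04:15:00Z",
            "status": "remediation-in-progress",
            "planned_remediation_date": "2025-12-05",
        }
    ],
}

# Serialize to JSON so an agency system could ingest it automatically
# instead of downloading and re-keying a monthly spreadsheet.
print(json.dumps(finding, indent=2))
```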
Terry Gerton Right, so phase II is scheduled to start shortly in 2026. What are you expecting to see as a result?
Irina Denisenko Well, phase I was all about FedRAMP low, and phase II is all about FedRAMP moderate. And we expect that, you know, it’s going to really matter — FedRAMP moderate is realistically where most cloud service offerings sit, FedRAMP moderate and high. And so that’s really the one that FedRAMP needs to get right. What we expect to see and hope to see is agencies actually authorizing off of these new frameworks. The key is really going to be what shape FedRAMP 20X takes in terms of machine-readable reporting on the security posture of any cloud environment. And then of course, the industry will standardize around that. So we’re excited to see what that looks like, and also how much AI the agencies, GSA, OMB and ultimately FedRAMP, leverage, because there is a tremendous amount of productivity, but also security, that AI can provide. It can also introduce a lot of risks. And so we’re all collaborating with the agency, and we’re excited to see where they draw the bright red lines and where they embrace AI.
Terry Gerton So phase II is only gonna incorporate 10 companies, right? So for the rest of the world who’s waiting on these results, what advice do you have for them in the meantime? How can companies prepare better or how can companies who want to get FedRAMP certified now best proceed?
Irina Denisenko I think at the end of the day, it’s the inheritance model that Knox provides — and, you know, we’re not the only ones; actually, there are two key players: ourselves and Palantir. There’s a reason that large companies like Celonis, OutSystems, BigID and Armis, which was just bought by ServiceNow for almost $8 billion, all choose Knox, and there’s a reason Anthropic chose Palantir and Grafana chose Palantir: because regardless, FedRAMP 20X, Rev 5, doesn’t matter, there is a massive, massive premium put on getting innovative technology in the hands of our government faster. We have a window right now with the current administration prioritizing innovative technology and commercial off-the-shelf. You know, take the best out of Silicon Valley and use it in the government, or out of Europe, out of Israel, you name it, rather than build it yourself, customize it until you’re blue in the face and still get an inferior product. Just use the best of breed, right? But you need it to be secure. And we have this window as a country. We have a window as a country for the next few years here to get these technologies in. It takes a while to adopt new technologies. It takes a while to do a quantum leap, but I’ll give you a perfect example. Celonis, since becoming FedRAMPed on Aug. 19 with Knox — they had been trying to get FedRAMPed for five years — has implemented three agencies. And what do they do? They do process mining and intelligence. They’re an $800 million company that’s 20 years old that competes, by the way, head-on with Palantir’s core products, Foundry and Gotham and so on. They’ve implemented three agencies already to drive efficiency, to drive visibility, to drive process mining, to drive intelligence, to drive AI-powered decision-making. And that’s during the holidays, during a government shutdown. It’s speed that we’ve never seen before. If you want outcomes, you need to get these technologies into the hands of our agencies today. And so that’s why, you know, we’re such big proponents of this model, and also why our federal advisory board, which includes the DHS CISO, the DoW CIO and the VA CIO, is also supportive of it, because ultimately it’s about serving the mission and doing it now, rather than waiting for some time in the future.
Claude AI now connects with Apple Health, letting users talk through their fitness and health data to spot trends, understand metrics, and get plain-language insights instead of raw numbers.
The IRS is abandoning a customer service metric it’s been using for the past 20 years and replacing it with a new measurement that will more accurately reflect the public’s interactions with the tax agency, according to agency leadership.
The IRS is pursuing these changes as part of a broader shakeup of its senior ranks happening less than a week from the start of the tax filing season.
IRS Chief Executive Officer Frank Bisignano told employees in a memo obtained by Federal News Network that these changes will help the IRS achieve the “best filing season results in timeliness and accuracy.”
“At the heart of this vision is a digital-first taxpayer experience, complemented by a strong human touch wherever it is needed,” Bisignano wrote in the memo sent Tuesday.
In addition to overseeing day-to-day operations at the IRS, Bisignano also serves as the head of the Social Security Administration.
As part of these changes, Bisignano wrote that the IRS will replace its current measurement of customer service over the phone with “enterprise metrics that reflect new technologies and service channels.”
“These updates will allow us to more accurately capture how the IRS serves taxpayers today,” he wrote.
The IRS and the Treasury Department did not respond to requests for comment. Bisignano told the Washington Post that the new metrics will track the agency’s average speed to answer incoming calls, call abandonment rates and the amount of time taxpayers spend on the line with the agency.
He told the Post that the agency’s old phone metrics didn’t help the IRS with its mission of solving taxpayers’ problems — and that the agency is investing in technology to better service its customers.
“We’re constantly investing in technology. We constantly must reap the rewards of it,” Bisignano told the Post.
The IRS is specifically sunsetting its Customer Service Representative Level of Service metric. The agency has used this metric for more than 20 years.
But the National Taxpayer Advocate, an independent watchdog within the IRS, told Congress last year that this metric is “misleading” and “does not accurately reflect the experience of most taxpayers who call” the agency.
National Taxpayer Advocate Erin Collins wrote in last year’s mid-year report to Congress that this Level of Service (LOS) metric only reflects calls coming into IRS accounts management phone lines, which make up only about 25% of the agency’s total call volume.
Using the LOS metric, the IRS achieved an 88% level of phone service in fiscal 2024. But IRS employees actually answered less than a third of calls received during the 2024 filing season — both in terms of total calls, and calls to accounts management phone lines.
The agency calculates its LOS metric by dividing the number of phone calls answered by IRS employees by the number of calls routed to IRS staff.
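To see how an 88% LOS can coexist with far fewer answered calls overall, here is the arithmetic with made-up call volumes. These figures are illustrative only, not IRS data.

```python
# Hypothetical call volumes for one filing season (illustrative only).
total_calls_received = 40_000_000        # all calls to the IRS
calls_routed_to_staff = 5_000_000        # routed to a live assistor on accounts management lines
calls_answered_by_staff = 4_400_000      # actually answered by an employee

# LOS only looks at calls routed to staff on accounts management lines...
los = calls_answered_by_staff / calls_routed_to_staff                     # 0.88 -> "88%"

# ...while the share of all callers who reached an employee is far lower.
share_of_all_calls_answered = calls_answered_by_staff / total_calls_received  # 0.11

print(f"Reported LOS: {los:.0%}")
print(f"Share of all incoming calls answered by an employee: {share_of_all_calls_answered:.0%}")
```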
The IRS relies on this metric, as well as historical data on call volumes, to set targets for how many calls it has the capacity to answer, and to set hiring and training goals in preparation for each tax filing season.
Collins wrote that the LOS metric has become a proxy for the level of customer service taxpayers can expect from the IRS. But she told lawmakers that using this metric to drive taxpayer service decisions “is akin to letting the tail wag the dog.”
“The LOS is a check-the-box measure that fails to gauge the taxpayer’s telephone experience accurately and fails even to attempt to gauge the taxpayer experience in other important areas,” Collins wrote. “Yet because the IRS has adopted it as its primary measure of taxpayer service, sacrifices are made in other areas to boost the LOS as much as possible.”
Besides overhauling IRS call metrics, Bisignano announced a new leadership team at the agency.
As reported by the Associated Press, Gary Shapley, a whistleblower who testified about investigations into Hunter Biden’s taxes and who served as IRS commissioner for just two days last year, has been named deputy chief of the agency’s criminal investigation division.
According to Bisignano’s memo, Guy Ficco, the chief of the agency’s criminal investigation division, is retiring and will be replaced by Jarod Koopman, who will also continue to serve as the agency’s chief tax compliance officer.
Terry Gerton Gartner’s just done a new survey that’s very interesting around how citizens perceive how they should share data with the government. Give us a little bit of background on why you did the survey.
Mike Shevlin We’re always looking at, and talk to people about, doing some “voice of the customer,” those kinds of things as [government agencies] do development. This was an opportunity for us to get a fairly large sample voice-of-the-customer response around some of the things we see driving digital services.
Terry Gerton There’s some pretty interesting data that comes out of this. It says 61% of citizens rank secure data handling as extremely important, but only 41% trust the government to protect their personal information. What’s driving that gap?
Mike Shevlin To some extent, we have to separate trust in government from the security pieces. You know, if we looked strictly at “do citizens expect us to secure their data?” you know, that’s up in the 90% range. So we’re really looking at something a little bit different with this. We’re looking at, and I think one of the big points that came out of the survey is, citizens’ trust in how government is using their data. To think of this, you have to think about kind of the big data. So big data is all about taking a particular dataset and then enriching it with data from other datasets. And as a result, you can form some pretty interesting pictures about people. One of the things that jumps to mind for me, and again, more on the state and local level, is automated license plate readers. What can government learn about citizens through the use of automated license plate readers? Well, you know, it depends on how we use them, right? So if we’re using it and we’re keeping that data in perpetuity, we can probably get a pretty good track on where you are, where you’ve been, the places that you visit. But that’s something that citizens are, of course, concerned about their privacy on. So I think that the drop is not between, are you doing the right things to secure my data while you’re using it, but more about, okay, are you using it for the right purposes? How do I know that? How do you explain it to me?
Terry Gerton It seems to me like the average person probably trusts their search engine more than they trust the government to keep that kind of data separate and secure. But this is really important as the government tries to deliver easier front-facing interfaces for folks, especially consumers of human services programs like SNAP and homeless assistance and those kinds of things. So how important is transparency in this government use of data? And how can the government meet that expectation while still perhaps being able to enrich this data to make the consumer experience even easier?
Mike Shevlin When I come into a service, I want you to know who I am. I want to know that you’re providing me a particular service, that it’s customized. You know, you mentioned the search engine. Does Google or Amazon know you very well? Yeah, I’d say they probably know you better than the government knows you. So my expectation is partly driven out of my experience with the private sector. But at the same time, particularly since all the craze around generative AI, citizens are now much more aware of what else data can do, and as a result, they’re looking for much more control around their own privacy. If you look, for example, at Europe with GDPR, they’ve got some semblance of control. I can opt out. I can have my data removed. The U.S. has an awful lot of privacy legislation, but nothing as overarching as that. We’ve got HIPAA. We’ve got protections around personally identifiable information. But we don’t have something as overarching as what they have in, say, Spain. In Spain, if I deal with the government, I can say, yes, I only want this one agency to use my data and I don’t want it going anywhere else. We don’t have that in the U.S. I think it’s something that is an opportunity for government digital services to begin to make some promises to citizens and then fulfill those promises or prove that they’re fulfilling those promises.
Terry Gerton I’m speaking with Mike Shevlin. He’s senior director analyst at Gartner Research. Well, Mike, you introduced AI to the conversation, so I’m going to grab that and privacy. How does AI complicate trust and what role does explainable AI play here, in terms of building citizen trust that their privacy will be protected?
Mike Shevlin I think AI complicates trust in part from generative AI and in part from our kind of mistrust in computers as a whole, as entities, as we start to see these things become more human-like. And that’s really, I think, the big thing that generative AI did to us — now we can talk to a computer and get a result. The importance of the explainable AI is because what we’ve seen is these answers aren’t right from generative AI. But that’s not what it’s built for. It’s built to make something that sounds like a human. I think the explainable AI part is particularly important for government because I want to know as a citizen, if you’re using my data, if you’re then running it through an AI model and coming back with a result that affects my life, my liberty, my prosperity, how do I know that that was the right answer? And that’s where the explainable AI pieces really come into play. Generative AI is not going to do that, at least not right now, they’re working on it. But it’s not, because it builds its decision tree as it evaluates the question, unlike some of the more traditional AI models, the machine learning or graph AI, where those decision trees are pre-built. So it’s much easier to follow back through and say, this is why we got the answer we did. You can’t really do that right now with gen AI.
Terry Gerton We’re talking to folks in federal agencies every day who are looking for ways to deploy AI, to streamline their backlogs, to integrate considerations, to flag applications where there may be actions that need to be taken, or pass through others that look like they’re clear. From the government’s perspective, how much of that needs to be explained or disclosed to citizens?
Mike Shevlin That’s one of the things I really like about the GDPR: It lays out some pretty simple rules around what’s the risk level associated with this. So for example, if the government is using AI to summarize a document, but then someone is reviewing that summary and making a decision on it, I have less concern than I have if that summary becomes the decision. So I think that’s the piece to really focus on as we look at this and some of the opportunities. Gartner recommends combining AI models, and this will become even more important as we move into the next era of agentic AI or AI agents, because now we’re really going to start having the machines do things for us. And I think that explainability becomes really appropriate.
Terry Gerton What does this mean for contractors who are building these digital services? How can they think about security certifications or transparency features as they’re putting these new tools together?
Mike Shevlin The transparency features are incumbent upon government to ask for. The security pieces, you know, we’ve got FedRAMP, we got some of the other pieces. But if you look at the executive orders on AI, transparency and explainability are one of the pillars that are in those executive orders. So, certainly, government entities should be asking for some of those things. I’m pulling from some law enforcement examples, because that’s usually my specific area of focus. But when I look at some of the Drone as a First Responder programs, and I think it was San Francisco that just released their “here’s all the drone flights that we did, here’s why we did them,” so that people can understand: Hey, yeah, this is some AI that’s involved in this, this is some remote gathering, but here’s what we did and why. And that kind of an audit into the system is huge for citizen confidence. I think those are the kinds of things that government should be thinking about and asking for in their solicitations. How do we prove to citizens that we’re really doing the right thing? How can we show them that if we say we’re going to delete this data after 30 days, we’re actually doing that?
Terry Gerton So Mike, what’s your big takeaway from the survey results that you would want to make sure that federal agencies keep in mind as they go into 2026 and they’re really moving forward in these customer-facing services?
Mike Shevlin So my big takeaway is absolutely around transparency. There’s a lot to be said for efficiency, there’s a lot to be said for personalization. But I think the biggest thing that came from this survey for me was, we all know security is important. We’ve known that for a long time. Several administrations have talked about it as a big factor. And we have policies and standards around that. But the transparency pieces, I think, we’re starting to get into that. We need to get into that a little faster. I think that’s probably one of the quickest wins for government if we can do that.
Glassnode says XRP is slipping back into a cost-basis configuration last seen in February 2022, with newer buyers accumulating at levels that leave a prior cohort “top” increasingly underwater, an on-chain setup that can shape sell pressure around key price zones.
In a note shared Monday via X, the analytics firm pointed to a rotation in realized prices by age band. “The current market structure for XRP closely resembles February 2022,” Glassnode wrote. It added that “psychological pressure on top buyers builds over time,” framing the current tape as one where patience is being tested rather than rewarded.
What This Means For XRP Price
The firm’s core observation is that wallets active in the short-term window, roughly the 1-week to 1-month cohort, are accumulating below the cost basis of holders in the 6-month to 12-month band. In practice, that means newer demand is stepping in at prices that are cheaper than what a meaningful slice of mid-term holders paid.
That relationship matters because cohorts tend to behave differently when price revisits their cost basis. When spot trades below a cohort’s realized price, that cohort is, on average, underwater. If the market rallies back toward that level, some of that supply can become eager to de-risk into breakeven, creating overhead liquidity that can cap upside until it is absorbed.
Glassnode’s “Realized Price by Age” chart (7-day moving average) visualizes this dynamic by plotting cohort realized prices against spot. The standout feature is the gap between shorter-term and 6–12 month cost bases during the most recent consolidation, echoing the firm’s February 2022 comparison.
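As a rough sketch of the cohort math behind a realized-price-by-age view, the snippet below computes a volume-weighted acquisition price per age band from made-up holdings and compares it with an assumed spot price. This illustrates the concept only; it is not Glassnode's data or methodology.

```python
from collections import defaultdict

# Hypothetical holdings: (age_in_days, xrp_amount, acquisition_price_usd).
holdings = [
    (10, 1_000_000, 1.85),
    (25, 2_500_000, 1.92),
    (200, 3_000_000, 2.60),
    (320, 1_500_000, 2.45),
]

bands = {"1w-1m": (7, 30), "6m-12m": (180, 365)}
spot = 1.98  # assumed spot price for the example

totals = defaultdict(lambda: [0.0, 0.0])  # band -> [total_cost_usd, total_coins]
for age, amount, price in holdings:
    for band, (lo, hi) in bands.items():
        if lo <= age <= hi:
            totals[band][0] += amount * price
            totals[band][1] += amount

# A band whose realized price sits above spot is, on average, underwater.
for band, (cost, coins) in totals.items():
    realized = cost / coins
    status = "underwater" if realized > spot else "in profit"
    print(f"{band}: realized price ${realized:.2f} vs spot ${spot:.2f} ({status})")
```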
With the XRP price again trading slightly below the $2 mark, a Glassnode post from Nov. 24, 2025, also comes back into focus. The firm quoted that earlier X post, in which it singled out $2 as the level where this cohort stress has been most visible in flows. “The $2.0 level remains a major psychological zone for Ripple holders,” the firm said. “Since early 2025, each retest of $2 saw $0.5B–$1.2B per week in losses,” a reminder that many holders have been exiting at a loss as price revisits that handle.
Those realized loss estimates are a key qualifier: they suggest that $2 is not just a chart level, but a behavior level, where spending decisions change and where capitulation (or forced de-risking) can cluster.
Notably, in February 2022, XRP put in a sharp round-trip: after slipping to about $0.6034 on Feb. 2, it ripped higher to the month’s peak near $0.8758 on Feb. 8, then rolled over into the back half of the month as macro risk accelerated. Then, XRP was back around $0.70 by Feb. 23–24 (roughly 20% off the Feb. 8 high), before bouncing into month-end near $0.7856 on Feb. 28.
The late-month downdraft coincided with the Russia–Ukraine escalation and the Feb. 24 invasion, which hit risk assets broadly and pushed major crypto lower intraday, consistent with the risk-off impulse seen across the entire crypto market.
The crypto market faced a sharp selloff overnight as renewed trade conflict fears between the United States and the European Union shook global risk sentiment. Bitcoin and major altcoins reversed recent gains, with traders reacting to fresh tariff headlines and the possibility of escalating economic retaliation on both sides of the Atlantic. While crypto is often viewed as a separate market, this move once again showed how quickly digital assets can behave like high-beta risk trades when macro uncertainty spikes.
According to analyst Darkfost, the liquidation impact was immediate and aggressive. More than $800 million worth of leveraged positions were wiped out in a matter of hours, including roughly $768 million in long liquidations. The scale of long closures suggests that traders were positioned for continuation to the upside, but were caught offside as prices rolled over sharply.
What stood out most was where the damage occurred. Darkfost noted that Hyperliquid recorded the largest share of forced liquidations, with $241 million, while Bybit followed closely with $220 million. The wave of liquidations appears partly tied to the announcement of new tariffs targeting Europe, which triggered an equally fast response from EU policymakers, reigniting the broader “trade war” narrative across markets.
CME Opens the Door to Fresh Volatility
Darkfost warns that the timing of this selloff matters as much as the liquidation size. As soon as CME trading opened, Bitcoin saw a sharp downside move, suggesting that institutional flows and macro-linked positioning played a direct role in the shakeout. In past risk-off episodes, the CME open has often acted like a volatility trigger, especially when markets are already fragile, and leverage is elevated across major exchanges.
This is why the next few hours are critical. The same type of move could easily repeat at the opening of the US markets, where liquidity conditions and headline sensitivity tend to amplify reactions. If sellers press again, the market could see another cascade of forced closures, particularly in high-beta altcoins that remain vulnerable after the overnight wipeout.
The message is straightforward: stay cautious and avoid overexposure to leverage while the macro backdrop remains unstable. Liquidations can create sharp bounces, but they can also reset momentum quickly if fear spreads across risk assets.
Darkfost adds that attention should remain on incoming political updates. The market is now trading the narrative, not just the chart. Further statements could arrive at any moment, and as history has shown, Trump often delivers market-moving headlines right in the middle of the weekend.
Bitcoin Holds Fragile Rebound As Crypto Tests Macro Nerves
Bitcoin is trading near $93,100 after a sharp rejection from the $96,000–$97,000 supply zone. The chart shows BTC still struggling below key moving averages, with momentum capped by the declining blue trendline overhead. This reinforces the idea that the latest upside attempt was more of a rebound than a clean trend reversal.
Structurally, price is forming higher lows after the violent breakdown from the $110,000 area. However, the rebound remains vulnerable as long as BTC stays trapped beneath resistance and fails to reclaim the mid-$90,000s with conviction. The recent candles also highlight hesitation, with wicks suggesting aggressive selling into strength.
The red long-term moving average is rising near the low-$90,000s, acting as a potential dynamic support zone. If Bitcoin holds above that level, it keeps the recovery structure intact and prevents a deeper reset toward prior liquidity pockets.
This matters for the broader crypto market. When BTC remains range-bound under resistance, altcoins usually struggle to sustain rallies and become more sensitive to liquidation-driven volatility. Risk appetite can return quickly, but it requires Bitcoin to break above resistance and hold. Until then, crypto remains in a fragile stabilization phase, not a confirmed bullish continuation.
In this episode, we explore Amazon Ring’s newly introduced Familiar Faces feature that utilizes AI for facial recognition. We discuss the convenience of identifying familiar people at your doorstep, the privacy concerns it raises, and the legal implications surrounding biometric data. Learn about how this feature works, potential inaccuracies, and privacy laws in certain U.S. […]
Varonis found a “Reprompt” attack that let a single link hijack Microsoft Copilot Personal sessions and exfiltrate data; Microsoft patched it in January 2026.
On Thursday, the Wikimedia Foundation announced API access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral AI, expanding its effort to get major tech companies to pay for high-volume API access to Wikipedia content, which these companies use to train AI models like Microsoft Copilot and ChatGPT.
The deals mean that most major AI developers have now signed on to the foundation's Wikimedia Enterprise program, a commercial subsidiary that sells high-speed API access to Wikipedia's 65 million articles at higher speeds and volumes than the free public APIs provide. Wikipedia's content remains freely available under a Creative Commons license, but the Enterprise program charges for faster, higher-volume access to the data. The foundation did not disclose the financial terms of the deals.
The new partners join Google, which signed a deal with Wikimedia Enterprise in 2022, as well as smaller companies like Ecosia, Nomic, Pleias, ProRata, and Reef Media. The revenue helps offset infrastructure costs for the nonprofit, which otherwise relies on small public donations while watching its content become a staple of training data for AI models.
The order, first proposed a year ago, bans GM from collecting and then selling geolocation data to third parties, like data brokers and insurance companies.
If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.
Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.
Trend 1: Governance moves from manual to machine-assisted
Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.
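A minimal sketch of what machine-assisted governance can look like at the smallest scale: auto-profiling a table to generate a catalog-ready metadata record. The field names here are illustrative, not any particular catalog's schema.

```python
import pandas as pd

def profile_dataset(df: pd.DataFrame, name: str) -> dict:
    """Auto-generate a basic metadata record for a catalog entry."""
    return {
        "dataset": name,
        "row_count": len(df),
        "columns": {
            col: {
                "dtype": str(df[col].dtype),
                "null_rate": round(float(df[col].isna().mean()), 3),
                "distinct_values": int(df[col].nunique()),
            }
            for col in df.columns
        },
    }

# Toy data standing in for an agency table.
df = pd.DataFrame({"case_id": [1, 2, 3, 4], "status": ["open", "open", "closed", None]})
print(profile_dataset(df, "benefit_cases"))
```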
Trend 2: Data collaboration platforms replace tool sprawl
2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.
Trend 3: Federated architectures become the federal standard
Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric — one that links but doesn’t force consolidation — will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.
Trend 4: Integration becomes AI-first
Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.
Trend 5: Data storage goes AI-native
Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
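A minimal sketch of the retrieval side of that pattern, using toy vectors and NumPy in place of a real embedding model and vector database:

```python
import numpy as np

# Toy document embeddings (in practice these come from an embedding model
# and live in a vector database alongside the source objects).
documents = {
    "benefits_faq.pdf": np.array([0.9, 0.1, 0.0]),
    "cyber_policy.docx": np.array([0.1, 0.8, 0.2]),
    "grants_guide.pdf": np.array([0.2, 0.1, 0.9]),
}
query_embedding = np.array([0.85, 0.15, 0.05])  # embedding of the user question

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieve the most similar documents to ground a generative model's answer
# (the "retrieval" half of retrieval-augmented generation).
ranked = sorted(documents.items(), key=lambda kv: cosine(query_embedding, kv[1]), reverse=True)
for name, emb in ranked[:2]:
    print(name, round(cosine(query_embedding, emb), 3))
```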
Trend 6: Real-time data quality becomes non-negotiable
Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
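A minimal sketch of the kind of automated check this implies, using a simple deviation test on pipeline metrics; real systems would use richer models and streaming infrastructure.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for the automated data-quality checks an
    agency pipeline might run on row counts, null rates or field averages.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1e-9
    return [(i, v) for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical daily record counts from an ingest job; the final value
# collapses, which should block downstream AI workloads from consuming it.
daily_row_counts = [98_500, 101_200, 99_800, 100_400, 97_900, 12_300]
print(flag_anomalies(daily_row_counts, threshold=2.0))  # -> [(5, 12300)]
```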
Trend 7: Zero trust expands into data access and auditing
As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
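A minimal sketch of the policy-as-code idea: rules live as data, every access decision is evaluated at request time and written to an audit log. Real deployments would use a policy engine such as OPA rather than hand-rolled checks like this.

```python
# Toy policy-as-code check. Policies and sensitivity levels are illustrative.
POLICIES = [
    {"role": "data-scientist", "dataset": "claims", "purpose": "model-training", "max_sensitivity": "moderate"},
    {"role": "auditor", "dataset": "claims", "purpose": "audit", "max_sensitivity": "high"},
]
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def authorize(role, dataset, purpose, sensitivity):
    """Allow access only if an explicit policy covers the request."""
    for p in POLICIES:
        if (p["role"], p["dataset"], p["purpose"]) == (role, dataset, purpose) \
                and LEVELS[sensitivity] <= LEVELS[p["max_sensitivity"]]:
            return True
    return False

audit_log = []
request = ("data-scientist", "claims", "model-training", "high")
decision = authorize(*request)
audit_log.append({"request": request, "allowed": decision})
print(decision, audit_log[-1])   # denied: sensitivity exceeds the policy ceiling
```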
The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.
The bottom line: 2026 is the year of AI-ready data
In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of “collect and store” will be replaced by “integrate and activate.”
For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.
Seth Eaton is vice president of technology & innovation at Amentum.
The 7 Best Real-Time Stock Data APIs for Investors and Developers in 2026 (In-Depth Analysis & Comparison)
Many believe that to access high-quality financial data, you need to pay thousands of dollars for a Bloomberg terminal or settle for limited platforms like Yahoo Finance. The truth is different: today, there are powerful, affordable, and even free real-time stock data APIs you can integrate into your Python scripts, interactive dashboards, or algorithmic trading systems.
As W. Edwards Deming said:
“Without data, you’re just another person with an opinion.”
In this article, I present a practical comparison of the 7 best financial APIs on the market (with a focus on real-time stock data). I include:
Pros and cons of each API
Pricing plans (free tiers and paid options)
Key features and data coverage
Recommendations by profile (analyst, trader, developer, or enterprise)
Concrete use cases demonstrating each API
Comparison table (quick selection guide)
Frequently asked questions to address common doubts
Let’s dive in.
1. EODHD API (End-of-Day Historical Data)
Best for: Fundamental analysis, backtesting, and financial reports Website: eodhd.com
Key features:
Historical end-of-day (EOD) prices and intraday data (1m, 5m, 1h intervals)
Fundamental data (financial ratios, balance sheets, income and cash flow statements)
Corporate actions: dividends, stock splits, earnings, IPO data
Macroeconomic indicators and earnings calendars
Financial news API (with sentiment analysis)
Broad coverage: stocks, ETFs, indices, forex, and cryptocurrencies
Highlights: EODHD provides clear documentation with plenty of Python examples, and it combines both quantitative price data and fundamental data in one service. This makes it great for building dashboards or predictive models that require both historical prices and financial metrics. Its data consistency (handling of splits, ticker changes, etc.) is also highly regarded.
Pricing:
Free: 20 API requests per day (limited to basic end-of-day data) — useful for testing or small-scale scripts
Pro: Plans from ~$17.99 per month (for individual market packages) up to ~$79.99 per month for an all-in-one global data package. The paid tiers offer generous limits (e.g. 100,000 API calls/day) and full access to historical and real-time data.
Cons:
The free plan’s 20 calls/day is very limited, suitable only for trial runs or simple prototypes. Serious projects will need a paid plan.
Some advanced features (like extensive options data or certain international markets) may require higher-tier subscriptions.
Use case: Extract Apple’s dividend history over the past 5 years and calculate the dividend yield trend. (EODHD’s API can provide historical dividend payouts which you can combine with price data for this calculation.)
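Here's a minimal Python sketch of that use case. The endpoint path and field names follow EODHD's documentation as of this writing, so verify them against the current docs, and swap in your own API token.

```python
import requests
import pandas as pd

API_TOKEN = "YOUR_EODHD_TOKEN"   # replace with your own key
SYMBOL = "AAPL.US"

# Dividend history endpoint (path per EODHD's docs at the time of writing;
# double-check the current documentation before relying on it).
url = f"https://eodhd.com/api/div/{SYMBOL}"
params = {"api_token": API_TOKEN, "fmt": "json", "from": "2020-01-01"}
dividends = pd.DataFrame(requests.get(url, params=params, timeout=30).json())

# Sum cash dividends per year to see how the payout has trended.
dividends["date"] = pd.to_datetime(dividends["date"])
per_year = dividends.groupby(dividends["date"].dt.year)["value"].sum()
print(per_year)
# Dividing each year's payout by the year-end close (from the EOD prices
# endpoint) would give the dividend-yield trend.
```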
Personal recommendation: If you need a single comprehensive API for global stocks (prices + fundamentals + news), EODHD is an excellent choice. ✨ Get 10% off here to try it out.
2. Alpha Vantage
Best for: Beginners and quick prototypes that need built-in technical indicators Website: alphavantage.co
Key features:
Time series data for equities (daily, intraday down to 1-minute)
Technical indicators built-in (e.g. RSI, MACD, Bollinger Bands) — you can query indicator values directly via the API.
Crypto and Forex data support
Some sentiment and macroeconomic data (e.g. sector performance, economic indicators)
Highlights: Alpha Vantage is known for its ease of use and generous free tier for beginners. It’s one of the most popular starting points for developers learning to work with financial data. Uniquely, Alpha Vantage is an official vendor of Nasdaq market data, which speaks to its data reliability. The API responses are JSON by default, and the documentation includes examples that integrate well with Python and pandas.
Pricing:
Free: Up to 5 API calls per minute. Historically this worked out to roughly 500 calls per day, but as of late 2024 Alpha Vantage’s standard free limit is 25 calls per day, enforced alongside the 5-per-minute rate. This is sufficient for small applications or learning purposes, though heavy use will hit the limits quickly.
Premium: Paid plans starting from $29.99/month for higher throughput (e.g. 30+ calls/minute) and no daily cap. Higher tiers (ranging up to ~$199/month) allow dozens or hundreds of calls per minute for enterprise needs.
Cons:
Strict rate limits on the free tier. Hitting 5 calls/min means you often have to throttle your scripts or batch requests. For example, pulling intraday data for many symbols or calling many technical indicators will quickly require a paid plan.
Limited depth in some areas: fundamental data coverage is basic (company overviews, a few ratios) and not as extensive globally as some competitors.
Use case: Build an email alert system that triggers when a stock’s 14-day RSI drops below 30 (an oversold signal). Alpha Vantage’s technical indicators API can directly return the RSI for a given symbol, making this straightforward to implement without calculating RSI manually.
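Here's a minimal sketch of that alert logic. The RSI endpoint and response keys follow Alpha Vantage's documented format; the notification step is left as a stub for whatever email or SMS service you prefer.

```python
import requests

API_KEY = "YOUR_ALPHAVANTAGE_KEY"   # replace with your own key
SYMBOL = "AAPL"

# Query the built-in RSI indicator endpoint.
resp = requests.get(
    "https://www.alphavantage.co/query",
    params={
        "function": "RSI",
        "symbol": SYMBOL,
        "interval": "daily",
        "time_period": 14,
        "series_type": "close",
        "apikey": API_KEY,
    },
    timeout=30,
)
series = resp.json().get("Technical Analysis: RSI", {})

if series:
    latest_date = max(series)                      # YYYY-MM-DD keys sort lexicographically
    latest_rsi = float(series[latest_date]["RSI"])
    if latest_rsi < 30:
        # Hook your email/SMS notification of choice in here.
        print(f"ALERT: {SYMBOL} 14-day RSI is {latest_rsi:.1f} (oversold) as of {latest_date}")
    else:
        print(f"{SYMBOL} RSI {latest_rsi:.1f} on {latest_date}: no alert")
```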
3. Intrinio
Best for: Enterprise projects, advanced fundamental data, and large-scale financial applications Website: intrinio.com
Key features:
Extensive financial statement data: Intrinio provides detailed fundamentals — standardized and as-reported financials (income statements, balance sheets, cash flows) for thousands of companies. It’s very useful for deep fundamental analysis and modeling.
Real-time and historical stock prices: Access to real-time equity quotes (for supported exchanges) and long historical price data (often decades back). Intrinio also offers options data, ETFs, Forex, and other asset classes through various packages.
Data marketplace model: Intrinio has a variety of data feeds and endpoints (e.g., US stock prices, global equities, options, ESG data, etc.). You subscribe only to the feeds you need, which can be cost-efficient for specific use cases.
Developer tools: Clean REST API with robust documentation, SDKs in multiple languages, and even a real-time data streaming option for certain feeds. They also provide a sandbox environment and live chat support to help during development.
Highlights: Intrinio is known for high data accuracy and quality. It’s the go-to for many fintech startups and even institutions when building platforms that require reliable and up-to-date financial data. The breadth of APIs and endpoints is massive — from stock screeners to data on insider transactions. Intrinio’s website and product pages are very informative, and they even include an AI chatbot to help you find the data you need.
Pricing:
Free trial: Intrinio offers a free trial period for new users to test out the API with limited access. This is great for evaluating their data before committing.
Paid packages: Pricing is segmented by data type. For example, a US equities core package starts around $200/month (Bronze tier) for end-of-day prices and fundamentals. Real-time stock price feeds and expanded data (Silver/Gold tiers) cost more — e.g., U.S. equities Gold (with real-time quotes and full history) is about $800/month. Similarly, options data packages range from ~$150 up to $1600/month for real-time options feeds. Intrinio’s model is pay for what you need, which scales up to enterprise-level contracts for wide coverage.
Cons:
Not ideal for small projects or beginners: Intrinio’s offerings can be overkill for hobbyist use — the range of data is immense and the pricing is relatively high. There is no unlimited free tier, so after the trial you must budget for at least a few hundred dollars per month to continue using their data at any scale.
Complex pricing structure: Because of the package system (separate feeds for stocks, options, etc.), it may be confusing to figure out exactly which plan(s) you need, and costs can add up if you require multiple data types. It’s geared more toward startups, fintech companies, or professionals with a clear data strategy (as opposed to one-size-fits-all simple pricing).
Website account required: You’ll need to go through account setup and possibly consultation for certain datasets. It’s not as plug-and-play as some other services for quick experiments.
Use case: An investor relations platform could use Intrinio to automate financial report analysis — pulling in several years of standardized financials for dozens of companies to compare ratios and performance. Intrinio’s high-quality fundamentals and wide historical coverage make it ideal for such an application.
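Because Intrinio's endpoints vary by data package, the sketch below skips the fetch step (which you would do with Intrinio's REST API or SDK) and just shows the comparison logic once standardized financials are in hand. The figures are made up for illustration.

```python
import pandas as pd

# Standardized annual financials as they might look after being pulled from a
# fundamentals API such as Intrinio's (values here are made up for illustration).
rows = [
    {"ticker": "AAA", "year": 2023, "net_income": 1200, "revenue": 9800,  "total_equity": 6400},
    {"ticker": "AAA", "year": 2024, "net_income": 1350, "revenue": 10400, "total_equity": 7000},
    {"ticker": "BBB", "year": 2023, "net_income": 640,  "revenue": 7200,  "total_equity": 5100},
    {"ticker": "BBB", "year": 2024, "net_income": 580,  "revenue": 7500,  "total_equity": 5050},
]
df = pd.DataFrame(rows)

# Derive comparable ratios across companies and years.
df["net_margin"] = df["net_income"] / df["revenue"]
df["roe"] = df["net_income"] / df["total_equity"]

summary = df.pivot_table(index="ticker", columns="year", values=["net_margin", "roe"])
print(summary.round(3))
```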
4. Polygon.io
Best for: Real-time market data (especially U.S. stocks) and high-frequency trading apps Website: https://massive.com/
Key features:
Real-time price feeds: Polygon provides live tick-by-tick price data for U.S. stocks, options, forex, and crypto. It supports streaming via WebSockets, so you can get quotes and trades in real time with low latency.
Historical data down to ticks: You can access granular historical data, including full tick data and minute-by-minute bars for equities (often used for backtesting trading algorithms).
WebSockets & Streaming: Excellent WebSocket API for streaming live quotes, trades, and aggregates. This is crucial for building live dashboards or trading bots that react to market movements instantly.
Reference data & tools: Polygon also offers comprehensive reference data (company info, financials, splits/dividends, etc.) and endpoints like news, analyst ratings, and more. However, its core strength is market price data.
Highlights: Polygon.io stands out for performance and depth in the U.S. markets. If you need real-time stock prices or even need to stream every trade for a given stock, Polygon can handle it. Their documentation is well-structured and they have a developer-friendly interface with interactive docs. They also offer community resources and example code which make integration easier. Polygon’s pricing page clearly separates plans for different asset types, so you can pick what you need.
Pricing
Free: The free tier allows 5 API requests per minute and limited historical data (e.g., 2 years of daily data). Real-time streaming might be restricted or delayed on the free plan (often 15-minute delayed data for stocks). This tier is good for trying out the API or basic apps that don’t require extensive data.
Paid: Plans start at $29/month for higher call limits and more data access. For instance, Polygon’s “Starter” or “Developer” plans (around $29-$79/month) provide live data with certain limitations (like delayed vs real-time) and a cap on how far back you can fetch history. More advanced plans can go up to a few hundred per month for full real-time tick data and larger rate limits. (Polygon has recently rebranded some offerings under “Massive” but the pricing remains in this range for individual developers.)
Cons:
Primarily U.S.-focused: Polygon’s strength is U.S. stocks and options. If you need comprehensive data for international markets, you’ll need other APIs. Its coverage outside the U.S. (for equities) is limited, so it’s not a one-stop solution for global portfolios.
Costly for full real-time access: While entry plans are affordable, truly real-time professional data (especially if you need full tick data or entire market streaming) can become expensive. Higher-tier plans for real-time data (with no delay and high rate limits) can run into the hundreds per month, and certain data (like entire market breadth or entire options chains in real time) might require enterprise arrangements.
Limited fundamentals/news: Polygon has some fundamental data and news, but it does not offer the depth in these areas that more fundamentally-oriented APIs (like EODHD or FMP) do. It focuses on pricing data.
Use case: Stream live quotes for AAPL and MSFT using Polygon’s WebSocket API and display a live updating chart in a web app. With just a few lines of code, you can subscribe to the ticker feed and get real-time price updates that drive an interactive chart (great for a day-trading dashboard or a demo of live market data).
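Here is a rough sketch of that streaming setup in Python using the third-party websocket-client package. The socket URL, auth message, and "T." (trades) channel names follow Polygon's documented conventions, but verify them against the current docs and make sure your plan includes real-time WebSocket access.

```python
# A minimal sketch of streaming live trades for AAPL and MSFT from Polygon's WebSocket
# feed, using the `websocket-client` package (pip install websocket-client). Requires a
# plan with real-time streaming enabled; channel names and fields assumed from docs.
import json
import os
import websocket  # from the websocket-client package

API_KEY = os.environ["POLYGON_API_KEY"]  # assumes the key lives in an env var
SOCKET_URL = "wss://socket.polygon.io/stocks"

def on_open(ws):
    # Authenticate, then subscribe to trade events for the two tickers.
    ws.send(json.dumps({"action": "auth", "params": API_KEY}))
    ws.send(json.dumps({"action": "subscribe", "params": "T.AAPL,T.MSFT"}))

def on_message(ws, message):
    # Each message is a JSON array of events; print symbol and trade price.
    for event in json.loads(message):
        if event.get("ev") == "T":
            print(f"{event['sym']}: {event['p']}")

ws = websocket.WebSocketApp(SOCKET_URL, on_open=on_open, on_message=on_message)
ws.run_forever()  # in a real app, push these updates into your charting layer instead
```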
5. Alpaca Markets
Best for: Building trading bots and executing live trades (with data included) Website: alpaca.markets
Key features:
Commission-free stock trading API: Alpaca is actually a brokerage platform that provides APIs, so you can place real buy/sell orders for U.S. stocks with zero commissions via their API. This sets it apart from pure data providers.
Real-time and historical market data: Alpaca offers real-time price data (for stocks on the US exchanges) and historical data as part of its service. When you have a brokerage account, you get access to stock quotes and minute-level bars, etc., through the API.
Paper trading environment: For developers, Alpaca’s paper trading is a big plus — you can simulate trading with virtual money. You get the same API for paper and live trading, which is ideal for testing your algorithmic strategies safely.
Brokerage integration: You can manage orders, positions, and account info via API. This means you not only get data but can also automate an entire trading strategy (from data analysis to order execution) with Alpaca’s platform.
Highlights: Alpaca is a favorite for DIY algorithmic traders and hackathon projects because it lowers the barrier to entry for trading automation. With a few API calls, you can retrieve market data and send orders. It’s essentially an all-in-one trading service. The documentation is developer-centric, and there are official SDKs (Python, JS, etc.) as well as a vibrant community. Alpaca integrates with other tools (like TradingView, Zapier) and supports OAuth, making it easier to incorporate in different applications.
Pricing:
Free tier: You can use Alpaca’s core API for free. Creating an account (which requires U.S. residency or residency in certain other supported countries for live trading) gives you access to real-time stock data and the ability to trade with no monthly fee. Alpaca makes money when you trade (through channels such as payment for order flow), so the API and basic data are provided at no cost to developers.
Premium data plans: Alpaca does offer optional subscriptions for more advanced data feeds. For example, the free feed may be limited to IEX data or a slightly delayed consolidated (SIP) feed; if you need full real-time consolidated market data or extended history, they offer Data API subscriptions (for example, around $9/month for more history, with higher tiers for extras such as real-time news). These are add-ons; however, many users find the free data sufficient for starting out.
Cons:
Limited to U.S. stock market: Alpaca’s trading and data are focused on U.S. equities. You won’t get direct access to international stocks or other asset classes (except crypto, which Alpaca has added in a separate offering).
Requires KYC for live trading: If you plan to execute real trades, you must open a brokerage account with Alpaca, which involves identity verification and is only available in certain countries. Paper trading (demo mode) is available globally, but live trading has restrictions.
Data not as extensive as dedicated providers: While Alpaca’s included data is decent, it may not be as comprehensive (in terms of history or variety of technical indicators) as some standalone data APIs. It’s primarily meant to support trading rather than be a full analytics dataset.
Use case: Create a Python trading bot that implements a simple moving average crossover strategy (e.g., buy when the 50-day MA crosses above the 200-day MA, sell on the reverse crossover). The bot can use Alpaca’s data API to fetch the latest prices for your stock, compute moving averages, and Alpaca’s trading API to place orders when signals occur. You can even run this in paper trading first to fine-tune the strategy.
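A bare-bones version of that crossover logic might look like the following, assuming the classic alpaca-trade-api SDK and the paper-trading endpoint. Alpaca's newer alpaca-py SDK exposes a different interface, so treat this as a sketch to adapt rather than a drop-in implementation.

```python
# A rough sketch of a 50/200-day moving-average crossover check using the classic
# `alpaca-trade-api` SDK against Alpaca's paper-trading endpoint. Run it once per day
# (e.g., via cron); it places at most one paper order per run.
import os
from alpaca_trade_api import REST
from alpaca_trade_api.rest import TimeFrame

api = REST(
    key_id=os.environ["APCA_API_KEY_ID"],
    secret_key=os.environ["APCA_API_SECRET_KEY"],
    base_url="https://paper-api.alpaca.markets",  # paper trading, not live money
)

SYMBOL = "AAPL"

# Pull roughly a year of daily bars and compute both moving averages.
bars = api.get_bars(SYMBOL, TimeFrame.Day, limit=250).df
bars["ma50"] = bars["close"].rolling(50).mean()
bars["ma200"] = bars["close"].rolling(200).mean()

today, yesterday = bars.iloc[-1], bars.iloc[-2]

if yesterday["ma50"] <= yesterday["ma200"] and today["ma50"] > today["ma200"]:
    # Golden cross: the 50-day MA just crossed above the 200-day MA, so buy.
    api.submit_order(symbol=SYMBOL, qty=1, side="buy", type="market", time_in_force="day")
elif yesterday["ma50"] >= yesterday["ma200"] and today["ma50"] < today["ma200"]:
    # Reverse crossover: exit the position.
    api.submit_order(symbol=SYMBOL, qty=1, side="sell", type="market", time_in_force="day")
```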
6. Finnhub
Best for: A mix of data types (real-time prices, fundamentals, news, crypto) in one service Website: finnhub.io
Key features:
Real-time market data: Finnhub provides real-time quotes for stocks (free for US stocks via IEX), forex, and cryptocurrencies through its API. It’s a solid choice if you need live pricing across multiple asset classes.
Financial news with sentiment: There’s a news API that returns the latest news articles for companies or markets, including sentiment analysis scores. This is useful for gauging market sentiment or doing news-driven strategies.
Corporate and economic calendar data: Endpoints for earnings calendars, IPO schedules, analyst earnings estimates, and economic indicators are available. This variety helps investors and analysts stay on top of upcoming events.
Fundamental data: Finnhub offers some fundamentals (e.g., company profiles, financial statements, key metrics), as well as alternative data like COVID-19 stats, and even ESG scores. However, some of these are limited in the free tier.
Highlights: Finnhub is like a Swiss Army knife — it covers a broad range of financial data in one API. Many startups use Finnhub to power their apps because it’s relatively easy to use and the free tier is generous in terms of number of calls. Developers also appreciate that Finnhub’s documentation is straightforward and they have examples for how to use each endpoint. It’s particularly notable for its news and social sentiment features, which not all finance APIs offer.
Pricing:
Free: 60 API requests per minute are allowed on the free plan, which is quite high compared to most free plans. This includes real-time stock prices (US markets) and basic access to many endpoints. The free tier is for personal or non-commercial use and has some data limits (like certain endpoints or depth of history may be restricted).
Pro: Paid plans start from $49–50 per month for individual markets or data bundles. Finnhub’s pricing can be a bit modular; for example, real-time international stock feeds or more historical data might each be priced separately (often ~$50/month per market). They also have higher plans (hundreds per month) for enterprise or for accessing all data with fewer limits. For many users, the $50/month range unlocks a lot of additional data useful for scaling up an application.
Cons:
Limited free fundamentals: The free plan, while generous with call volume, does not include all data. For instance, certain fundamental data endpoints (like full financial statements or international market data) require a paid plan. This can be frustrating if you expect all features to work out of the box with the free API key. Essentially, you might hit “Access denied” for some endpoints until you upgrade.
Pricing can add up: If you need multiple data types (say US stocks real-time, plus international stocks, plus in-depth fundamentals, etc.), Finnhub’s costs can increase quickly because each component may be an add-on. In comparison, some competitors’ bundled plans might be more cost-effective for broad needs.
Website/UI is basic: Finnhub’s website isn’t the slickest and occasionally the docs have minor inconsistencies. This isn’t a huge issue, but it’s not as polished as some others like Alpha Vantage or Twelve Data in terms of user interface.
Use case: Pull the latest news headlines and sentiment for Tesla (TSLA) and display a “sentiment gauge”. With Finnhub’s news API, you can get recent news articles about Tesla along with a sentiment score (positive/negative). A developer could feed this into a simple app or dashboard to visualize how news sentiment is trending for the company.
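As a sketch, the official finnhub-python client can pull both pieces in a few lines; note that the aggregate sentiment endpoint may sit behind a paid tier depending on your plan.

```python
# A small sketch using the official `finnhub-python` client (pip install finnhub-python)
# to pull recent Tesla headlines plus an aggregate sentiment reading for a gauge widget.
# Sentiment data availability depends on your Finnhub plan.
import os
from datetime import date, timedelta

import finnhub

client = finnhub.Client(api_key=os.environ["FINNHUB_API_KEY"])

# Latest headlines for TSLA over the past week.
to_date = date.today()
from_date = to_date - timedelta(days=7)
news = client.company_news("TSLA", _from=from_date.isoformat(), to=to_date.isoformat())
for article in news[:5]:
    print(article["datetime"], article["headline"])

# Aggregate news sentiment to drive the "sentiment gauge".
sentiment = client.news_sentiment("TSLA")
print("Bullish %:", sentiment.get("sentiment", {}).get("bullishPercent"))
```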
7. Twelve Data
Best for: Quick visualizations, simple dashboards, and spreadsheet integrations Website: twelvedata.com
Key features:
Historical & real-time data for stocks, forex, crypto: Twelve Data covers many global markets, offering time series data at various intervals (intraday to daily) for equities, FX, and cryptocurrencies.
Built-in visualization tools: Uniquely, Twelve Data provides a web UI where you can quickly generate charts and indicators from their data without writing code. It’s useful for non-developers or for quickly checking data visually.
Easy integration with Python, Excel, etc.: They have a straightforward REST API and also provide connectors (like an Excel/Google Sheets add-in and integration guides for Python, Node, and other languages). This makes it appealing to analysts who might want data in Excel as well as developers.
Technical indicators and studies: Twelve Data’s API can return technical indicators, much like Alpha Vantage. They also support complex queries, such as retrieving multiple symbols in one call, and even offer some fundamentals for certain stocks.
Highlights: Twelve Data markets itself as very user-friendly. For someone who is building a simple web app or learning to analyze stock data, Twelve Data’s combination of an intuitive API plus a pretty interface for quick tests is attractive. Another highlight is their freemium model with credits — this can be flexible if your usage is light. They also have educational content and a responsive support team. Many users praise the quality of documentation, which includes example requests and responses for every endpoint (so you can see what data you’ll get).
Pricing:
Free (Basic): 8 API requests per minute (up to ~800/day). This free plan gives real-time data for US stocks, forex, and crypto, which is quite useful for small projects. However, certain features (like WebSocket streaming or extended history) are limited on the free tier.
Paid plans: Grow plan from $29/month, Pro plan from $79/month, and higher tiers up to Enterprise. The pricing is based on a credit system: each API call “costs” a certain number of credits (e.g., 1 credit per quote, more credits for heavier endpoints). Higher plans give you more credits per minute and access to more markets. For example, the Pro plan (~$79) significantly raises rate limits (e.g., 50+ calls/min) and adds a lot more historical data and international market coverage. Enterprise ($1,999/mo) is for organizations needing very high limits and all data. The credit system is a bit complex to grasp at first, but effectively the more you pay, the more data and speed you get.
Cons:
Free plan limitations: The Basic plan is fine for testing, but serious usage will bump into its limits (both in call volume and data depth). Also, some endpoints require higher plans, and real-time WebSocket access is mostly for paid users. In short, Basic is more of a trial.
Credit-based pricing confusion: As noted, the concept of “API credits” and each endpoint having a weight can be confusing. For instance, an API call that fetches 100 data points might consume more credits than one that fetches 1 data point. New users may find it hard to estimate how many credits they need, compared to providers with simple call counts.
Fewer specialty datasets: Twelve Data covers the essentials well, but it doesn’t have things like in-depth fundamentals or alternative data. Its focus is on price data and basic indicators. Large-scale applications needing extensive financial statement data or niche data (like options, sentiment) would need an additional source.
Use case: Build a lightweight crypto price dashboard that updates every 5 minutes. Using Twelve Data’s API, you could fetch the latest price for a set of cryptocurrencies (e.g., BTC, ETH) at a 5-min interval and display them in a Streamlit or Dash app. Twelve Data’s ease of integration means you could have this running quickly, and if you use their built-in visualization components, you might not need to code the charting yourself.
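A minimal polling version of that dashboard, assuming Twelve Data's documented /price endpoint, could look like the sketch below; in a Streamlit or Dash app you would swap the print statement for a chart update.

```python
# A minimal polling sketch against Twelve Data's REST /price endpoint for two crypto
# pairs, refreshed every 5 minutes. Endpoint and parameter names should be double-checked
# against the current Twelve Data docs for your plan.
import os
import time

import requests

API_KEY = os.environ["TWELVEDATA_API_KEY"]
SYMBOLS = ["BTC/USD", "ETH/USD"]

def latest_price(symbol: str) -> float:
    resp = requests.get(
        "https://api.twelvedata.com/price",
        params={"symbol": symbol, "apikey": API_KEY},
        timeout=15,
    )
    resp.raise_for_status()
    return float(resp.json()["price"])

while True:
    for symbol in SYMBOLS:
        print(symbol, latest_price(symbol))  # replace with a dashboard update in practice
    time.sleep(300)  # refresh every 5 minutes
```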
Quick Selection Guide by User Profile:
If you’re an investor/analyst needing both fundamentals and price history: EODHD or FMP are excellent due to their rich fundamental datasets and broad market coverage.
If you’re a trader focused on real-time data and execution: Polygon.io (for raw real-time feeds) or Alpaca (for trading with built-in data) are tailored to your needs. Polygon for pure data speed; Alpaca if you also want to place trades via API.
If you’re a developer or student learning the ropes: Alpha Vantage or Yahoo Finance via yfinance are very beginner-friendly. They have free access, simple endpoints, and plenty of examples to get you started in Python or JavaScript.
If you need global market coverage in one service: EODHD, Finnhub, or FMP will give you international stocks, forex, crypto, and more under a single API — useful for broad applications or multi-asset platforms.
If you prefer no-code or Excel integration: EODHD, FMP, and Twelve Data offer Excel/Google Sheets add-ons and straightforward no-code solutions, so you can fetch market data into spreadsheets or BI tools without programming.
Bonus: Financial Modeling Prep (FMP)
Best for: Advanced fundamental analysis and automated financial statement retrieval Website: financialmodelingprep.com
Key features:
Extensive financial statements coverage: FMP provides APIs for detailed financial statements (balance sheets, income statements, cash flows) for many public companies, including quarterly and annual data. They also offer calculated financial ratios and metrics, making it a favorite for equity analysts.
Real-time and historical stock prices: You can get real-time quotes as well as historical daily and intraday price data for stocks. FMP covers stocks worldwide, plus ETFs, mutual funds, and cryptocurrencies.
Specialty endpoints: There are unique APIs for things like DCF (Discounted Cash Flow) valuation, historical dividend and stock split data, insider trading information, and even ESG scores. This breadth is great for those building sophisticated models.
News and alternative data: FMP includes a financial news feed, earnings calendar, and economic indicators. While not as deep on news sentiment as Finnhub, it’s a well-rounded data source for market context.
Highlights: FMP has gained a lot of traction as a developer-friendly alternative to more expensive data platforms. Its documentation is clear, with examples in multiple languages. One big plus is the Excel/Google Sheets integration — even non-coders can use FMP by installing their Google Sheets add-on and pulling data directly into a spreadsheet. The combination of fundamentals + market data in one API, along with affordable pricing, makes FMP very appealing for startups and students. In my personal experience, FMP’s fundamental data depth is excellent for building valuation models or screening stocks based on financial criteria.
Pricing:
Free tier: FMP offers a free plan with a limited number of daily requests (e.g., 250 per day). The free tier gives access to basic endpoints — you can get some real-time quotes, key financial metrics, and historical data for a few symbols to test it out.
Pro plans: Paid plans start at around $19.99/month, which is quite affordable. These plans increase the daily request limit substantially (into the thousands per day) and unlock more endpoints. Higher tiers (on the order of $50-$100/month) offer even larger call volumes and priority support. For most individual developers or small businesses, FMP’s paid plans provide a lot of data bang for the buck. Enterprise plans are also available if needed, but many will find the mid-tier plans sufficient.
Cons:
Free plan restrictions: The free plan is mainly for trial or very light use — serious users will quickly find it inadequate (in terms of both request limits and available data). If you have an app in production, you’ll almost certainly need a paid plan, though fortunately the entry cost is low.
Data normalization quirks: Because FMP aggregates data from various sources, you might notice slight inconsistencies or formatting differences across certain endpoints. For example, some lesser-used financial metrics might have different naming conventions or units. These are minor issues and FMP continually improves them, but it’s something to be aware of if you encounter an odd-looking field.
Not focused on real-time streaming: FMP provides real-time quotes on paid plans, but it’s not a streaming service. If you need tick-by-tick streaming or ultra-low-latency data, a specialized API like Polygon or a broker feed would be necessary. FMP is more geared towards snapshots of data (which is fine for most analysis and moderate-frequency querying).
Why we include FMP: Lately, many developers (myself included) have been testing FMP for projects because of its rich fundamental dataset and solid documentation. It’s a strong alternative if you want advanced company metrics or need to automate financial statement analysis directly into your Python scripts or dashboards. For example, you could pull 10 years of financials for dozens of companies in seconds via FMP — something that’s invaluable for quantitative investing or academic research. FMP combines flexibility, affordability, and depth of data that few APIs offer in one package.
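As an illustration of that kind of bulk pull, here is a short sketch against FMP's REST API. The route and JSON field names follow FMP's published v3 documentation as I understand it, but confirm them (and your plan's request limits) before building on this.

```python
# A short sketch of pulling up to ten years of annual income statements for several
# tickers from FMP's REST API and comparing a single metric. Route and field names
# (e.g., "revenue", "netIncome") are assumed from FMP's v3 docs; verify before use.
import os

import pandas as pd
import requests

API_KEY = os.environ["FMP_API_KEY"]
BASE = "https://financialmodelingprep.com/api/v3"

def income_statements(ticker: str, years: int = 10) -> pd.DataFrame:
    url = f"{BASE}/income-statement/{ticker}"
    resp = requests.get(url, params={"limit": years, "apikey": API_KEY}, timeout=30)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())

# Quick cross-company comparison: latest-year net margin for each ticker.
for ticker in ["AAPL", "MSFT", "GOOGL"]:
    df = income_statements(ticker)
    latest = df.iloc[0]  # FMP typically returns the most recent period first
    print(ticker, "net margin:", round(latest["netIncome"] / latest["revenue"], 3))
```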
Frequently Asked Questions (FAQs)
❓ What’s the most complete API that combines fundamentals, historical prices, and news? ✅ If you need everything in one service, EODHD, FMP, and Alpha Vantage stand out. They each offer a balance of broad market coverage, reliable data, and depth. EODHD and FMP in particular have extensive fundamental and historical datasets (with news feeds) alongside real-time data, making them all-in-one solutions.
❓ Is there a free API with real-time stock data? ✅ Polygon.io provides limited real-time access on their free plan — you can get real-time quotes for U.S. stocks (with some delays or limits). Additionally, Finnhub’s free tier offers real-time data for U.S. markets (60 calls/min) which is quite generous. If you’re open to paid plans, FMP offers real-time quotes in its affordable paid tiers as well. And for an unofficial free route, Yahoo Finance data via the yfinance library can give near-real-time quotes (with no API key needed), though it’s not guaranteed or supported.
❓ I’m new to programming and want to learn using stock data. Which API is best? ✅ Alpha Vantage or Yahoo Finance (yfinance) are excellent for beginners. Alpha Vantage’s free tier and straightforward endpoints (plus a ton of community examples) make it easy to get started. The yfinance Python library lets you pull data from Yahoo Finance without dealing with complex API details – perfect for quick prototypes or learning pandas data analysis. Both integrate seamlessly with Python for learning purposes.
❓ Which API has the best global market coverage? ✅ EODHD, Finnhub, and FMP are known for their international coverage. EODHD covers dozens of exchanges worldwide (US, Europe, Asia, etc.) for both stock prices and fundamentals. Finnhub includes international stock data and forex/crypto. FMP also has a global equity coverage and even macro data for various countries. If you need data beyond just U.S. markets, these providers will serve you well.
❓ Can I use these APIs in Excel or Google Sheets without coding? ✅ Yes, several of them offer no-code solutions. EODHD, FMP, and Twelve Data all provide add-ins or integrations for Excel/Sheets. For example, EODHD and FMP have official Google Sheets functions after you install their add-on, letting you fetch stock prices or financial metrics into a spreadsheet cell. Twelve Data has an Excel plugin as well. This is ideal for analysts who prefer working in spreadsheets but still want live data updates.
Final Thoughts and Action Plan
You don’t need to be a big firm to access professional-grade financial data. Today’s landscape of financial APIs makes it possible for anyone — from a solo developer to a small startup — to get quality real-time stock data and more.
Follow these steps to get started:
Choose the API that best fits your profile and project needs. (Review the comparisons above to decide which one aligns with your requirements and budget.)
Sign up and get your free API key. Every platform listed offers a free tier or trial — take advantage of that to test the waters.
Connect the data to your tool of choice: whether it’s a Python script, an Excel sheet, or a custom dashboard, use the API documentation and examples to integrate live data into your workflow. Start with small experiments — e.g., pull one stock’s data and plot it.
By iterating on those steps, you’ll quickly gain familiarity with these APIs and unlock new possibilities, from automated trading bots to insightful financial dashboards.
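If you want a concrete version of that first small experiment, here is a tiny sketch using the free yfinance route mentioned earlier (requires pip install yfinance matplotlib):

```python
# A tiny first experiment: pull one stock's daily history from Yahoo Finance via the
# yfinance library and plot the closing price.
import matplotlib.pyplot as plt
import yfinance as yf

data = yf.Ticker("AAPL").history(period="1y")  # one year of daily OHLCV data
data["Close"].plot(title="AAPL closing price, last 12 months")
plt.tight_layout()
plt.show()
```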
Looking for a single API that does it all (fundamentals, historical prices, and news)? My recommendation is EODHD for its all-around strength in data coverage and value. It’s a one-stop shop for investors and developers alike.
Pro tip: You can try EODHD with a 10% discount using the link above to kickstart your project with some savings. Happy data hunting, and may your analyses be ever insightful!
Sources: The information above is gathered from official documentation and user reviews of each platform, including their pricing pages and features as of 2025. For example, Alpha Vantage’s free call limits, Intrinio’s pricing tiers, and Twelve Data’s rate limits are based on published data. Always double-check the latest details on each provider’s website, as features and pricing can evolve over time.
When Congress authorized over $5 trillion in pandemic-era relief programs and directed agencies to prioritize speed above all else, fraudsters cashed in with bogus claims.
But data from these pandemic-era relief programs is now being used to train artificial intelligence-powered tools meant to detect fraud before payments go out.
The Pandemic Response Accountability Committee has developed an AI-enabled “fraud prevention engine,” trained on over 5 million applications for pandemic-era relief programs, that can review 20,000 applications for federal funds per second, and can flag anomalies in the data before payment.
The PRAC’s executive director, Ken Dieffenbach, told members of the House Oversight and Government Reform Committee on Tuesday that, had the fraud prevention engine been available at the onset of the pandemic, it would have flagged “at least tens of billions of dollars” in fraudulent claims.
Dieffenbach said that the PRAC’s data analytics capabilities can serve as an “early warning system” when organized, transnational criminals target federal benefits programs. He said the PRAC is working with agency inspectors general on ways to prevent fraud in programs funded by the One Big Beautiful Bill Act, as well as track fraudsters targeting multiple agencies.
“Fraudsters rarely target just one government program. They exploit vulnerabilities wherever they exist,” Dieffenbach said.
The PRAC’s analytics systems have recovered over $500 million in taxpayer funds. Created at the onset of the COVID-19 pandemic, the PRAC oversaw over $5 trillion in relief spending. It was scheduled to disband last year, but the One Big Beautiful Bill Act reauthorized the PRAC through 2034.
Government Operations Subcommittee Chairman Pete Sessions (R-Texas) said the PRAC has developed data analytics capabilities that can comb through billions of records, and that these tools need a “permanent” home once the PRAC disbands.
“A permanent solution that maintains the analytic capacities and capabilities that have been built over the past six years is necessary and needed. Its database is billions of records deep, and it has begun to pay for itself,” Sessions said.
In one pandemic fraud case, the PRAC identified a scheme where 100 applicants filed 450 applications across 24 states, and obtained $2.6 million in pandemic loans. Dieffenbach said there are tens of thousands of cases like it.
“This is but one example where the proactive use of data and technology could have prevented or aided in the early detection of a scheme, mitigated the need for a resource-intensive investigation and prosecution, and helped ensure taxpayer dollars went to the intended recipients and not the fraudsters,” Dieffenbach said.
Sterling Thomas, GAO’s chief scientist, said AI tools are showing promise in flagging fraud, but he warned that “rapid deployment without thoughtful design has already led to unintended outcomes.”
“In data science, we often say garbage in, garbage out. Nowhere is that more true than with AI and machine learning. If we start trying to identify fraud and improper payments with flawed data, we’re going to get poor results,” Thomas said.
The Treasury Department often serves as the last line of defense against fraud, but it is giving agencies access to more of its data to flag potential fraud before issuing payments.
Under a March executive order, President Donald Trump directed the Treasury Department to share its own fraud prevention database, Do Not Pay, with other agencies to the “greatest extent permitted by law.”
Renata Miskell, the deputy assistant secretary for accounting policy and financial transparency at the Treasury Department’s Bureau of the Fiscal Service, told lawmakers that only 4% of federal programs could access all of Do Not Pay’s data in fiscal 2014. But by the end of this fiscal year, she said all federal programs are on track to fully utilize Do Not Pay.
“We want every program — and there’s thousands of federal programs — to use Do Not Pay before making award and eligibility determinations,” Miskell said.
To make Do Not Pay a more effective tool against fraud, Miskell said Treasury is looking for the ability to “ping” other authoritative federal databases, such as the taxpayer identification numbers (TINs) issued by the IRS or Social Security numbers, before issuing a payment. Without those datasets, she said, Treasury is following a “trust but verify” approach to payments, doing some basic checks before federal funds go out.
“These data sources would dramatically improve eligibility determination and fraud prevention,” Miskell said.