
Agents-as-a-service are poised to rewire the software industry and corporate structures

5 December 2025 at 05:00

This was the year of AI agents. Chatbots that simply answered questions are evolving into autonomous agents that can carry out tasks on a user’s behalf, and enterprises continue to invest in agentic platforms as that transformation unfolds. Software vendors are investing just as fast.

According to a National Research Group survey of more than 3,000 senior leaders, more than half of executives say their organization is already using AI agents. Of the companies that spend at least half their AI budget on AI agents, 88% say they’re already seeing ROI on at least one use case, with the top areas being customer service and experience, marketing, cybersecurity, and software development.

On the software provider side, Gartner predicts 40% of enterprise software applications in 2026 will include agentic AI, up from less than 5% today. And agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion, up from 2% in 2025. In fact, business users might not have to interact directly with the business applications at all since AI agent ecosystems will carry out user instructions across multiple applications and business functions. At that point, a third of user experiences will shift from native applications to agentic front ends, Gartner predicts.

It’s already starting. Most enterprise applications will have embedded assistants, a precursor to agentic AI, by the end of this year, adds Gartner.

IDC has similar predictions. By 2028, 45% of IT product and service interactions will use agents as the primary interface, the firm says. That’ll change not just how companies work, but how CIOs work as well.

Agents as employees

At financial services provider OneDigital, chief product officer Vinay Gidwaney is already working with AI agents, almost as if they were people.

“We decided to call them AI coworkers, and we set up an AI staffing team co-owned between my technology team and our chief people officer and her HR team,” he says. “That team is responsible for hiring AI coworkers and bringing them into the organization.” You heard that right: “hiring.”

The first step is to sit down with the business leader and write a job description, which is fed to the AI agent; the new agent then starts out as an intern.

“We have a lot of interns we’re testing at the company,” says Gidwaney. “If they pass, they get promoted to apprentices and we give them our best practices, guardrails, a personality, and human supervisors responsible for training them, auditing what they do, and writing improvement plans.”

The next promotion is to full-time coworker, at which point the agent becomes available to anyone at the company.

“Anyone at our company can go on the corporate intranet, read the skill sets, and get ice breakers if they don’t know how to start,” he says. “You can pick a coworker off the shelf and start chatting with them.”

For example, there’s Ben, a benefits expert who’s trained on everything having to do with employee benefits.

“We have our employee benefits consultants sitting with clients every day,” Gidwaney says. “Ben will take all the information and help the consultants strategize how to lower costs, and how to negotiate with carriers. He’s the consultants’ thought partner.”

There are similar AI coworkers working on retirement planning, and on property and casualty as well. These were built in-house because they’re core to the company’s business. But there are also external AI agents who can provide additional functionality in specialized yet less core areas, like legal or marketing content creation. In software development, OneDigital uses third-party AI agents as coding assistants.

When choosing whether to sign up for these agents, Gidwaney says he doesn’t think of it the way he thinks about licensing software, but more like hiring a human consultant or contractor. For example, will the agent be a good cultural fit?

In some cases, the risk is greater than with human hires: a bad human hire who turns out to be toxic will only interact with a small number of other employees, but an AI agent might interact with thousands of them.

“You have to apply the same level of scrutiny as how you hire real humans,” he says.

A vendor who looks like a technology company might also, in effect, be a staffing firm. “They look and feel like humans, and you have to treat them like that,” he adds.

Another way that AI agents are similar to human consultants is when they leave the company, they take their expertise with them, including what they gained along the way. Data can be downloaded, Gidwaney says, but not necessarily the fine-tuning or other improvements the agent received. Realistically, there might not be any practical way to extract that from a third-party agent, and that could lead to AI vendor lock-in.

Edward Tull, VP of technology and operations at JBGoodwin Realtors, says he, too, sees AI agents as something akin to people. “I see it more as a teammate,” he says. “As we implement more across departments, I can see these teammates talking to each other. It becomes almost like a person.”

Today, JBGoodwin uses two main platforms for its AI agents: Zapier, which lets the company build its own, and HubSpot, which offers its own pre-built agents as a service. “There are lead enrichment agents and workflow agents,” says Tull.

And the company is open to using more. “In accounting, if someone builds an agent to work with this particular type of accounting software, we might hire that agent,” he says. “Or a marketing coordinator that we could hire that’s built and ready to go and connected to systems we already use.”

With agents, his job is becoming less about technology and more about management, he adds. “It’s less day-to-day building and more governance, and trying to position the company to be competitive in the world of AI,” he says.

He’s not the only one thinking of AI agents as more akin to human workers than to software.

“With agents, because the technology is evolving so fast, it’s almost like you’re hiring employees,” says Sheldon Monteiro, chief product officer at Publicis Sapient. “You have to determine whom to hire, how to train them, make sure all the business units are getting value out of them, and figure out when to fire them. It’s a continuous process, and this is very different from the past, where I make a commitment to a platform and stick with it because the solution works for the business.”

This changes how the technology solutions are managed, he adds. What companies will need now is a CHRO, but for agentic employees.

Managing outcomes, not people

Vituity is one of the largest privately held national medical groups, with 600 hospitals, 13,800 employees, and nearly 14 million patients. The company is building its own AI agents but also uses off-the-shelf ones as AaaS. And AI agents aren’t people, says CIO Amith Nair. “The agent has no feelings,” he says. “AGI isn’t here yet.”

Instead, it all comes down to outcomes, he says. “If you define an outcome for a task, that’s the outcome you’re holding that agent to.” And that part isn’t different from holding employees accountable to an outcome. “But you don’t need to manage the agent,” he adds. “They’re not people.”

Instead, agents are orchestrated, and you can plug and play them. “It needs to understand our business model and our business context, so you ground the agent to get the job done,” he says.
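
Vituity’s internal setup isn’t public, but as a minimal sketch of what “grounding” an off-the-shelf agent in business context can look like, here is a hypothetical example using the OpenAI Python SDK; the model name, prompt text, and task are placeholders, not Vituity’s actual configuration.

# A minimal, hypothetical sketch of grounding an agent in business context.
# The system prompt carries the business model and constraints; everything
# here is illustrative, not Vituity's setup.
from openai import OpenAI

client = OpenAI()

BUSINESS_CONTEXT = """You assist a physician-led medical group.
- Revenue comes from hospital staffing contracts, not direct patient billing.
- Never include patient-identifying information; escalate anything patient-specific."""

def grounded_agent(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": BUSINESS_CONTEXT},  # the grounding
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(grounded_agent("Summarize the risks in our staffing contract renewals."))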

For mission-critical functions, especially ones related to sensitive healthcare data, Vituity is building its own agents inside a HIPAA-certified LLM environment using the Workato agent development platform and the Microsoft agentic platform.

For other functions, especially ones having to do with public data, Vituity uses off-the-shelf agents, such as ones from Salesforce and Snowflake. The company is also using Claude with GitHub Copilot for coding. Nair can already see that agentic systems will change the way enterprise software works.

“Most of the enterprise applications should get up to speed with MCP, the integration layer for standardization,” he says. “If they don’t get to it, it’s going to become a challenge for them to keep selling their product.”

A company needs to be able to access its own data via an MCP connector, he says. “AI needs data, and if they don’t give you an MCP, you just start moving it all to a data warehouse,” he adds.
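
As a rough illustration of the kind of connector Nair describes, here is a minimal sketch of exposing one internal metric over MCP using the protocol’s official Python SDK; the server name, tool, and figures are hypothetical, and a real connector would query live systems.

# A minimal sketch of an MCP server exposing enterprise data to agents.
# Assumes the official MCP Python SDK; the tool and figures are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-data")

@mcp.tool()
def get_patient_volume(region: str, month: str) -> dict:
    """Return monthly patient volume for a region (hypothetical metric)."""
    # In practice this would query the company's warehouse or SaaS APIs.
    return {"region": region, "month": month, "patients": 118_000}

if __name__ == "__main__":
    mcp.run()  # serve the tool to any MCP-capable agent over stdio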

Sharp learning curve

In addition to providing a way to store and organize your data, enterprise software vendors also offer logic and functionality, and AI will soon be able to handle that as well.

“All you need is a good workflow engine where you can develop new business processes on the fly, so it can orchestrate with other agents,” Nair says. “I don’t think we’re too far away, but we’re not there yet. Until then, SaaS vendors are still relevant. The question is, can they charge that much money anymore?”

The costs of SaaS will eventually have to come down to the cost of inference, storage, and other infrastructure; vendors can’t survive charging what they charge now, he says. So SaaS vendors are building agents to augment or replace their current interfaces. But that approach has its limits: instead of using Salesforce’s agent, for example, a company can use its own agents to interact with the Salesforce environment.

“It’s already happening,” Nair adds. “My SOC agent is pulling in all the log files from Salesforce. They’re not providing me anything other than the security layer they need to protect the data that exists there.”

AI agents are set to change the dynamic between enterprises and software vendors in other ways, too. One major difference between software and agents is that software is well-defined, operates in a particular way, and changes slowly, says Jinsook Han, chief of strategy, corporate development, and global agentic AI at Genpact.

“But we expect when the agent comes in, it’s going to get smarter every day,” she says. “The world will change dramatically because agents are continuously changing. And the expectations from the enterprises are also being reshaped.”

Another difference is that agents can more easily work with data and systems where they are. Take, for example, a sales agent meeting with customers, says Anand Rao, AI professor at Carnegie Mellon University. Each salesperson has a calendar where all their meetings are scheduled, and they have emails, messages, and meeting recordings. An agent can simply access those emails when needed.

“Why put them all into Salesforce?” Rao asks. “If the idea is to do and monitor the sale, it doesn’t have to go into Salesforce, and the agents can go grab it.”

When Rao was a consultant, he’d log a client conversation into Salesforce with a note saying, for instance, that the client needed a white paper from the partner in charge of quantum.

An agent taking notes during the meeting can immediately identify the action items and follow up to get the white paper.

“Right now we’re blindly automating the existing workflow,” Rao says. “But why do we need to do that? There’ll be a fundamental shift of how we see value chains and systems. We’ll get rid of all the intermediate steps. That’s the biggest worry for the SAPs, Salesforces, and Workdays of the world.”

Another aspect of the agentic economy is instead of a human employee talking to a vendor’s AI agent, a company agent can handle the conversation on the employee’s behalf. And if a company wants to switch vendors, the experience will be seamless for employees, since they never had to deal directly with the vendor anyway.

“I think that’s something that’ll happen,” says Ricardo Baeza-Yates, co-chair of the US technology policy committee at the Association for Computing Machinery. “And it makes the market more competitive, and makes integrating things much easier.”

In the short term, however, it might make more sense for companies to use the vendors’ agents instead of creating their own.

“I recommend people don’t overbuild because everything is moving,” says Bret Greenstein, CAIO at West Monroe Partners, a management consulting firm. “If you build a highly complicated system, you’re going to be building yourself some tech debt. If an agent exists in your application and it’s localized to the data in that application, use it.”

But over time, an agent that’s independent of the application can be more effective, he says, and there’s a lot of lock-in that goes into applications. “It’s going to be easier every day to build the agent you want without having to buy a giant license,” he says. “The effort to get effective agents is dropping rapidly, and the justification for getting expensive agents from your enterprise software vendors is getting less.”

The future of software

According to IDC, pure seat-based pricing will be obsolete by 2028, forcing 70% of vendors to figure out new business models.

With technology evolving as quickly as it is, JBGoodwin Realtors has already started to change its approach to buying tech, says Tull. It used to prefer long-term contracts, for example, but that’s no longer the case. “You save more if you go longer, but I’ll ask for an option to re-sign with a cap,” he says.

That doesn’t mean SaaS will die overnight. Companies have made significant investments in their current technology infrastructure, says Patrycja Sobera, SVP of digital workplace solutions at Unisys.

“They’re not scrapping their strategies around cloud and SaaS,” she says. “They’re not saying, ‘Let’s abandon this and go straight to agentic.’ I’m not seeing that at all.”

Ultimately, people are slow to change, and institutions are even slower. Many organizations are still running legacy systems. The FAA, for example, has just come out with a bold plan to update its systems by getting rid of floppy disks and upgrading from Windows 95, and it expects this to take four years.

But the center of gravity will move toward agents and, as it does, so will funding, innovation, green-field deployments, and the economics of the software industry.

“There are so many organizations and leaders who need to cross the chasm,” says Sobera. “You’re going to have organizations at different levels of maturity, and some will be stuck in a SaaS mentality but feeling more in control, while some of our progressive clients will embrace the move. We’re also seeing those clients outperform their peers in revenue, innovation, and satisfaction.”


US federal software reform bill aims to strengthen software management controls

4 December 2025 at 11:57

Software management struggles that have pained enterprises for decades cause the same anguish to government agencies, and a bill making its way through the US House of Representatives to strengthen controls around government software management holds lessons for enterprises too.

The Strengthening Agency Management and Oversight of Software Assets (SAMOSA) bill, H.R. 5457, received unanimous approval from a key US House of Representatives committee, the Committee on Oversight and Government Reform, on Tuesday.

SAMOSA is mostly focused on trying to fix “software asset management deficiencies” as well as requiring more “automation of software license management processes and incorporation of discovery tools,” issues that enterprises also have to deal with.

In addition, it requires anyone involved in software acquisition and development to be trained in the agency’s policies and, more usefully, in negotiation of contract terms, especially those that put restrictions on software deployment and use.

This training could also be quite useful for enterprise IT operations. It would teach “negotiating options” and specifically the “differences between acquiring commercial software products and services and acquiring or building custom software and determining the costs of different types of licenses and options for adjusting licenses to meet increasing or decreasing demand.”

The mandated training would also include tactics for measuring “actual software usage via analytics that can identify inefficiencies to assist in rationalizing software spending” along with methods to “support interoperable capabilities between software.”
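
The bill doesn’t prescribe tooling, but as a toy sketch of the usage analytics such training would cover, here is a hypothetical comparison of purchased seats against observed logins to flag shelfware; the CSV files, column names, and the assumption of a monthly per-seat cost are all invented for illustration.

# A toy license-utilization report: purchased seats vs. distinct active users.
# File layouts and columns are hypothetical.
import pandas as pd

licenses = pd.read_csv("licenses.csv")        # columns: app, seats_purchased, monthly_cost_per_seat
activity = pd.read_csv("sso_logins_90d.csv")  # columns: app, user_id

active = activity.groupby("app")["user_id"].nunique().rename("active_users")
report = licenses.set_index("app").join(active).fillna({"active_users": 0})
report["utilization"] = report["active_users"] / report["seats_purchased"]
report["annual_waste"] = ((report["seats_purchased"] - report["active_users"])
                          * report["monthly_cost_per_seat"] * 12)

# Surface the biggest rationalization targets first.
print(report.sort_values("annual_waste", ascending=False).head(10))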

Outlawing shadow IT

The bill also attempts to rein in shadow IT by “restricting the ability of a bureau, program, component, or operational entity within the agency to acquire, use, develop, or otherwise leverage any software entitlement without the approval of the Chief Information Officer of the agency.” But there are no details about how such a rule would be enforced.

It would require agencies “to provide an estimate of the costs to move toward more enterprise, open-source, or other licenses that do not restrict the use of software by the agency, and the projected cost savings, efficiency measures, and improvements to agency performance throughout the total software lifecycle.” But the hiccup is that benefits will only materialize if technology vendors change their ways, especially in terms of transparency.

However, analysts and consultants are skeptical that such changes are likely to happen.

CIOs could be punished

Yvette Schmitter, a former PricewaterhouseCoopers principal who is now CEO of IT consulting firm Fusion Collective, was especially pessimistic about what would happen if enterprises tried to follow the bill’s rules.

“If the bill were to become law, it would set enterprise CIOs up for failure,” she said. “The bill doubles down on the permission theater model, requiring CIO approval for every software acquisition while providing zero framework for the thousands of generative AI tools employees are already using without permission.”

She noted that although the bill mandates comprehensive assessments of “software paid for, in use, or deployed,” it neglects critical facets of today’s AI software landscape. “It never defines how you assess an AI agent that writes its own code, a foundation model trained on proprietary data, or an API that charges per token instead of per seat,” she said. “Instead of oversight, the bill would unlock chaos, potentially creating a compliance framework where CIOs could be punished for buying too many seats for a software tool, but face zero accountability for safely, properly, and ethically deploying AI systems.”

Schmitter added: “The bill is currently written for the 2015 IT landscape and assumes that our current AI systems come with instruction manuals and compliance frameworks, which they obviously do not.”

She also pointed out that the government seems to be working at cross-purposes. “The H.R. 5457 bill is absurd,” she said. “Congress is essentially mandating 18-month software license inventories while the White House is simultaneously launching the Genesis Mission executive order for AI that will spin up foundation models across federal agencies in the next nine months. Both of these moves are treating software as a cost center and AI as a strategic weapon, without recognizing that AI systems are software.”

Scott Bickley, advisory fellow at Info-Tech Research Group, was also unimpressed with the bill. “It is a sad, sad day when the US Federal government requires a literal Act of Congress to mandate the Software Asset Management (SAM) behaviors that should be in place across every agency already,” Bickley said. “One can go review the [Office of Inspector General] reports for various government agencies, and it is clear to see that the bureaucracy has stifled all attempts, assuming there were attempts, at reining in the beast of software sprawl that exists today.”

Right goal, but toothless

Bickley said that the US government is in dire need of better software management, but that this bill, even if it was eventually signed into law, would be unlikely to deliver any meaningful reforms. 

“This also presumes the federal government actually negotiates good deals for its software. It unequivocally does not. Never has there been a larger customer that gets worse pricing and commercial terms than the [US] federal government,” Bickley said. “At best, in the short term, this bill will further enrich consultants, as the people running IT for these agencies do not have the expertise, tooling, or knowledge of software/subscription licensing and IP to make headway on their own.”

On the bright side, Bickley said the goal of the bill is the right one, but the fact that the legislation didn’t deliver or even call for more funding makes it toothless. “The bill is noble in its intent. But the fact that it requires a host of mandatory reporting, [Government Accountability Office] oversight, and actions related to inventory and overall [software bill of materials] rationalization with no new budget authorization is a pipe dream at best,” he said. 

Sanchit Vir Gogia, the chief analyst at Greyhound Research, was more optimistic, saying that the bill would change the law in a way that should have happened long ago.

“[It] corrects a long-standing oversight in federal technology management. Agencies are currently spending close to $33 billion every year on software. Yet most lack a basic understanding of what software they own, what is being used, or where overlap exists. This confusion has been confirmed by the Government Accountability Office, which reported that nine of the largest agencies cannot identify their most-used or highest-cost software,” Gogia said. “Audit reports from NASA and the Environmental Protection Agency found millions of dollars wasted on licenses that were never activated or tracked. This legislation is designed to stop such inefficiencies by requiring agencies to catalogue their software, review all contracts, and build plans to eliminate unused or duplicate tools.”

Lacks operational realism

Gogia also argued that “the added pressure of transparency may also lead software providers to rethink their pricing and make it easier for agencies to adjust contracts in response to actual usage.” If that happens, it would likely trickle into greater transparency for enterprise IT operations.

Zahra Timsah, co-founder and CEO of i-GENTIC AI, applauded the intent of the bill while raising logistical concerns about whether much would change even if it ultimately became law.

“The language finally forces agencies to quantify waste and technical fragmentation instead of talking about it in generalities. The section restricting bureaus from buying software without CIO approval is also a smart, direct hit on shadow IT. What’s missing is operational realism,” Timsah said. “The bill gives agencies a huge mandate with no funding, no capacity planning, and no clear methodology. You can’t ask for full-stack interoperability scoring and lifecycle TCO analysis without giving CIOs the tools or budget to produce it. My concern is that agencies default to oversized consulting reports that check the box without actually changing anything.”

Timsah said that the bill “is going to be very difficult to implement and to measure. How do you measure it is being followed?” She added that agencies will parrot the bill’s wording and then try to hire people to manage the process. “It’s just going to be for optics’ sake.”

AWS offers new service to make AI models better at work

3 December 2025 at 09:30

Enterprises are no longer asking whether they should adopt AI; rather, they want to know why the AI they have already deployed still can’t reason as their business requires it to.

Those AI systems are often missing an enterprise’s specific business context, because they are trained on generic, public data, and it’s expensive and time-consuming to fine-tune or retrain them on proprietary data, if that’s even possible.

Microsoft’s approach, unveiled at Ignite last month, is to wrap AI applications and agents with business context and semantic intelligence in its Fabric IQ and Work IQ offerings.

AWS is taking a different route, inviting enterprises to build their business context directly into the models that will run their applications and agents, as its CEO Matt Garman explained in his opening keynote at the company’s re:Invent show this week.

Third-party models don’t have access to proprietary data, he said, and building models with that data from scratch is impractical, while adding it to an existing model through retrieval augmented generation (RAG), vector search, or fine-tuning has limitations.

But, he asked, “What if you could integrate your data at the right time during the training of a frontier model and then create a proprietary model that was just for you?”

AWS’s answer to that is Nova Forge, a new service that enterprises can use to customize a foundation large language model (LLM) to their business context by blending their proprietary business data with AWS-curated training data. That way, the model can internalize their business logic rather than having to reference it externally again and again at inference time.

Analysts agreed with Garman’s assessment of the limitations in existing methods that Nova Forge aims to circumvent.

“Prompt engineering, RAG, and even standard supervised fine-tuning are powerful, but they sit on top of a fully trained model and are inherently constrained. Enterprises come up against context windows, latency, orchestration complexity. It’s a lot of work, and prone to error, to continuously ‘bolt on’ domain expertise,” said Stephanie Walter, practice leader of AI stack at HyperFRAME Research.

In contrast, said ISG’s executive director of software research, David Menninger, Nova Forge’s approach can simplify things: “If the LLM can be modified to incorporate the relevant information, it makes the inference process much easier to manage and maintain.”

Who owns what

HFS Research associate practice leader Akshat Tyagi broke down the two companies’ strategies: “Microsoft wants to own the AI experience. AWS wants to own the AI factory. Microsoft is packaging intelligence inside its ecosystem. AWS is handing you the tools to create your own intelligence and run it privately,” he said.

While Microsoft’s IQ message essentially argues that enterprises don’t need sprawling frontier models and can work with compact, business-aware models that stay securely within their tenant and boost productivity, AWS is effectively asking enterprises not to settle for tweaking an existing model but to use its tools to create a near-frontier-grade model tailored to their business, Tyagi said.

The subtext is clear, he said: AWS knows it’s unlikely to dominate the assistant or productivity layer, so it’s doubling down on its core strengths of deep infrastructure, while Microsoft is playing the opposite game.

Nova Forge is a clear infrastructure play, Walter said. “It gives AWS a way to drive Trainium, Bedrock, and SageMaker as a unified frontier-model platform while offering enterprises a less expensive path than bespoke AI labs.”

The approach AWS is taking with Nova Forge will curry favor with enterprises working on use cases that require precision and nuance, including drug discovery, healthcare, industrial control, highly regulated financial workflows, and enterprise-wide code assistants, she said.

Custom LLM training costs

In his keynote, Garman said that Nova Forge eliminates the prohibitive cost, time, and engineering drag of designing and training an LLM from scratch — the same barrier that has stopped most enterprises, and even rivals such as Microsoft, from attempting to provide a solution at this layer.

It does so by offering a pre-trained model and various training checkpoints or snapshots of the model to jumpstart the custom model building activity instead of having to pre-train it from scratch or retrain it for context again and again, which AWS argues is a billion-dollar affair.

By choosing whether they want to start from a checkpoint in early pre-training, mid-training, or post-training, said Robert Kramer, principal analyst at Moor Insights & Strategy, “enterprises choose how deeply they want their domain to shape the model.”
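
Nova Forge’s API isn’t public beyond the keynote description, so as a rough sketch of the underlying idea of resuming from a mid-training checkpoint while blending in proprietary data, here is a generic continued-pre-training example using Hugging Face Transformers; the checkpoint name and data files are placeholders, and AWS’s actual service runs on its own stack.

# Illustrative continued pre-training from a checkpoint, not Nova Forge's API.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "my-org/base-model-midtrain"  # hypothetical mid-training snapshot
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Blend curated public text with proprietary business documents.
public = load_dataset("text", data_files="curated_public.txt")["train"]
private = load_dataset("text", data_files="proprietary_docs.txt")["train"]
mix = concatenate_datasets([public, private]).shuffle(seed=0)

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=1024)

mix = mix.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-model",
                           per_device_train_batch_size=1),
    train_dataset=mix,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tok, mlm=False),
)
trainer.train()  # the model internalizes the blended domain data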

AWS plans to offer the service through a subscription model rather than an open-ended compute consumption model. It didn’t disclose the price publicly, referring customers to an online dashboard, but CNBC reported that Nova Forge’s price starts at $100,000 per year.

Enterprises can start building a custom model via the new service in SageMaker Studio and later export it to Bedrock for consumption, AWS said. Nova Forge’s availability is currently limited to the US East region in Northern Virginia.

End-to-end encryption is next frontline in governments’ data sovereignty war with hyperscalers

1 December 2025 at 08:21

Data residency is no longer enough. As governments lose faith that storing data within their borders, but on someone else’s servers, provides real sovereignty, regulators are demanding something more fundamental: control over the encryption keys for their data.

Privatim, a collective of Swiss local government data protection officers, last week called on their employers to avoid the use of international software-as-a-service solutions for sensitive government data unless the agencies themselves implement end-to-end encryption. The resolution specifically cited Microsoft 365 as an example of the kinds of platforms that fall short.

“Most SaaS solutions do not yet offer true end-to-end encryption that would prevent the provider from accessing plaintext data,” said the Swiss data protection officers’ resolution. “The use of SaaS applications therefore entails a significant loss of control.”

Security analysts say this loss of control undermines the very concept of data sovereignty. “When a cloud provider has any ability to decrypt customer data, either through legal process or internal mechanisms, the data is no longer truly sovereign,” said Sanchit Vir Gogia, chief analyst at Greyhound Research.

The Swiss position isn’t isolated, Gogia said. Across Europe, Germany, France, Denmark, and the European Commission have each issued warnings or taken action, pointing to a loss of faith in the neutrality of foreign-owned hyperscalers, he said. “Switzerland distinguished itself by stating explicitly what others have implied: that the US CLOUD Act and foreign surveillance risk render cloud solutions lacking end-to-end encryption unsuitable for high-sensitivity public sector use.”

Encryption, location, location

Privatim’s resolution identified risks that geographic data residency cannot address. Globally operating companies offer insufficient transparency for authorities to verify compliance with contractual obligations, the group said. This opacity extends to technical implementations, change management, and monitoring of employees and subcontractors who can form long chains of external service providers.

Data stored in one jurisdiction can still be accessed by foreign governments under extraterritorial laws like the US Clarifying Lawful Overseas Use of Data (CLOUD) Act, said Ashish Banerjee, senior principal analyst at Gartner. Software providers can also unilaterally amend contract terms periodically, further reducing customer control, he said.

“Several clients in the Middle East and Europe have raised concerns that, regardless of where their data is stored, it could still be accessed by cloud providers — most of which are US-based,” Banerjee said.

Prabhjyot Kaur, senior analyst at Everest Group, said the Swiss stance accelerates a broader regulatory pivot toward technical sovereignty controls. “While the Swiss position is more stringent than most, it is not an isolated outlier,” she said. “It accelerates a broader regulatory pivot toward technical sovereignty controls, even in markets that still rely on contractual or procedural safeguards today.”

Given these limitations, Privatim called for stricter rules on cloud use at all levels of government: “The use of international SaaS solutions for particularly sensitive personal data or data subject to legal confidentiality obligations by public bodies is only possible if the data is encrypted by the responsible body itself and the cloud provider has no access to the key.”

This represents a departure from current practices, where many government bodies rely on cloud providers’ native encryption features. Services like Microsoft 365 offer encryption at rest and in transit, but Microsoft retains the ability to decrypt that data for operational purposes, compliance requirements, or legal requests.
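
As a minimal sketch of the “customer holds the key” model Privatim is calling for, here is client-side encryption where the key never leaves the agency and the provider stores only ciphertext; it uses the pyca/cryptography AES-GCM primitive, and upload_to_saas is a hypothetical stand-in for a provider’s storage API.

# Client-side encryption: the provider never sees plaintext or the key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held in the agency's own KMS/HSM
aead = AESGCM(key)

def encrypt_for_upload(plaintext: bytes, context: bytes) -> bytes:
    nonce = os.urandom(12)                                # unique per message
    ciphertext = aead.encrypt(nonce, plaintext, context)  # context bound as AAD
    return nonce + ciphertext                             # opaque blob for the provider

blob = encrypt_for_upload(b"citizen case file", b"dept=justice")
# upload_to_saas(blob)  # hypothetical call; decryption stays on-premises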

More security, less insight

Customer-controlled end-to-end encryption comes with significant trade-offs, analysts said.

“When the provider has zero visibility into plaintext, governments would face reduced search and indexing capabilities, limited collaboration features, and restrictions on automated threat detection and data loss prevention tooling,” said Kaur. “AI-driven productivity enhancements like copilots also rely on provider-side processing, which becomes impossible under strict end-to-end encryption.”

Beyond functionality losses, agencies would face significant infrastructure and cost challenges. They would need to operate their own key management systems, introducing governance overhead and staffing needs, and encryption and decryption at scale can affect system performance, Banerjee said.

“This might require additional hardware resources, increased latency in user interactions, and a more expensive overall solution,” he said.

These constraints mean most governments will likely adopt a tiered approach rather than blanket encryption, said Gogia. “Highly confidential content, including classified documents, legal investigations, and state security dossiers, can be wrapped in true end-to-end encryption and segregated into specialized tenants or sovereign environments,” he said. Broader government operations, including administrative records and citizen services, will continue to use mainstream cloud platforms with controlled encryption and enhanced auditability.

A shift in cloud computing power

If the Swiss approach gains momentum internationally, hyperscalers will need to strengthen technical sovereignty controls rather than relying primarily on contractual or regional assurances, Kaur said. “The required adaptations are already visible, particularly from Microsoft, which has begun rolling out more stringent models around customer-controlled encryption and jurisdictional access restrictions.”

The shift challenges fundamental assumptions in how cloud providers have approached government customers, according to Gogia. “This invalidates large portions of the existing government cloud playbooks that depend on data center residency, regional support, and contractual segmentation as the primary guarantees,” he said. “Client-side encryption, confidential computing, and external key management are no longer optional capabilities but baseline requirements for public sector contracts in high-compliance markets.”

The market dynamics could shift significantly as a result. Banerjee said this could create a two-tier structure: global cloud services for commercial customers less concerned about sovereignty, and premium sovereign clouds for governments demanding full control. “Non-US cloud providers and local vendors — such as emerging players in Europe — could gain market share by delivering sovereign solutions that meet strict encryption requirements,” he said.

Privatim’s recommendations apply specifically to Swiss public bodies and serve as guidance rather than binding policy. But the debate signals that data location alone may no longer satisfy regulators’ sovereignty concerns in an era where geopolitical rivalries are increasingly playing out through technology policy.

How to Choose the Right Virtual Data Room for Your Startup

Learn how to choose the right virtual data room for your startup with pricing models, key features, cost factors, and tips to secure the best VDR deal.


Stop Optimizing for Google. Start Optimizing for AI That Actually Answers Questions.

AI answer engines changed the game. It's no longer about ranking #1—it's about being cited in AI-generated responses. Learn how to build content infrastructure that ChatGPT, Perplexity, and Claude actually reference. Includes real implementation strategies from scaling B2B SaaS content.


SaaS Tools Black Friday Deals for Developers 2025

Explore the best SaaS tools Black Friday deals for developers in 2025. Save big on AI, security, automation, and productivity tools before offers expire.


SOC 2 Compliance for SaaS: How to Win and Keep Client Trust

23 April 2025 at 03:16

The Software as a Service (SaaS) industry has seen both great expansion and notable downturns in recent years, with key market shifts redefining the landscape. As companies adapt to the shifting SaaS landscape, SOC 2 compliance for SaaS has emerged as a key priority, not just as a checkbox for security, but as a signal of trustworthiness and a commitment to protecting customer data in an increasingly cautious market. After reaching record highs in 2021, the SaaS industry faced a major downturn in 2022, with company valuations dropping by almost 50%, according to Meritech Capital.

This downturn shook the market, creating pressures around profitability and customer retention. In 2024, however, it is a different story. Despite the challenges, the SaaS industry is now stabilizing, with B2B SaaS companies projected to grow at an 11% compound annual growth rate (CAGR) and B2C SaaS at 8% for the remainder of the year, according to a recent report from Paddle.

This period of cautious optimism underscores an undeniable priority for SaaS companies: client trust, particularly as clients increasingly scrutinize data security and compliance practices. Achieving SOC 2 (System and Organization Controls 2) compliance has become a critical step in building this trust, as it demonstrates that a company’s data handling and security protocols meet the appropriate standards.

In this guide, we will look at why SOC 2 is essential for SaaS companies and offer practical steps to achieve SOC 2 compliance in 2024.

Why SaaS companies need SOC 2

As a SaaS company, you handle vast amounts of customer data, from personal information to financial records. Data breaches and mishandling of that information can not only damage your reputation but also cost you your clients’ trust. As noted in the introduction, SOC 2 provides the trust and transparency you need to assure clients that their data is protected at every level.

Being SOC 2 compliant also helps you stand out in a competitive market: it shows how seriously you take data security and that you are willing to go the extra mile to safeguard your clients’ trust.

Plus, many companies need to comply with various regulations and frameworks to operate securely on a global scale, including ISO 27001, a widely recognized security standard. When comparing SOC 2 and ISO 27001, the key difference lies in their scope and focus.

While SOC 2 emphasizes trust principles for data security, ISO 27001 provides a broader framework for information security management. Other regulations, like GDPR or HIPAA, may also apply depending on your industry or location.

Once your SaaS company becomes SOC 2 compliant, you’ll not only demonstrate a proactive approach to data security but also align with broader regulatory standards. This will build trust, strengthen your reputation, and position your company as a security-focused partner in an increasingly competitive marketplace.


Core Trust Principles: Building blocks of SOC 2 for SaaS

SOC 2 compliance is built around five core trust principles that serve as the framework’s foundation. Each principle addresses a crucial aspect of data protection, making SOC 2 comprehensive and adaptable to SaaS environments:

  1. Security: Measures to protect against unauthorized access, such as firewalls, encryption, and intrusion detection.
  2. Availability: Ensuring systems are accessible to users, with safeguards against downtime and disruptions.
  3. Processing integrity: Assuring that systems process data accurately, reliably, and free from errors.
  4. Confidentiality: Protecting sensitive data from unauthorized disclosure, particularly in shared environments.
  5. Privacy: Ensuring that personal data is collected, used, retained, and disposed of in compliance with privacy regulations.

By adhering to the above principles, your SaaS organization can build a strong security foundation that meets client expectations and supports compliance.

Which type of SOC 2 report is suitable for SaaS?

  • SOC 2 Type 1: This report assesses the design of your company’s controls at a specific point in time and verifies whether the necessary controls are in place. If your SaaS company is just starting out with SOC 2 compliance, a Type 1 report is an ideal starting point.
  • SOC 2 Type 2: This report is more comprehensive and goes a step further, evaluating the effectiveness of those controls over a defined period (typically six months to a year). A Type 2 report is ideal if your SaaS company is looking to demonstrate sustained adherence to security practices, a requirement often favored by enterprise-level clients and partners who prioritize reliability and consistency in security measures.

Considering both options, you should first evaluate your company’s current stage in the SOC 2 compliance journey and the needs of your clients. If you’re just starting out, a SOC 2 Type 1 report is a good first step, as mentioned above; if you’re working with enterprise clients who require proof of ongoing security practices, a SOC 2 Type 2 report is more appropriate.

Key steps to achieve SOC 2 compliance for SaaS companies

1. Identify the relevant SOC 2 trust principles

Determine which SOC 2 trust principles apply to your business. While most SaaS providers prioritize the Security principle, client requirements may call for identifying and addressing other principles such as Availability or Confidentiality.

2. Conduct a readiness assessment

Perform a SOC 2 readiness assessment or gap analysis to identify gaps in your current security practices compared to SOC 2 requirements. This helps in understanding what controls need to be added or improved.

3. Establish and document security policies and procedures

Develop detailed, documented policies and procedures addressing each selected SOC 2 principle. These should cover areas like data encryption, access control, incident response, and more, and will serve as the foundation for your compliance efforts.

4. Implement required security controls

Based on the readiness assessment, implement or strengthen controls to meet SOC 2 standards. This can include access management protocols, network monitoring, secure software development practices, and continuous vulnerability assessments.

5. Train employees on SOC 2 requirements

Conduct regular training sessions to ensure employees understand their role in achieving and maintaining SOC 2 compliance. This step is crucial to prevent insider threats and maintain a high standard of security awareness.

6. Engage in ongoing monitoring and logging

Set up logging and monitoring systems to track access, detect security incidents, and provide evidence of control operation. For SOC 2 Type 2 compliance, monitoring must demonstrate consistent control effectiveness over the audit period (usually three to 12 months).
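
As a minimal sketch of what auditor-friendly logging can look like, here is a structured audit log recording who did what, when, and with what outcome; the event fields shown are illustrative choices, not a SOC 2 mandate.

# Structured audit logging: one JSON event per access attempt.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_access(user: str, resource: str, action: str, allowed: bool) -> None:
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                  # who
        "resource": resource,                          # what
        "action": action,
        "allowed": allowed,                            # outcome
    }))

log_access("alice@example.com", "customer_db", "read", True)
log_access("bob@example.com", "billing_export", "download", False)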

7. Conduct a readiness review with an auditor

Engage a SOC 2 auditor for a readiness review, which provides an informal evaluation of your current controls and identifies areas needing improvement. This step prepares you for the official audit by allowing time to address any remaining gaps.

8. Schedule and complete the SOC 2 audit

Once ready, schedule the SOC 2 audit with a certified public accounting (CPA) firm. For a Type 1 report, the audit will assess controls at a specific point in time, while a Type 2 audit will assess controls over an extended period.

9. Address findings and achieve continuous compliance

If the audit identifies areas for improvement, address them promptly. Once compliant, continue regular monitoring, updating policies, and conducting internal audits to maintain SOC 2 standards over time.

Check out the YouTube video “SOC2 Audit and Attestation” to learn in detail about the SOC 2 requirements and practical tips to ensure a smooth audit process.

The best way to get your SOC 2 ready

While securing SOC 2 compliance is definitely beneficial, the process can feel overwhelming, especially for SaaS companies that are just starting out: complex regulations and security standards can make it hard to know where to start and what to prioritize.

Plus, SOC 2 compliance requires not only the implementation of strong security measures but also an ongoing commitment to maintaining them, which can be time-consuming and resource-intensive. This is where VISTA InfoSec comes in. At VISTA InfoSec, we provide SOC 2 audit and attestation services, helping SaaS providers confidently achieve and sustain SOC 2 compliance.

Our approach to SOC 2 compliance is designed to take the stress out of the process. With us, you will not only meet compliance standards but also build a solid foundation of trust with your clients, proving your dedication to protecting their data. Contact us today to start your journey to SOC 2 compliance. You can also book a free one-time consultation with our experts by filling in the ‘Enquire Now’ form.

