
SAP employees’ trust in leadership has diminished since the restructuring

SAP’s restructuring may have been good for its bottom line, but behind the scenes, it has backfired.

The company did what it promised, said Greyhound Research chief analyst Sanchit Vir Gogia: It wrapped up its restructuring plan, affecting 10,000 employees, by early 2025, kept headcount steady, and delivered strong financial results. But, he said, “Numbers only tell half the story. Inside the organization, something broke.”

That’s evidenced by a recent internal survey, which revealed that many staff no longer trust company leaders or their strategy.

Trust in SAP’s executive board has fallen by six percentage points since April, to a mere 59%, Chief People Officer Gina Vargiu-Breuer wrote in an internal email seen by Bloomberg. In April 2021, that number was more than 80%, said Bloomberg, citing a local media report.

Confidence in SAP’s execution of its strategy has now dropped to 70%, from 77% in April of this year.

No time to learn

“That number drops even further in Germany, where only 38% say they fully trust leadership,” Gogia said. “And it’s not just a sentiment dip. Over 38,000 employees voiced specific concerns. The pattern is clear: confusion around new performance goals, not enough support to implement AI, unstable team dynamics, and barely any breathing room to learn.”

This isn’t resistance to change: “People get the vision,” he said. “The problem is executional load. You cannot drive large-scale transformation, especially AI-first initiatives, without building systems that carry your people with you.”

As part of SAP’s plan to transform into a skills-led company by 2028, 80% of employees were to be assigned to modernized job profiles by the middle of this year, Vargiu-Breuer told financial analysts attending a company event in May. This was intended to “unleash AI-personalized growth opportunities” based on an “externally benchmarked global skills taxonomy” of 1,500 future-ready skills, she said. The company devotes 15% of working time to continuous personal development, she added.

In the recent email seen by Bloomberg, however, she admitted: “The feedback shows that not every step [in the transformation] has landed how it should.”

Unfiltered Pulse

In an email statement to CIO, SAP said that the Unfiltered Pulse survey that highlighted the issues “was designed to gather nuanced feedback from employees, focusing on both strengths and areas for improvement. Employees, for instance, have stated that helpful feedback as well as learning and development opportunities are supporting their growth. Results related to team culture are also positive worldwide.”

SAP said 84% of its more than 100,000 employees responded to the most recent of the surveys, conducted every six months, and while there were some positives this time, “the findings also clearly indicate that employee engagement and trust in the board require attention. Following increases in sentiment in the previous iteration, there has been a decrease in the recent Pulse survey. We greatly value this feedback and are taking targeted action in response. We are therefore implementing specific measures to address the input from our employees and to drive meaningful change.”

SAP has not revealed details of these measures.

However, Info-Tech Research Group senior advisory analyst Yaz Palanichamy said, “SAP employees feel a sense of disillusionment as a result of the efforts undertaken in supporting this restructuring program.”

While SAP had framed the initiative as a strategic pivot towards embracing scalable cloud and AI growth, he said, the sheer number of roles affected, and the way that number ballooned well beyond the original target of 8,000, has left many of SAP’s employees concerned about their jobs and about senior leadership.

They worry about organizational stability and clarity, and about whether there is adequate support for their reskilling, he said: “If the cultural and operational gaps concerning morale, talent retention, and organizational role alignment are not proactively addressed, this could severely hinder SAP’s growth ambitions [in cloud and AI].”

SAP is not alone

SAP isn’t the only company that has faced these issues, Gogia pointed out. “Look beyond SAP, and the same symptoms show up elsewhere. Salesforce saw trust scores crater after its 2023 layoffs. Oracle’s morale took a hit when staff felt shut out of decisions. But SAP’s case stands out because performance at the top was solid, yet employee confidence eroded underneath. That divergence is dangerous. When momentum at the surface isn’t backed by alignment at the core, cracks appear in delivery, consistency breaks down, and partners feel the wobble. Execution doesn’t fail all at once. It frays. Quietly. Progressively.”

SAP isn’t ignoring the issues, he noted. It has begun to take action, appointing leaders, communicating priorities, and revisiting how teams are measured.

“That’s good,” he said. “But the real fix will come not from announcements but from behavioral evidence. Trust comes back when people stop guessing what’s next, when systems stabilize, when leaders stay visible, when workloads balance out.”

Behavioral drift: The hidden risk every CIO must manage

It’s the slow change no one notices: AI models evolve, people adapt to them, systems learn and then forget. Behavioral drift is quietly rewriting how enterprises operate, often invisible until it is too late.

In my own work leading AI-driven transformations, I have learned that change rarely happens through grand rewrites. It happens quietly, through hundreds of micro-adjustments that no dashboard flags. The model that once detected fraud with 95% accuracy slowly starts to slip. Employees clone automation scripts to meet deadlines. Chatbots begin answering differently than they were trained. Customers discover new ways to use your product that were never part of the design.

This slow, cumulative divergence between intended and actual behavior is called behavioral drift: a phenomenon in which systems, models and humans evolve out of sync with their original design. It sounds subtle, but its impact is enormous: it can mark the difference between reliable performance and systemic risk.

For CIOs running AI-native enterprises, understanding drift isn’t optional anymore. It’s the foundation of reliability, accountability and innovation.

Why behavioral drift matters for CIOs

1. It impacts governance

Under frameworks like the EU Artificial Intelligence Act (2024) and the NIST AI Risk Management Framework (2023), enterprises must continuously monitor AI systems for changes in accuracy, bias and behavior. Drift monitoring isn’t just a “nice to have” anymore; it’s a compliance requirement.

2. It erodes value quietly

Unlike outages, drift doesn’t announce itself. Systems keep running, dashboards stay green, but results slowly degrade. The ROI that once justified an initiative evaporates. CIOs need to treat behavioral integrity the same way they treat uptime: measured and managed continuously.

3. It’s also a signal for innovation

Not all drift is bad. When employees adapt workflows or customers use tools in unexpected ways, the result is productive drift. The best CIOs read these signals as early indicators of emerging value rather than deviations to correct.

What causes behavioral drift?

Drift doesn’t come from one source; it emerges from overlapping feedback loops among data, models, systems and people. It often starts with data drift, as new inputs enter the system. That leads to model drift, where relationships between inputs and outcomes change. Then system drift creeps in as code and configurations evolve. Finally, human drift completes the loop, as people adapt their behavior to the changing systems, often inventing workarounds.

These forces reinforce one another, creating a self-sustaining cycle. Unless CIOs monitor the feedback loop, they’ll notice it only when something breaks.

Chart 1: Forces behind behavioral drift

Ankush Dhar and Rohit Dhawan

The human side of drift

Behavioral drift doesn’t just happen in code; it happens in culture as well. When delivery pressures rise, employees often create shadow automations: unofficial scripts or AI shortcuts that bypass governance. Teams adapt dashboards, override AI recommendations or alter workflows to meet goals. These micro-innovations may start as survival tactics but gradually reshape institutional behavior.

This is where policy drift also emerges: procedures written for static systems fail to reflect how AI-driven environments evolve. CIOs must therefore establish behavioral observability — not just technical observability — encouraging teams to report workarounds and exceptions as data points, not violations.

Some organizations run drift retrospectives, which are cross-functional sessions modeled on Agile reviews to discuss where behaviors or automations have diverged from their original intent. This human-centered feedback loop complements technical drift detection and helps identify when adaptive behavior signals opportunity instead of non-compliance.

Detecting and managing drift

Forward-thinking CIOs now treat behavioral drift as an operational metric, not a research curiosity.

  • Detection. Define what normal looks like for your critical systems and instrument your dashboards accordingly. At Uber, engineers built automated drift-detection pipelines that compared live data distributions with training data, flagging early deviations before performance collapsed.
  • Diagnosis. Once drift is detected, it is critical to determine its cause. Is it harmful — risking compliance or customer trust — or productive, signaling innovation? Cross-functional analysis across IT, risk, data science and operations helps identify and separate what to fix from what to amplify.
  • Response. For harmful drift, retrain the model, adjust its settings or update your rules. For productive drift, document it and formalize it into best practices.
  • Institutionalize. Make drift management part of your quarterly reviews. Align it with NIST’s AI RMF 1.0 “Measure and Manage” functions. Behavioral drift shouldn’t live in the shadows; it belongs on your risk dashboard.
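The detection step above can be sketched in a few lines. A minimal, hypothetical drift check (the function name, significance threshold and sample data are illustrative, not from any specific pipeline) compares a feature’s live distribution against its training baseline with a two-sample Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(training, live, alpha=0.01):
    """Flag drift when a feature's live distribution differs
    significantly from its training-time distribution,
    using a two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(training, live)
    return {"statistic": stat, "p_value": p_value, "drifted": p_value < alpha}

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 5_000)   # distribution seen at training time
shifted = rng.normal(0.4, 1.0, 5_000)    # live traffic with a mean shift

print(check_feature_drift(baseline, baseline))  # identical traffic: no drift flagged
print(check_feature_drift(baseline, shifted))   # mean shift: drift flagged
```

In a production setting, a check like this would run on a schedule per feature, with the flag feeding the diagnosis step rather than triggering retraining directly.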

Frameworks and metrics for drift management

Once CIOs recognize how drift unfolds, the next challenge is operationalizing its detection and control. CIOs can anchor their drift monitoring efforts using established standards such as the NIST AI Risk Management Framework or the ISO/IEC 23894:2023 standard for AI risk governance. Both emphasize continuous validation loops and quantitative thresholds for behavioral integrity.

In practice, CIOs can operationalize this by implementing model observability stacks that include:

  • Data drift metrics: Utilize population stability index (PSI), Jensen–Shannon divergence and KL divergence to measure how current input data deviates from training distributions.
  • Model drift metrics: Monitor changes in F1 Score, precision-recall trade-offs or calibration curves over time to assess predictive reliability.
  • Behavioral drift dashboards: Combine telemetry from system logs, automation scripts and user activity to visualize divergences across people, process and technology layers.
  • Automated retraining pipelines integrated with CI/CD workflows, where drift beyond tolerance automatically triggers retraining or human review.

Some organizations use tools from Evidently AI or Fiddler AI to implement these controls, embedding drift management directly into their MLOps life cycle. The goal isn’t to eliminate drift altogether: it’s to make it visible, measurable and actionable before it compounds into systemic risk.

Seeing drift in action

Every dashboard tells a unique story. But the most valuable stories aren’t about uptime or throughput; they’re about behavior. When your fraud model’s precision quietly slips or when customer-service escalations surge or when employees automate workarounds outside official tools, your organization is sending a message that something fundamental is shifting. These aren’t anomalies; they’re patterns of evolution. CIOs who can read these signals early don’t just prevent failure, they steer innovation.

The visual below captures that moment when alignment begins to fade. Performance starts as expected, but reality soon bends away from prediction. That growing distance, reflected as the space between designed intent and actual behavior, is where risk hides, but also where opportunity begins.

Chart 2: Behavioral drift over time

Ankush Dhar and Rohit Dhawan

From risk control to strategic advantage

Behavioral drift management isn’t only defensive: it’s a strategic sensing mechanism. Global financial leaders such as Mastercard and American Express have publicly reported measurable improvements from monitoring how employees and customers interact with AI systems in real time. These adaptive behaviors, while not formally labeled as behavioral drift, illustrate how organizations can turn unplanned human-AI adjustments into structured innovation.

For example, Mastercard’s customer-experience teams have leveraged AI insights to refine workflows and enhance service consistency, while American Express has used conversational-AI monitoring to identify and scale employee-driven adaptations that reduced IT escalations and improved service reliability.

By reframing drift as organizational learning, CIOs can turn adaptive behaviors into repeatable value creation. In continuous-learning enterprises, managing drift becomes a feedback engine for innovation, linking operational resilience with strategic agility.

The mindset shift

The most advanced CIOs are redefining behavioral management as the foundation of digital leadership. In the AI-native enterprise, behavior is infrastructure. When systems learn, people adapt and markets shift, your job isn’t to freeze behavior; it’s to keep everything aligned. Ignoring drift leads to slow decay. Over-controlling it kills creativity. Managing it well builds resilient, adaptive organizations that learn faster than their competitors. The CIO of tomorrow isn’t just the architect of technology; they’re the steward of enterprise behavior.

CIOs who master this balance build learning architectures, systems and cultures designed to evolve safely. The organizations that thrive in the AI era won’t be those that eliminate drift, but those that can sense, interpret and harness it faster than their competitors.

This article is published as part of the Foundry Expert Contributor Network.

AI isn’t teaching machines to think — it’s teaching leaders to rethink

When we hosted our first-ever IT leadership conference, FUSION ’25, I expected to spend most of my time thinking about AI. Instead, I found myself thinking about people.

Moderating a panel of CIOs that day changed the way I looked at tech leadership.

It wasn’t a conversation about software or infrastructure or even innovation. It was about what happens when machines start learning and how we, as CIOs, must learn alongside them.

Because here’s the thing: AI doesn’t just upgrade your systems. It rewires your culture, your expectations, your patience and your definition of progress.


The real transformation isn’t happening inside the algorithms; it’s happening inside us.

Here are seven lessons I took away from that room. Not as a technologist. As a student of leadership!

1. Clarity is the new speed

Every organization today is under pressure to move fast on AI. But as Chad Ghosn, CIO of Ammex Corporation, reminded us, “Speed without clarity just creates noise.”

The teams that move with confidence aren’t the ones automating the most, but the ones that can clearly articulate why something matters.

Before every AI experiment, they ask:

  • What decision does this help someone make faster?
  • What problem does it actually solve?
  • How will we measure if it’s working?

It turns out that clarity is the real form of speed. Because when everyone knows why they’re running, no one needs to be told how fast to go.

2. Culture learns slower than code

Every technical leap has a human half-life — the time it takes for people to catch up emotionally and behaviorally.

Chad shared how his team held open “AI office hours,” not to show off tools, but to help people talk through what they were afraid of. It wasn’t training; it was trust-building.

That struck me deeply. We like to talk about digital transformation as a technical project, but it’s actually a cultural one. Because no matter how good your model is, belief can’t be deployed; it has to be earned.

3. Trust takes repetition, not rhetoric

“AI handles 98% of the task, but I still like a person to press ‘Send,’” Chad joked.

That last 2% (the human check) is where confidence lives. It’s not inefficiency; it’s assurance.

You can’t tell people to trust the system. You have to let them watch it earn their trust, one accurate response at a time.

That’s leadership in this new world: managing the space between almost right and certain.

4. Data is everyone’s responsibility

When we talk about AI, we often forget that intelligence is only as good as the information it learns from.

Venki Rangachari, CDO at HPE Networking, shared how his teams appoint data stewards (people accountable for the quality of their datasets) across departments, not just within the infrastructure the data sits in.

Mark Gill, Senior Director of IT at Zuora, echoed this, pointing out that data can’t live in isolation. The minute you make it someone else’s problem, it becomes everyone’s bottleneck.

A recent Gartner study also found that 34% of IT leaders at low-AI-maturity organizations cite data availability and quality among their top hurdles in implementing AI.

Clearly, AI doesn’t fail because it’s complex; it fails because it’s missing context.

Good data is less about accuracy and more about agreement. When everyone defines truth the same way, machines can finally learn something meaningful.

5. Guardrails are the architecture of trust

Innovation needs freedom, but freedom needs boundaries.

Venki spoke about how HPE developed an internal framework termed “ChatHPE” to ensure that AI runs within trusted environments, alongside an ethics committee that reviews new use cases.

It reminded me that guardrails aren’t constraints. They’re proof that we take innovation seriously.

In leadership, too, the goal isn’t to remove friction; it’s to define where it should exist.

6. The CIO’s job description just changed

As Mark Gill put it, “My job used to be just managing systems. Now, I’m managing systems with reasoning.”

That line has lived rent-free in my head ever since.

The modern CIO isn’t just the keeper of infrastructure; they’re the interpreter of intelligence.

They ensure not only that systems are connected, but that decisions are, too.

In many ways, they’ve become the organization’s conscience: the ones who must decide not just what AI can do, but what it should.

7. Experience is the only metric that matters

For all the dashboards and KPIs, the conversation kept circling back to one thing: employee experience.

Patrick Young, Senior Director of IT at Skydio, described how his team deployed AI agents as the first line of support. The result wasn’t just faster resolutions — it was calmer, more confident employees.

“When systems quietly work,” he said, “people feel seen.”

That line captured the heart of it for me.

AI shouldn’t feel like a replacement for people. It should feel like a quiet, invisible partner that gives them time, focus and clarity back.

As the session ended, I realized that AI isn’t teaching machines to think; it’s teaching leaders to rethink.

What we’re really learning is how to lead in uncertainty, how to slow down before we speed up, how to ask sharper questions before demanding instant answers and how to lead with empathy before leading with problems.

Leaders who chase speed without clarity will find themselves buried in chaos. Those who ignore people will end up with tools no one wants. And those who confuse efficiency with progress will miss the point entirely: that the purpose of intelligence, artificial or otherwise, is to make us more human, not less.

As I think about the road ahead, I envision a new kind of organization that balances purpose and innovation with introspection. A place where machines handle the repetition and people reclaim the reflection.

Technology may learn faster, but wisdom is still a human pursuit.

When machines learn, so must we.

This article is published as part of the Foundry Expert Contributor Network.
