
Microsoft shareholders invoke Orwell and Copilot as Nadella cites ‘generational moment’

From left: Microsoft CFO Amy Hood, CEO Satya Nadella, Vice Chair Brad Smith, and Investor Relations head Jonathan Nielsen at Friday’s virtual shareholder meeting. (Screenshot via webcast)

Microsoft’s annual shareholder meeting Friday played out as if on a split screen: executives describing a future where AI cures diseases and secures networks, and shareholder proposals warning of algorithmic bias, political censorship, and complicity in geopolitical conflict.

One shareholder, William Flaig, founder and CEO of Ridgeline Research, quoted two authorities on the topic — George Orwell’s 1984 and Microsoft’s Copilot AI chatbot — in requesting a report on the risks of AI censorship of religious and political speech.

Flaig invoked Orwell’s dystopian vision of surveillance and thought control, citing the Ministry of Truth that “rewrites history and floods society with propaganda.” He then turned to Copilot, which responded to his query about an AI-driven future by noting that “the risk lies not in AI itself, but in how it’s deployed.”

In a Q&A session during the virtual meeting, Microsoft CEO Satya Nadella said the company is “putting the person and the human at the center” of its AI development, with technology that users “can delegate to, they can steer, they can control.”

Nadella said Microsoft has moved beyond abstract principles to “everyday engineering practice,” with safeguards for fairness, transparency, security, and privacy.

Brad Smith, Microsoft’s vice chair and president, said broader societal decisions, like what age kids should use AI in schools, won’t be made by tech companies. He cited ongoing debates about smartphones in schools nearly 20 years after the iPhone.

“I think quite rightly, people have learned from that experience,” Smith said, drawing a parallel to the rise of AI. “Let’s have these conversations now.”

Microsoft’s board recommended that shareholders vote against all six outside proposals, which covered issues including AI censorship, data privacy, human rights, and climate. Final vote tallies have yet to be released as of publication time, but Microsoft said shareholders turned down all six, based on early voting. 

While the shareholder proposals focused on AI risks, much of the executive commentary focused on the long-term business opportunity. 

Nadella described building a “planet-scale cloud and AI factory” and said Microsoft is taking a “full stack approach,” from infrastructure to AI agents to applications, to capitalize on what he called “a generational moment in technology.”

Microsoft CFO Amy Hood highlighted record results for fiscal year 2025 — more than $281 billion in revenue and $128 billion in operating income — and pointed to roughly $400 billion in committed contracts as validation of the company’s AI investments.

Hood also addressed pre-submitted shareholder questions about the company’s AI spending, pushing back on concerns about a potential bubble. 

“This is demand-driven spending,” she said, noting that margins are stronger at this stage of the AI transition than at a comparable point in Microsoft’s cloud buildout. “Every time we think we’re getting close to meeting demand, demand increases again.”

Cultural Lag Leaves Security as the Weakest Link


For too long, security has been cast as a bottleneck, swooping in after developers build and engineers test to slow things down. The reality is blunt: if it’s bolted on, you’ve already lost. The organizations that win make security part of every decision, from the first line of code to the last boardroom conversation...

The post Cultural Lag Leaves Security as the Weakest Link appeared first on Security Boulevard.

CDC tells staff telework reasonable accommodations ‘will be repealed,’ as HHS sets stricter rules

The Department of Health and Human Services is setting new restrictions on telework as a reasonable accommodation for employees with disabilities.

A new, departmentwide reasonable accommodation policy shared with employees this week states that all requests for telework, remote work, or reassignment must be reviewed and approved by an assistant secretary or a higher-level official — a decision that is likely to slow the approval process.

The new policy, as Federal News Network reported on Monday, generally restricts employees from using telework as an “interim accommodation,” while the agency processes their reasonable accommodation request.

“Telework is not appropriate for an interim accommodation, unless approved at the assistant secretary level or above,” the new policy states.

The updated reasonable accommodation policy, signed on Sept. 15 by HHS Chief Human Capital Officer and Deputy Assistant Secretary Thomas Nagy, Jr., replaces a more than decade-old policy, and applies to all HHS component agencies.

“This policy is effective immediately and must be followed by HHS component in accordance with applicable laws, regulations, and departmental policy,” the policy states.

It’s not clear how long it will take HHS to review each individual reasonable accommodation request. But HHS, which now handles all reasonable accommodation requests from its component agencies, faces a backlog of more than 3,000 cases — which it expects will take six to eight months to complete.

The new policy allows frontline supervisors to grant “simple, obvious requests” without consulting with an HHS reasonable accommodation coordinator, but prohibits them from granting telework or remote work.

“Telework and reassignment are not simple, obvious requests,” the policy states.

The policy also directs HHS to collect data on the “number of requests that involve telework or remote work, in whole or in part.”

A memo from the Centers for Disease Control and Prevention states that “all telework related to RAs will be repealed,” and that CDC leadership will no longer be allowed to approve telework as an interim accommodation.

“Staff currently on an agreement will need to report back to the worksite,” the memo states.

The CDC memo states employees can still request telework as a reasonable accommodation, but “until they are reviewed and approved by HHS they must report to the worksite.”

It also states that employees can request what was previously known as “medical telework,” which can be approved by the CDC chief operating officer for periods of around six months.

According to the memo, CDC can temporarily grant medical telework to employees who are recovering from chemotherapy, hip replacement surgery or pregnancy complications.

If HHS rejects a reasonable accommodation, the CDC memo states an employee can challenge the decision before an appeal board. The CDC, however, expects that the appeal will “also take months to process,” and that employees must continue to work from the office while the appeal is pending.

“We know this is going to be tough, especially on front-line supervisors,” the CDC memo states.

HHS Press Secretary Emily Hilliard said in a statement that the new reasonable accommodation policy “establishes department-wide procedures to ensure consistency with federal law.”

“Interim accommodations may be provided while cases move through the reasonable-accommodation process toward a final determination. The department remains committed to processing these requests as quickly as possible,” Hilliard said.

Jodi Hershey, a former FEMA reasonable accommodation specialist and the founder of EASE, LLC, a firm that helps employers and employees navigate workplace accessibility issues, said the new policy suggests HHS is “playing fast and loose with the Rehabilitation Act, and what’s required of them” under the legislation.

“This is the most inefficient way to handle reasonable accommodations possible. By centralizing reasonable accommodation-deciding officials, you’re removing the decision from the person who knows the most about the job. The immediate supervisor or manager knows what the job is, how the job is normally performed. They know the employee. When you remove that level of familiarity from the process, and you move it up the chain … that person has no idea what the job even is. They don’t know the person that they’re dealing with. They don’t know the office. They don’t know the particulars at all,” Hershey said.

In a message obtained by Federal News Network, Cheryl Prigodich, principal deputy director for the CDC’s Office of Safety, Security and Asset Management, told an HHS employee that because their one-year reasonable accommodation had expired, they needed to submit a new request for approval.

“The timeframe for approval on your request is not known at this time. In the interim, however, we are not allowed to approve telework as an interim accommodation for a reasonable accommodation,” Prigodich said.

Prigodich told the employee that, according to HHS, employees must use annual leave, use the 80 hours of annual ad hoc telework available to each HHS employee, take leave under the Family and Medical Leave Act, or report to the workplace “with the possibility of another acceptable accommodation (work tour, physical modifications to the workplace, etc).”

Prigodich directed the employee to submit their reasonable accommodation renewal request to the HHS assistant secretary for administration, but recommended that they “efficiently summarize your concern and request (with appropriate documentation) into no greater than a single-page memo.”

“The ASA will not want to comb through previous emails or too many attachments,” Prigodich said.

The one-page request, she added, should include “why no other alternative accommodation will work,” documentation of the disability, and records showing the previously approved reasonable accommodation.

“I know this is frustrating. We are certainly frustrated too — and this represents a significant policy change for a great number of people who rely on this type of accommodation for their personal health and needs,” Prigodich said.

The post CDC tells staff telework reasonable accommodations ‘will be repealed,’ as HHS sets stricter rules first appeared on Federal News Network.

© Federal News Network

Fired EPA employees challenge agency, alleging free speech violations

Former Environmental Protection Agency employees who were fired after signing a letter criticizing the Trump administration are now appealing their dismissals before the Merit Systems Protection Board.

The six former EPA employees, who were among roughly 140 workers who signed a “declaration of dissent” in June, argued their firings were not only an illegal response to exercising their First Amendment rights, but also a form of retaliation for “perceived political affiliation,” and executed without cause.

The former employees are represented in the MSPB case by attorneys from several law firms and organizations, including Public Employees for Environmental Responsibility (PEER).

“Federal employees have the right to speak out on matters of public concern in their personal capacities, even when they do so in dissent,” Joanna Citron Day, general counsel for PEER, said Wednesday. “EPA is not only undermining the First Amendment’s free speech protections by trying to silence its own workforce, it is also placing U.S. citizens in peril by removing experienced employees who are tasked with carrying out EPA’s critical mission.”

An EPA spokesperson declined to comment, stating that the agency has a longstanding practice of not commenting on pending litigation.

The June dissent letter from EPA employees warned that the Trump administration and EPA Administrator Lee Zeldin were “recklessly undermining” the agency’s mission, and criticized the administration’s policies on public health and the environment. The letter led EPA to launch an investigation into employees who signed the letter, resulting in at least eight probationary employees and nine tenured career employees receiving termination notices. Dozens more who signed the declaration were suspended without pay for two weeks, according to the American Federation of Government Employees.

Justin Chen, president of AFGE Council 238, which represents EPA employees, said the firings of these employees added to a “brain drain” at EPA, on top of other workforce losses stemming from the deferred resignation program (DRP) and other actions from the Trump administration this year.

“These were subject matter experts — extremely talented people who were working on behalf of the American public to protect them,” Chen said in an interview. “The loss of these people will be felt for quite some time. And honestly, the intent of this action is to put a chilling effect on the rest of the civil service.”

A termination notice delivered to one of the EPA employees shows that in response to concerns of free speech and whistleblower protection violations, the agency’s general counsel office stated that it believed the issues raised “do not outweigh the seriousness of your offense.”

“The Agency is not required to tolerate actions from its employees that undermine the Agency’s decisions, interfere with the Agency’s operations and mission, and the efficient fulfillment of the Agency’s responsibilities to the public,” the termination letter reads. “You hold a trust-sensitive position that requires sound judgement and alignment with the Agency’s communication strategies.”

Despite the employee having a high performance rating and a lack of disciplinary history, the termination letter stated that “the serious nature of your misconduct outweighs all mitigating factors.”

“I also considered that you took no responsibility for your conduct, which reflects a lack of acknowledgment of the seriousness of your actions and raises concerns about your ability to exercise sound judgment and undermines your potential for rehabilitation,” the letter reads.

In August, EPA leadership also canceled all its collective bargaining agreements and told its unions it would no longer recognize them. The decision came after an appeals court allowed agencies to move forward with implementing President Donald Trump’s March executive order to terminate union contracts at a majority of federal agencies.

“If we still had our collective bargaining rights, none of this would have happened in the first place. We would have immediately filed grievances,” Chen said. “[With the MSPB appeal] our hope is that these employees get everything back — that they will have full reinstatement and full back pay.”

The post Fired EPA employees challenge agency, alleging free speech violations first appeared on Federal News Network.

© AP Photo/Pablo Martinez Monsivais

FILE - The Environmental Protection Agency (EPA) Building is shown in Washington, Sept. 21, 2017. (AP Photo/Pablo Martinez Monsivais, File)

Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity


Organizations are spending more than ever on cybersecurity, layering defenses around networks, endpoints, and applications. Yet a company’s documents, one of the most fundamental business assets, remain an overlooked weak spot. Documents flow across every department, cross company boundaries, and often contain the very data that compliance officers and security teams work hardest to protect...

The post Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity appeared first on Security Boulevard.

Behavioral drift: The hidden risk every CIO must manage

It’s the slow change no one notices: AI models evolve and people adapt to that. Systems learn and then they forget. Behavioral drift is quietly rewriting how enterprises operate, often without anyone noticing until it is too late.

In my own work leading AI-driven transformations, I have learned that change rarely happens through grand rewrites. It happens quietly, through hundreds of micro-adjustments that no dashboard flags. The model that once detected fraud with 95% accuracy slowly starts to slip. Employees sometimes clone automation scripts to meet deadlines. Chatbots begin answering differently than they were trained. Customers discover new ways to use your product that were never anticipated in its design.

This slow, cumulative divergence between intended and actual behavior is called behavioral drift: a phenomenon in which systems, models and humans evolve out of sync with their original design. It sounds subtle, but its impact is enormous: it can be the line between reliable performance and systemic risk.

For CIOs running AI-native enterprises, understanding drift isn’t optional anymore. It’s the foundation of reliability, accountability and innovation.

Why behavioral drift matters for CIOs

1. It impacts governance

Under frameworks like the EU Artificial Intelligence Act (2024) and the NIST AI Risk Management Framework (2023), enterprises must continuously monitor AI systems for changes in accuracy, bias and behavior. Drift monitoring isn’t a “nice to have” anymore; it’s a compliance requirement.

2. It erodes value quietly

Unlike outages, drift doesn’t announce itself. Systems keep running, dashboards stay green, but results slowly degrade. The ROI that once justified an initiative evaporates. CIOs need to treat behavioral integrity the same way they treat uptime: measured and managed continuously.

3. It’s also a signal for innovation

Not all drift is bad. When employees adapt workflows or customers use tools in unexpected ways, that can be productive drift. The best CIOs read these signals as early indicators of emerging value rather than deviations to correct.

What causes behavioral drift?

Drift doesn’t come from one source; it emerges from overlapping feedback loops among data, models, systems and people. It often starts with data drift, as new inputs enter the system. That leads to model drift, where relationships between inputs and outcomes change. Then system drift creeps in as code and configurations evolve. Finally, human drift completes the loop where people adapt their behavior to the changing systems, often inventing workarounds.

These forces reinforce one another, creating a self-sustaining cycle. Unless CIOs monitor the feedback loop, they’ll notice it only when something breaks.

Chart 1: Forces behind behavioral drift

Ankush Dhar and Rohit Dhawan

The human side of drift

Behavioral drift doesn’t just happen in code; it happens in culture as well. When delivery pressures rise, employees often create shadow automations: unofficial scripts or AI shortcuts that bypass governance. Teams adapt dashboards, override AI recommendations or alter workflows to meet goals. These micro-innovations may start as survival tactics but gradually reshape institutional behavior.

This is where policy drift also emerges: procedures written for static systems fail to reflect how AI-driven environments evolve. CIOs must therefore establish behavioral observability — not just technical observability — encouraging teams to report workarounds and exceptions as data points, not violations.

Some organizations run drift retrospectives, which are cross-functional sessions modeled on Agile reviews to discuss where behaviors or automations have diverged from their original intent. This human-centered feedback loop complements technical drift detection and helps identify when adaptive behavior signals opportunity instead of non-compliance.

Detecting and managing drift

Forward-thinking CIOs now treat behavioral drift as an operational metric, not a research curiosity.

  • Detection. Define what normal looks like for your critical systems and instrument your dashboards accordingly. At Uber, engineers built automated drift-detection pipelines that compared live data distributions with training data, flagging early deviations before performance collapsed. (A minimal sketch of this kind of check follows this list.)
  • Diagnosis. Once drift is detected, it is critical to determine its cause. Is it harmful — risking compliance or customer trust — or productive, signaling innovation? Cross-functional analysis across IT, risk, data science and operations helps identify and separate what to fix from what to amplify.
  • Response. For harmful drift, retrain the model, adjust its settings or update your rules. For productive drift, document and formalize it into best practices.
  • Institutionalize. Make drift management part of your quarterly reviews. Align it with NIST’s AI RMF 1.0 “Measure and Manage” functions. Behavioral drift shouldn’t live in the shadows; it belongs on your risk dashboard.
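
To make the detection step concrete, here is a minimal sketch of such a check, assuming scipy and numpy are available; the feature names, data and p-value threshold are illustrative assumptions, and this is not Uber’s actual pipeline:

```python
# Minimal drift-detection sketch: compare live feature distributions against
# a training-time baseline with a two-sample Kolmogorov-Smirnov test.
# Feature names, data and the threshold below are illustrative assumptions.
import numpy as np
from scipy import stats

P_VALUE_THRESHOLD = 0.01  # below this, flag the feature as drifted

def detect_drift(baseline: dict, live: dict) -> dict:
    """Return {feature: True/False}, True when the live distribution has drifted."""
    flags = {}
    for feature, baseline_values in baseline.items():
        # A small p-value means the live sample is unlikely to come from
        # the same distribution as the training baseline.
        _, p_value = stats.ks_2samp(baseline_values, live[feature])
        flags[feature] = p_value < P_VALUE_THRESHOLD
    return flags

rng = np.random.default_rng(42)
baseline = {"transaction_amount": rng.normal(100, 20, 10_000)}
live = {"transaction_amount": rng.normal(115, 20, 10_000)}  # simulated slow shift
print(detect_drift(baseline, live))  # {'transaction_amount': True}
```

In practice a check like this would run on a schedule against each model’s critical inputs, with flagged features routed to the diagnosis step above.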

Frameworks and metrics for drift management

Once CIOs recognize how drift unfolds, the next challenge is operationalizing its detection and control. CIOs can anchor their drift monitoring efforts using established standards such as the NIST AI Risk Management Framework or the ISO/IEC 23894:2023 standard for AI risk governance. Both emphasize continuous validation loops and quantitative thresholds for behavioral integrity.

In practice, CIOs can operationalize this by implementing model observability stacks that include:

  • Data drift metrics: Utilize population stability index (PSI), Jensen–Shannon divergence and KL divergence to measure how current input data deviates from training distributions.
  • Model drift metrics: Monitor changes in F1 Score, precision-recall trade-offs or calibration curves over time to assess predictive reliability.
  • Behavioral drift dashboards: Combine telemetry from system logs, automation scripts and user activity to visualize divergences across people, process and technology layers.
  • Automated retraining pipelines integrated with CI/CD workflows, where drift beyond tolerance automatically triggers retraining or human review.

Some organizations use tools from Evidently AI or Fiddler AI to implement these controls, embedding drift management directly into their MLOps life cycle. The goal isn’t to eliminate drift altogether: it’s to make it visible, measurable and actionable before it compounds into systemic risk.
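
As a small illustration of the first bullet above, here is a sketch of a population stability index (PSI) calculation wired to a retraining trigger; the bucket count and the 0.2 cutoff are common rules of thumb assumed here, not values from this article:

```python
# Minimal PSI sketch: bucket the training baseline, measure how live data
# shifts across those buckets, and trigger retraining or review past a
# threshold. Bucket count and the 0.2 cutoff are conventional assumptions.
import numpy as np

def population_stability_index(baseline, live, buckets=10):
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
    # Clip live values into the baseline's observed range so out-of-range
    # drift still lands in the edge buckets.
    live = np.clip(live, edges[0], edges[-1])
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    eps = 1e-6  # avoid log(0) and division by zero
    expected_pct = expected / expected.sum() + eps
    actual_pct = actual / actual.sum() + eps
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

RETRAIN_THRESHOLD = 0.2  # rule of thumb: PSI above 0.2 signals significant shift

def should_retrain(baseline, live) -> bool:
    return population_stability_index(baseline, live) > RETRAIN_THRESHOLD
```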

Seeing drift in action

Every dashboard tells a unique story. But the most valuable stories aren’t about uptime or throughput; they’re about behavior. When your fraud model’s precision quietly slips or when customer-service escalations surge or when employees automate workarounds outside official tools, your organization is sending a message that something fundamental is shifting. These aren’t anomalies; they’re patterns of evolution. CIOs who can read these signals early don’t just prevent failure, they steer innovation.

The visual below captures that moment when alignment begins to fade. Performance starts as expected, but reality soon bends away from prediction. That growing distance, reflected as the space between designed intent and actual behavior, is where risk hides, but also where opportunity begins.

Chart 2: Behavioral drift over time

Ankush Dhar and Rohit Dhawan

From risk control to strategic advantage

Behavioral drift management isn’t only defensive: it’s a strategic sensing mechanism. Global financial leaders such as Mastercard and American Express have publicly reported measurable improvements from monitoring how employees and customers interact with AI systems in real time. These adaptive behaviors, while not formally labeled as behavioral drift, illustrate how organizations can turn unplanned human-AI adjustments into structured innovation.

For example, Mastercard’s customer-experience teams have leveraged AI insights to refine workflows and enhance service consistency, while American Express has used conversational-AI monitoring to identify and scale employee-driven adaptations that reduced IT escalations and improved service reliability.

By reframing drift as organizational learning, CIOs can turn adaptive behaviors into repeatable value creation. In continuous-learning enterprises, managing drift becomes a feedback engine for innovation, linking operational resilience with strategic agility.

The mindset shift

The most advanced CIOs are redefining behavioral management as the foundation of digital leadership. In the AI-native enterprise, behavior is infrastructure. When systems learn, people adapt and markets shift, your job isn’t to freeze behavior; it’s to keep everything aligned. Ignoring drift leads to slow decay. Over-controlling it kills creativity. Managing it well builds resilient, adaptive organizations that learn faster than their competitors. The CIO of tomorrow isn’t just the architect of technology; they’re the steward of enterprise behavior.

CIOs who master this balance build learning architectures, systems and cultures designed to evolve safely. The organizations that thrive in the AI era won’t be those that eliminate drift, but those that can sense, interpret and harness it faster than their competitors.

This article is published as part of the Foundry Expert Contributor Network.

Cybersecurity Coalition to Government: Shutdown is Over, Get to Work


The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.

The post Cybersecurity Coalition to Government: Shutdown is Over, Get to Work appeared first on Security Boulevard.

The Cyber Resilience Act and SaaS: Why Compliance is Only Half the Battle 


The EU’s Cyber Resilience Act is reshaping global software security expectations, especially for SaaS, where shared responsibility, lifecycle security and strong identity protections are essential as attackers increasingly “log in” instead of breaking in.

The post The Cyber Resilience Act and SaaS: Why Compliance is Only Half the Battle  appeared first on Security Boulevard.

Securing AI-Generated Code in Enterprise Applications: The New Frontier for AppSec Teams 


AI-generated code is reshaping software development and introducing new security risks. Organizations must strengthen governance, expand testing and train developers to ensure AI-assisted coding remains secure and compliant.

The post Securing AI-Generated Code in Enterprise Applications: The New Frontier for AppSec Teams  appeared first on Security Boulevard.

NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation

By: NSFOCUS

SANTA CLARA, Calif., Nov 25, 2025 – Recently, NSFOCUS Generative Pre-trained Transformer (NSFGPT) and Intelligent Security Operations Platform (NSFOCUS ISOP) were recognized by the internationally renowned consulting firm Frost & Sullivan and won the 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation [1]. Frost & Sullivan Best Practices Recognition awards companies each year in […]

The post NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.

The post NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation appeared first on Security Boulevard.

Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

The promise of generative AI (genAI) and agentic AI is electrifying. From automating complex tasks to unlocking unprecedented creativity, these technologies are poised to redefine your enterprise landscape. But as a CIO, you know that with great power comes great responsibility — and significant risk. The headlines are already filled with cautionary tales of data breaches, biased outputs and compliance nightmares.

The truth is, without robust guardrails and a clear governance framework, the very innovations you champion could become your biggest liabilities. This isn’t about stifling innovation; it’s about channeling it responsibly, ensuring your AI initiatives drive value without compromising security, ethics or trust.

Let’s dive into the critical areas where you must lead the charge.

Guardrails & governance: Why they are necessary, but no longer sufficient for agents

Many in the industry confuse the function of guardrails, treating them as a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control.

Think of it this way:

  • AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees (like your AI review board) and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command.
  • AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer.

While we must distinguish between governance (the overarching policy framework) and guardrails (technical, in-the-moment controls), the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability (the ability to chain tools and models).

  • Guardrail failure mode: PII/Moderation. Core flaw: pattern reliance and shallow filters. CIO takeaway (why static fails): fails when sensitive data is slightly obfuscated (e.g., using “SNN” instead of “SSN”) or harmful content is wrapped in code/leetspeak.
  • Guardrail failure mode: Hallucination/Jailbreak. Core flaw: circular confidence and probabilistic defense. CIO takeaway (why static fails): relies on one model to judge another’s truthfulness or intent; the defense is easily manipulated, as the system can be confidently wrong or tricked by multi-turn or encoded attacks.

The agent’s ability to choose an alternate, unguarded path renders simple static checks useless. Your imperative is to move from relying on these flawed, soft defenses to implementing continuous, deterministic control.

The path forward: Implementing continuous control

To address these systemic vulnerabilities, CIOs must take the following actions:

  1. Mandate hard data boundaries: Replace the agent’s probabilistic PII detection with deterministic, non-LLM-based security tools (DLP, tokenization) enforced by your API gateway. This creates an un-bypassable security layer for all data entering or leaving the agent.
  2. Shift to pre-execution governance: Require all agentic deployments to utilize an agent orchestration layer that performs a pre-execution risk assessment on every tool call and decision step. This continuous governance module checks the agent’s compliance before it executes a financial transaction or high-privilege API call.
  3. Ensure forensic traceability: Implement a “Digital Ledger” approach for all agent actions. Every LLM call, parameter passed and reasoning step must be logged sequentially and immutably to allow for forensic reconstruction and accountability. (A minimal sketch of this approach follows this list.)
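
A minimal sketch of what actions 2 and 3 could look like inside an orchestration layer, assuming a hypothetical tool-call format; the allow-list, risk rule and class names are illustrative, not a specific product’s API:

```python
# Minimal sketch of pre-execution governance plus a hash-chained action log.
# Tool names, the allow-list and the risk rule are illustrative assumptions.
import hashlib
import json
import time

ALLOWED_TOOLS = {"lookup_invoice", "pay_supplier"}

def pre_execution_check(tool_call: dict) -> str:
    """Return 'allow', 'hold_for_human' or 'deny' before the agent acts."""
    if tool_call["tool"] not in ALLOWED_TOOLS:
        return "deny"  # anything outside the allow-list is blocked outright
    if tool_call.get("privilege") == "high":
        return "hold_for_human"  # high-privilege calls need human sign-off
    return "allow"

class ActionLedger:
    """Append-only log in which each entry carries the previous entry's hash,
    so later tampering breaks the chain and is detectable in an audit."""
    def __init__(self):
        self.entries, self._last_hash = [], "genesis"

    def record(self, action: dict) -> None:
        entry = {"ts": time.time(), "action": action, "prev_hash": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

ledger = ActionLedger()
call = {"tool": "pay_supplier", "privilege": "high", "supplier": "ACME"}
ledger.record({"tool_call": call, "decision": pre_execution_check(call)})
```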

Data security: Your ‘private-by-default’ AI strategy

The fear of proprietary data leaking into public models is palpable, and for good reason. Every piece of intellectual property inadvertently fed into a large language model (LLM) becomes a potential competitive disadvantage. This is where a “private-by-default” strategy becomes non-negotiable for your organization, a necessity widely discussed by KPMG in analyses such as The new rules of data governance in the age of generative AI.

This means you need to consider:

  • Embracing private foundation models: For highly sensitive workloads, investing in or leveraging private foundation models hosted within your secure environment is paramount. This gives you ultimate control over the model, its training data and its outputs.
  • Leveraging retrieval augmented generation (RAG) architectures: RAG is a game-changer. Instead of training a model directly on your entire private dataset, RAG systems allow the AI to retrieve relevant information from your secure, internal knowledge bases and then use a public or private LLM to generate a response. This keeps your sensitive data isolated while still providing contextually rich answers.
  • Robust data anonymization and masking: For any data that must interact with external models, implement stringent anonymization and masking techniques. This minimizes the risk of personally identifiable information (PII) or sensitive business data being exposed.

Your goal isn’t just to prevent data leakage; it’s to build a resilient AI ecosystem that protects your most valuable assets from the ground up.
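
As one concrete illustration of the masking bullet above, here is a minimal sketch that redacts obvious identifiers before a prompt leaves your environment for an external model; the regex patterns are deliberately narrow and illustrative, and a production deployment would lean on dedicated DLP or tokenization tooling instead:

```python
# Minimal PII-masking sketch: redact obvious identifiers before a prompt is
# sent to an external model. Patterns here are illustrative and narrow;
# production systems should use dedicated DLP/tokenization tools instead.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer Jane (jane.doe@example.com, SSN 123-45-6789) disputes invoice 442."
print(mask_pii(prompt))
# Customer Jane ([EMAIL], SSN [SSN]) disputes invoice 442.
```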

Explainability & auditability: The imperative for agentic AI

Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.

This necessitates a forensic level of explainability and auditability:

  • Comprehensive decision logging: Every single decision, every parameter change, every data point considered by an AI agent must be meticulously logged. This isn’t just about output; it’s about the entire chain of reasoning.
  • Clear audit trails: These logs must be easily accessible, searchable and structured to form a clear, human-readable audit trail. When an auditor asks how an AI agent processed a loan application, you should be able to trace every step, from input to final decision.
    • Agentic AI example: An agent is tasked with automating supplier payments. A key guardrail must be a transaction limit filter that automatically holds any payment over $100,000 for human approval. The corresponding governance policy requires that the log for that agent detail the entire sequence of API calls, the exact rule that triggered the hold and the human who provided the override, creating a perfect audit trail. (A minimal sketch of this guardrail follows this list.)
  • Transparency in agent design: The design and configuration of your AI agents should be documented and version-controlled. Understanding the rules, logic and external integrations an agent uses is crucial for diagnosing issues and ensuring compliance.
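
Picking up the supplier-payment example above, here is a minimal sketch of the transaction-limit guardrail together with the audit-trail entry it should produce; field names, the approver workflow and the log structure are illustrative assumptions:

```python
# Minimal sketch of the transaction-limit guardrail from the example above:
# payments over $100,000 are held for human approval, and the log records
# the rule that fired and who approved it. Field names are illustrative.
import time

PAYMENT_LIMIT = 100_000
audit_trail = []  # in production this would feed the immutable ledger

def process_payment(payment: dict, human_approver=None) -> str:
    held = payment["amount"] > PAYMENT_LIMIT
    status = "held_for_approval" if held and human_approver is None else "executed"
    audit_trail.append({
        "ts": time.time(),
        "payment": payment,
        "rule_triggered": "transaction_limit_100k" if held else None,
        "human_approver": human_approver,
        "status": status,
    })
    return status

# First attempt is held; a second call with a named approver completes it.
payment = {"supplier": "ACME", "amount": 250_000}
print(process_payment(payment))                        # held_for_approval
print(process_payment(payment, human_approver="cfo"))  # executed
```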

Ethical oversight: Nurturing responsible AI

Beyond security and compliance lies the profound ethical dimension of AI. Addressing this requires a proactive, human-centric approach:

  • Establish an AI review board or center of excellence (CoE): This isn’t a suggestion; it’s a necessity. This multidisciplinary group, comprising representatives from legal, ethics, data science and business units, should be the conscience of your AI strategy, aligning with guidance found in resources like The CIO’s guide to AI governance. Their mandate is to:
    • Proactive bias detection: Scrutinize model training data for potential biases before deployment.
    • Fairness in agent design: Review the logic and rules governing AI agents to ensure they don’t inadvertently discriminate or produce unfair results.
    • Ethical guidelines & policies: Develop and enforce clear ethical guidelines for the use and deployment of all AI within the organization.
    • Ethical AI example: A new genAI model is deployed to screen job candidates. A technical guardrail is implemented as an output toxicity filter to block any language the model suggests that could be interpreted as discriminatory. The governance policy dictates that the AI review board must regularly audit the model’s screening outcomes to ensure the overall hiring rate for protected groups remains statistically unbiased. (A minimal sketch of such an outcome audit follows this list.)
  • Human-in-the-loop mechanisms: For critical AI-driven decisions, ensure there’s always an opportunity for human review and override.
  • Bias mitigation techniques: Invest in techniques like re-weighting training data and Explainable AI (XAI) tools to understand and reduce bias in your models.
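
To illustrate the outcome audit in the hiring example above, here is a minimal sketch that compares selection rates across groups against the commonly cited four-fifths rule of thumb; the threshold, field names and rule are assumptions for illustration, not the review board’s actual standard:

```python
# Minimal outcome-audit sketch: compute per-group selection rates and flag
# any group whose rate falls below 80% of the best-performing group's rate.
# The four-fifths threshold is a common rule of thumb, assumed here.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of {'group': str, 'selected': bool}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

records = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(disparate_impact_flags(records))  # {'A': False, 'B': True}
```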

Responsible AI isn’t a checkbox; it’s a continuous journey of introspection, vigilance and commitment.

The CIO’s leadership imperative

The deployment of genAI and agentic AI isn’t just an IT project; it’s a strategic transformation that touches every facet of the enterprise. As the CIO, you are uniquely positioned to lead this charge, not just as a technologist, but as a strategist, risk manager and ethical guardian.

By prioritizing a “private-by-default” data strategy, enforcing rigorous explainability and auditability for autonomous agents and establishing robust ethical oversight, you can unlock the full potential of AI while building an enterprise that is secure, compliant and profoundly responsible. The future of AI is bright, but only if you build it on a foundation of trust and accountability. Make sure your blueprints reflect that.

This article is published as part of the Foundry Expert Contributor Network.

AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage

By: Tom Eston

In this episode, we discuss the first reported AI-driven cyber espionage campaign, as disclosed by Anthropic. In September 2025, a state-sponsored Chinese actor manipulated the Claude Code tool to target 30 global organizations. We explain how the attack was executed, why it matters, and its implications for cybersecurity. Join the conversation as we examine the […]

The post AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage appeared first on Shared Security Podcast.

The post AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage appeared first on Security Boulevard.


Toss Insight publishes report on the state of MyData, laying out strategic tasks centered on data sovereignty

Toss Insight, the financial management research institute of Toss (operated by Viva Republica), has published a new report, “Understanding MyData and Its Current State.” With data emerging as a core resource of economic activity, the report analyzes, from financial, industrial and policy perspectives, the formation and development of the MyData framework, which shifts data management from a company-centered to an individual-centered model.

The report frames MyData not as a mere financial service but as a new form of governance that puts data sovereignty into practice. It offers a balanced review of the achievements and limitations of Korea’s MyData regime, which took root quickly in the financial sector after the 2020 amendment of the three data laws and is now, as of 2025, expanding into non-financial sectors. It also compares policy models across major countries, identifying three governance types: government-led, private-sector-led, and public-private partnership.

The report’s authors single out three achievements of the Korean MyData regime: strengthening the rights of data subjects, driving data-based industrial innovation, and improving financial inclusion. At the same time, they point to remaining tasks: streamlining consent procedures, making the right to data portability effective, expanding flexibility in the use of pseudonymized data, upgrading data-linkage structures, and ensuring the sustainability of revenue models. Based on this assessment, the report argues that government and the private sector should work together toward a balanced data governance in which data sovereignty and data use coexist.

Toss Insight expects the report, with its multidimensional analysis of the policy achievements and international standing of Korea’s MyData regime, to serve as foundational material for strategy-making in the coming data-economy era.

“MyData is more than a data-portability scheme; it is a new paradigm that strengthens the rights of data subjects and spurs industrial innovation,” said Hong Ki-hoon, director of Toss Insight. “We hope this report contributes to the sustainable development of Korea’s MyData regime and to global discussions on standards.”
dl-ciokorea@foundryco.com

Making A Cyber Crisis Plan! Key Components Not To Be Missed

Cyberattacks are no longer just headlines; their sheer frequency has turned them into a day-to-day reality, and that is scarier. From big-name organizations to small ones still growing, every one of them is being hit one way or another. From supply chain attacks to data breaches, the impact […]

The post Making A Cyber Crisis Plan! Key Components Not To Be Missed appeared first on Kratikal Blogs.

The post Making A Cyber Crisis Plan! Key Components Not To Be Missed appeared first on Security Boulevard.
