
Golden Dome got $23 billion, but lawmakers still don’t know how it will be spent

When the Defense Department received a $23 billion down payment for the Golden Dome initiative through a reconciliation bill, lawmakers demanded a detailed plan for how the Pentagon intends to spend that money.

Six months later, lawmakers are still waiting for the Pentagon to provide “complete budgetary details and justification of the $23,000,000,000 in mandatory funding.” That includes a comprehensive deployment schedule; cost, schedule and performance metrics; and a finalized system architecture.

As a result, Congressional appropriators were unable to conduct oversight of Golden Dome programs for fiscal 2026.

The department’s $175 billion Golden Dome initiative, which President Donald Trump first ordered last January, aims to build a network of satellites — possibly numbering in the hundreds or even thousands — that would detect, track and intercept incoming missiles. Pentagon officials have described the program as a “top priority for the nation.”

The effort has been shrouded in secrecy, and lawmakers’ demand for more detail on how the Pentagon plans to spend the initial tranche of funding is another sign of Congress’s limited visibility into the program’s early spending plans.

“Due to insufficient budgetary information, the House and Senate Defense Appropriations Subcommittees were unable to effectively assess resources available to specific program elements and to conduct oversight of planned programs and projects for fiscal year 2026 Golden Dome efforts in consideration of the final agreement,” appropriators wrote.

Elaine McCusker, a senior fellow at the American Enterprise Institute, said it is neither unusual nor surprising for lawmakers to seek complete budget information for a complex program like Golden Dome, which pulls in multiple ongoing efforts and includes classified components.

“Congress often requests new budget exhibits and supplementary information for evolving, complicated programs with potentially high price tags so they can better understand what is existing and ongoing funding and what is really new or accelerated in the budget request,” McCusker told Federal News Network.

But Greg Williams, director of the Center for Defense Information at the Project on Government Oversight, said Congress’ request for complete budgetary information highlights a broader challenge with how the administration has rolled out major initiatives without providing sufficient detail.

“Golden Dome is an extraordinarily complex and ambitious program, for which we should expect extraordinarily comprehensive information. Instead, the American people and Congress have the opposite. The fiscal 2026 Defense Appropriations Act and its explaining document appear to appropriately reflect that disparity,” Williams told Federal News Network.

The House passed the final 2026 minibus funding package Thursday, which includes money for the Defense Department. If the spending bill becomes law, Defense Secretary Pete Hegseth, along with Gen. Michael Guetlein, the Golden Dome director, will have two months to provide a comprehensive spend plan for the initiative. Lawmakers want to see planned obligations and expenditures by program, descriptions, justification and the corresponding system architecture mission areas for fiscal 2025 through 2027. 

The Pentagon comptroller would also have to submit a separate budget justification volume annually beginning in fiscal 2028.

McCusker said Congress bears some responsibility for the delay — budget uncertainty has complicated the department’s efforts to develop the program.

“The Pentagon is pursuing new ideas in how it partners with industry to rapidly develop, build and deploy the myriad systems that make up Golden Dome while also navigating annual delays and uncertainty in getting its budget,” she said. “Congress has an understandable thirst for information on high profile defense programmatic priorities and may perceive a delay in getting the level of detail it seeks, but failing to pass annual appropriations on time has become so common it is a perpetual factor to mitigate. Congress has to accept responsibility for this and be willing to take some risk in providing funds in advance of all the information it needs.”

President Donald Trump said in May that the Golden Dome’s architecture had been “officially selected,” but details remain scarce and the Pentagon has restricted officials from publicly discussing the initiative.

McCusker said that Congress’ request for detailed planning, performance and budget information doesn’t say much about the program itself other than “its level of complexity and maturity and the need to develop and convey the overall strategy and projected timeframe for its execution.”

There is no single “Golden Dome” line item in the 2026 spending bill, though it includes billions for related programs that will most likely support the broader system.

The Pentagon leadership received its first official briefing on the Golden Dome architecture in September, and an implementation plan was expected to be delivered in November.

Williams said producing a detailed plan of this complexity in a short period of time is understandably difficult, but added that crafting a plan that credibly explains how its goals will be achieved is “likely impossible according to many experts.”

“Golden Dome is a program of unprecedented, arguably reckless, complexity and ambition,” Williams said.

“The lack of information is also a result of Congress’s choice to use reconciliation to increase defense spending: The reconciliation process does not provide for the formal submission of budget request materials from the executive branch and so risks exactly this kind of lack of information. Congress should return to the statutory process for clean Defense authorization and appropriations acts to ensure adequate information,” he added.

If you would like to contact this reporter about recent changes in the federal government, please email anastasia.obis@federalnewsnetwork.com or reach out on Signal at (301) 830-2747.

The post Golden Dome got $23 billion, but lawmakers still don’t know how it will be spent first appeared on Federal News Network.

© The Associated Press

FILE - This Dec. 10, 2018, file photo, provided by the U.S. Missile Defense Agency (MDA), shows the launch of the U.S. military's land-based Aegis missile defense testing system that later intercepted an intermediate range ballistic missile, from the Pacific Missile Range Facility on the island of Kauai in Hawaii. The Trump administration is considering ways to expand U.S. homeland and overseas defenses against a potential missile attack, possibly adding a layer of satellites in space to detect and track hostile targets. (Mark Wright/Missile Defense Agency via AP)

CI/CD Under Attack: What the AWS CodeBuild “CodeBreach” Flaw Reveals About Modern Supply Chain Risk

21 January 2026 at 06:11

A recent disclosure revealed a critical flaw in AWS CodeBuild that could allow attackers to abuse CI/CD pipelines and inject malicious code into trusted software builds by exploiting weaknesses in webhook validation, according to WebProNews. Rather than targeting production systems directly, the issue exposed how attackers can compromise software supply chains by manipulating trusted automation.
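The disclosure points to weaknesses in how webhooks were validated before they could trigger builds. As a general illustration only (this is not AWS CodeBuild's actual code), the standard defense is to verify an HMAC signature over the raw request body before trusting a trigger. The sketch below assumes a GitHub-style `sha256=<hex>` signature header; the function name is hypothetical:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    """Verify a GitHub-style 'sha256=<hex>' HMAC signature over the raw body.

    Returns False for missing or malformed headers instead of raising, so a
    failed check simply rejects the build trigger.
    """
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest("sha256=" + expected, signature_header)
```

A pipeline applying this check would reject any request whose signature fails verification before parsing the payload at all, which is what blocks an attacker from injecting a forged build trigger.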

The post CI/CD Under Attack: What the AWS CodeBuild “CodeBreach” Flaw Reveals About Modern Supply Chain Risk appeared first on Seceon Inc.

The post CI/CD Under Attack: What the AWS CodeBuild “CodeBreach” Flaw Reveals About Modern Supply Chain Risk appeared first on Security Boulevard.

When Data Leaks Don’t Look Like Breaches: The Instagram Exposure Explained

21 January 2026 at 06:04

A recent disclosure revealed that data associated with more than 17.5 million Instagram accounts was exposed through a large-scale data leak, with records reportedly including user IDs, contact details, and account metadata, according to CyberPress. While no direct breach of Instagram’s core infrastructure has been publicly confirmed, the exposed dataset highlights a persistent challenge for…

The post When Data Leaks Don’t Look Like Breaches: The Instagram Exposure Explained appeared first on Seceon Inc.

The post When Data Leaks Don’t Look Like Breaches: The Instagram Exposure Explained appeared first on Security Boulevard.

Cybersecurity in the Age of AIOps: Proactive Defense Strategies for IT Leaders

20 January 2026 at 12:27

There is a rise in cybersecurity threats in today’s rapidly changing digital landscape. Organizations have struggled to safeguard sensitive data and systems from ransomware and breaches. In fact, about 87% of security professionals report that AI-based cyberattacks are plaguing organizations worldwide. Traditional cybersecurity solutions are effective to a degree. However, they tend to be limited…

The post Cybersecurity in the Age of AIOps: Proactive Defense Strategies for IT Leaders appeared first on Security Boulevard.

Jaclyn Kagey Shapes Humanity’s Return to the Moon 

20 January 2026 at 06:00
4 Min Read

Jaclyn Kagey Shapes Humanity’s Return to the Moon 

Two people practice underwater operations in a Moon-like environment. The person on the left is holding a U.S. flag.
Jaclyn Kagey trains in NASA’s Neutral Buoyancy Laboratory, where astronauts and flight controllers rehearse spacewalk procedures in a simulated microgravity environment.
Credits: NASA

For Jaclyn Kagey, helping astronauts put boots on the Moon is part of her daily work. 

As the Artemis III extravehicular activity lead in NASA’s Flight Operations Directorate, Kagey plays a central role in preparing astronauts for humanity’s return to the lunar surface.  

She helps define how astronauts will work on the Moon, from planning detailed spacewalk timelines to guiding real-time operations. Crews will conduct these activities after stepping outside NASA’s human landing system, a commercial lander designed to safely transport astronauts from lunar orbit to the surface and back during Artemis missions. 

A woman poses in a black suit in front of the U.S. flag (left) and the NASA flag.
Official portrait of Jaclyn Kagey.
NASA/Robert Markowitz

As NASA prepares to return humans to the lunar surface for the first time in more than 50 years, Kagey’s work is helping shape how Artemis missions will unfold. Astronauts will explore the Moon’s south polar region, an area never visited by humans, and the Artemis III mission will serve as the proving ground for future lunar exploration.  

Kagey’s career at NASA spans more than 25 years and includes work across some of the agency’s most complex human spaceflight programs. While studying at Embry-Riddle Aeronautical University, she watched space shuttle launches that solidified her goal of working in human spaceflight. That goal became reality through United Space Alliance, where she and her husband began their careers as contractors. 

A woman smiles and poses at a desk in front of several monitors at mission control.
Jaclyn Kagey works in the Mission Control Center during a spacewalk simulation at NASA’s Johnson Space Center in Houston.
NASA/Robert Markowitz

One of Kagey’s career-defining moments came during a high-pressure operation aboard the International Space Station. 

“I’ve planned and executed seven spacewalks, but one that stands out was U.S. EVA 21,” she said. “We had a critical ammonia leak on the station, and from the time the issue was identified, we had just 36 hours to plan, prepare the spacesuits, and execute the repair.” 

The team successfully completed the spacewalk and restored the system. “The agility, dedication, and teamwork shown during that operation were remarkable,” Kagey said. “It demonstrated what this team can accomplish under pressure.” 

Throughout her career, Kagey has learned that adaptability is essential in human spaceflight. 

“You have to be flexible,” she said. “Things rarely go exactly as planned, and your job is to respond in a way that keeps the crew safe and the mission moving forward.” 

She has also learned the importance of balance. “There are times when the mission requires everything you have,” she said. “And there are times when you have to step back. Learning when to do each is critical.” 

A woman, left, wearing a spacesuit poses next to a man at a facility.
Jaclyn Kagey suited up in Axiom Space’s Extravehicular Mobility Unit (AxEMU) spacesuit during a test on the Active Response Gravity Offload System (ARGOS) at Johnson’s Space Vehicle Mockup Facility.
Axiom Space

Kagey’s influence also extends to the future of spacesuit development. Standing on the shorter end of the height spectrum, she once could not complete a full test in the legacy Extravehicular Mobility Unit despite passing the fit check. Although Kagey could don the suit, its proportions were too large for her and made it difficult to move as needed for the test. That experience drove her to advocate for designs that better support a wider range of body types.  

That effort came full circle when she recently completed her first test in Axiom Space’s lunar spacesuit, called the Axiom Extravehicular Mobility Unit (AxEMU), on the Active Response Gravity Offload System at Johnson Space Center. 

“It’s exciting to literally fit into the future of spacewalks!” Kagey said. 

A woman wears a lunar backpack while practicing picking up rocks with a lunar tool at a rock yard.
Jaclyn Kagey conducts lunar surface operations training in the Rock Yard at Johnson Space Center, where teams test tools and procedures for future Artemis missions.
NASA

As momentum builds around Artemis, Kagey remains focused on the responsibility that comes with advancing human space exploration.  

“My mission is to shape this historic endeavor by working closely with scientists and industry partners to define lunar surface activities,” Kagey said. “We are setting the standard for humanity’s return to the Moon.” 

About the Author

Sumer Loggins

Details

Last Updated
Jan 08, 2026

Why your 2026 IT strategy needs an agentic constitution

19 January 2026 at 06:30

For decades, the IT operations manual was a dense, 50-page PDF — a document designed by humans, for humans, and usually destined to gather digital dust until an audit required its retrieval. But as we enter 2026, the traditional standard operating procedure (SOP) is officially on life support. Humans are no longer the primary users of their own manuals.

Our systems are becoming agentic, deploying autonomous agents that don’t just monitor dashboards but actively “think,” plan, and execute changes within our infrastructure. These agents cannot read a PDF, nor can they “interpret the spirit” of a security policy written in legalese. If you want to maintain control in an era of autonomous IT, you must move beyond static guardrails and adopt an Agentic Constitution, which is the enterprise application of Constitutional AI, a term pioneered by Anthropic.

From policy on paper to policy as code 

In the past, IT governance was a reactive “check-the-box” exercise. The modern enterprise must shift toward Policy as Code (PaC).

  • The pre-frontal cortex: An Agentic Constitution is a machine-readable set of foundational principles for your autonomous systems.
  • Operational boundaries: They define what an agent can do and the ethical boundaries it must never cross.
  • Actionable rules: An example of an encoded hard rule is: “Never modify production data during peak hours without a human-in-the-loop token.”
  • Understandable by LLMs: These rules are actionable and understandable by the models powering your orchestration.
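The hard rule quoted above can be made concrete as Policy as Code. The sketch below is purely illustrative: the function name, the peak-hours window, and the set of write actions are assumptions for the example, not part of any real product:

```python
from datetime import time
from typing import Optional

# Hypothetical policy-as-code sketch: "never modify production data during
# peak hours without a human-in-the-loop token", encoded as a check an
# orchestrator runs before executing an agent's proposed action.
PEAK_START, PEAK_END = time(9, 0), time(18, 0)
WRITE_ACTIONS = {"update", "delete", "migrate"}

def action_allowed(target_env: str, action: str, now: time,
                   human_token: Optional[str]) -> bool:
    """Return True if the proposed action complies with the encoded rule."""
    in_peak = PEAK_START <= now <= PEAK_END
    if target_env == "production" and action in WRITE_ACTIONS and in_peak:
        # Peak-hours production writes require explicit human approval.
        return human_token is not None
    return True
```

Because the rule is code rather than prose, an orchestrating LLM never has to "interpret the spirit" of the policy; it simply calls the check and receives an enforceable yes or no.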

This shift represents a fundamental transformation: the role of the IT professional is moving from “Operator” to “Architect of Intent”. IT professionals are no longer the ones turning the wrenches; they are the ones writing the rules of engagement.

The hierarchy of autonomy: A framework for IT ops 

To scale AI capabilities without ceding total control of the “kill switch”, enterprises should adopt a hierarchy of autonomy, a framework credited to the foundational work of Thomas Sheridan & William Verplank (1978).

Tier 1: Full autonomy (the low-hanging fruit) 

  • Description: Tasks where the cost of human intervention exceeds the value of the task.
  • Examples
    • Auto-scaling 
    • Log rotation 
    • Basic ticket routing 
    • Cache clearing 
  • Governance: Defined by threshold-based triggers within a “sandbox of trust”.

Tier 2: Supervised autonomy (the ‘check-back’ zone) 

  • Description: Agents perform heavy lifting — gathering data and identifying fixes — but require a “human nod” before final execution.
  • Examples
    • System patching 
    • User provisioning 
    • Non-critical configuration changes 
  • Governance: Agents must present a “reasoning trace” to the admin explaining why the action is being taken.

Tier 3: Human-only (the red line) 

  • Description: “Existential” actions that no agent should ever perform autonomously.
  • Examples
    • Database deletions 
    • Critical security overrides 
    • Modifications to the Agentic Constitution itself 
  • Governance: Multi-factor authentication (MFA) or multi-person “dual-key” approvals.
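The three tiers above can be sketched as a dispatch table that an orchestrator consults before executing any agent action. This is an illustrative sketch, not a reference implementation; the action names are taken from the examples in the article, and the fail-closed default for unknown actions is an assumption:

```python
from enum import Enum

class Tier(Enum):
    FULL_AUTONOMY = 1  # execute freely within the sandbox of trust
    SUPERVISED = 2     # execute only after a human approves the reasoning trace
    HUMAN_ONLY = 3     # never executed by an agent

# Illustrative mapping of the actions named in the article to tiers.
ACTION_TIERS = {
    "auto_scale": Tier.FULL_AUTONOMY,
    "rotate_logs": Tier.FULL_AUTONOMY,
    "route_ticket": Tier.FULL_AUTONOMY,
    "clear_cache": Tier.FULL_AUTONOMY,
    "patch_system": Tier.SUPERVISED,
    "provision_user": Tier.SUPERVISED,
    "change_config": Tier.SUPERVISED,
    "delete_database": Tier.HUMAN_ONLY,
    "override_security": Tier.HUMAN_ONLY,
    "amend_constitution": Tier.HUMAN_ONLY,
}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Return True if the agent may execute the action right now.

    Unknown actions default to HUMAN_ONLY: the table fails closed, not open.
    """
    tier = ACTION_TIERS.get(action, Tier.HUMAN_ONLY)
    if tier is Tier.FULL_AUTONOMY:
        return True
    if tier is Tier.SUPERVISED:
        return human_approved
    return False  # HUMAN_ONLY: agents never execute these
```

The design choice worth noting is the default: any action the constitution has not explicitly classified is treated as Tier 3, so a new capability cannot quietly grant itself autonomy.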

Reducing the ‘hidden attack surface’ 

Implementing a centralized constitution helps mitigate the risks of shadow AI agents — autonomous tools deployed without central IT oversight.

  • Unified API: Any agent must “authenticate” against the constitution before it can interact with core infrastructure.
  • Compliance history: This creates a centralized audit trail invaluable for compliance frameworks like SOC2 or the EU AI Act.
  • Verifiable decision-making: You are building a verifiable history of autonomous decision-making.
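The centralized audit trail described above can start as something very simple: an append-only log that records every authorization decision before the action runs. A minimal sketch with hypothetical field names follows; no compliance framework mandates this exact shape:

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Append-only record of every agent decision, kept for compliance review."""
    entries: list = field(default_factory=list)

    def record(self, agent_id: str, action: str, allowed: bool, reason: str) -> dict:
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "allowed": allowed,
            "reason": reason,
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        # JSON Lines: one decision per line, easy to hand to an auditor.
        return "\n".join(json.dumps(e) for e in self.entries)
```

Writing the entry before execution, rather than after, is what makes the history verifiable: even an action that crashes mid-flight leaves a record of who proposed it and why it was allowed.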

The human voice in a machine world 

The “Constitution” is a human document representing the collective wisdom of your engineers.

  • Architects of intent: The role of the IT professional shifts from “Operator” to “Architect of Intent”.
  • Cultural shift: IT teams must move away from “hero culture” firefighting toward a culture of systemic governance.

Conclusion: Starting your constitutional convention 

If you rely on human-readable SOPs in the second half of the decade, your IT operations will become a bottleneck for the business.

Steps to take this quarter:

  • Identify red lines: Gather lead architects to define your Tier 3 boundaries.
  • Map automated wins: Identify Tier 1 tasks for immediate automation.
  • Focus on strategy: Ensure humans focus on strategy and innovation, not babysitting a bot.

This article is published as part of the Foundry Expert Contributor Network.

The top 6 project management mistakes — and what to do instead

19 January 2026 at 05:00

Project managers are doing exactly what they were taught to do. They build plans, chase team members for updates, and report status. Despite all the activity, your leadership team is wondering why projects take so long and cost so much.

When projects don’t seem to move fast enough or deliver the ROI you expected, it usually has less to do with effort and more to do with a set of common mistakes your project managers make because of how they were trained, and what that training left out. Most project teams operate like order takers instead of the business-focused leaders you need to deliver your organization’s strategy.

To accelerate strategy delivery in your organization, something has to change. The way projects are led needs to shift, and traditional project management approaches and mindsets won’t get you there.

Here are the most common project management mistakes we see holding teams back, and what you can do to help your project leaders shift from being order takers to drivers of IMPACT: instilling focus, measuring outcomes, performing, adapting, communicating, and transforming.

Mistake #1: Solving project problems instead of business problems

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. They spend their days managing tasks and chasing status updates, but most of them have no idea whether the work they manage is solving a real business problem.

That’s not their fault. They’ve been taught to stay in their lane in formal training and by many executives. Keep the project moving. Don’t ask questions. Focus on delivery.

But no one is talking to them about the purpose of these projects and what success looks like from a business perspective, so how can they help you achieve it?

You don’t need another project checked off the list. You need the business problem solved.

IMPACT driver mindset: Instill focus

Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for?

Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. The entire team begins choosing priorities, tradeoffs, and solutions that are aligned with solving that business problem instead of just checking tasks off the list.

Mistake #2: Tracking progress instead of measuring business value

Your teams are taught to track progress toward delivering outputs. On time, on scope, and on budget are the metrics they hear repeatedly. But those metrics only tell you if deliverables will be created as planned, not if that work will deliver the results the business expects.

Most project managers are taught to measure how busy the team is. Everyone walks around wearing their busy badge of honor as if that proves value. They give updates about what’s done, what’s in progress, and what’s late. But the metrics they use show how busy everyone is at creating outputs, not how they’re tracking toward achieving outcomes.

All of that busyness can look impressive on paper, but it’s not the same as being productive. In fact, busy gets in the way of being productive.

IMPACT driver mindset: Measure outcomes

Now that the team understands what they’re doing and why, the next question to answer is: How will we know we’re successful?

Right from the start of the project, you need to define not just the business goal but how you’ll measure whether it was achieved in business terms. Did the project reduce cost, increase revenue, or improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward.

Think about a project that’s intended to drive revenue but ends up costing you twice as much to deliver. If the revenue target stays the same, the project may no longer make sense. Or they might come up with a way to drive even higher revenue because they understood the way you measure success.

Shift how you measure project success from outputs to outcomes and watch how quickly your projects start creating real business value.

Mistake #3: Perfecting process instead of streamlining it

If your teams spend more time tweaking templates, building frameworks, or debating methodology than actually delivering results, the process itself has become the bottleneck.

Often project managers are hired for their certifications, which leads many of them to believe their value is tied to how much process they create and how perfectly they follow it. They work hard to make sure every box is checked, every template is filled out, and every report is delivered on time. But if the process becomes the goal, they’re missing the point.

You invested in project management to get business results, not build a deliverable machine, and the faster you achieve those results, the higher your return on your project investments.

IMPACT driver mindset: Perform relentlessly

With a clear plan to drive business value, now we need to show them how to accelerate. That means relentlessly evaluating, streamlining, and optimizing the delivery process so it helps the team achieve the project goals faster.

Give them permission to simplify. When the process slows them down or adds work that doesn’t add value, they should be able to call it out.

This isn’t an excuse to have no process or claim you’re being agile just to skip the necessary steps. It’s about right-sizing the process, simplifying where you can, and being thoughtful about what’s truly needed to deliver the outcome. Do you really need a 30-page document no one will read, or would two pages that people actually use be enough? You don’t need perfection. You need progress.

Mistake #4: Blaming people instead of leading them through change

A lot of leaders start from the belief that people are naturally resistant to change. When projects stall or results fall short, it’s easy to assume someone just didn’t want to change. Project teams blame people, then layer on more governance, more process, and more pressure. Most of the time, it’s not a people problem. It’s how the changes are being done to people instead of with them.

People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that.

IMPACT driver mindset: Adapt to thrive

With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process.

Change management is everyone’s job, not something you outsource to HR or a change team. Projects fail without good change management, and everyone needs to be involved. Your teams must understand that people aren’t resistant to change; they’re resistant to having change done to them. You have to teach them how to bring others through the change process instead of pushing change at them.

Teach your project teams how to engage stakeholders early and often so they feel part of the change journey. When people are included, feel heard, and involved in shaping the solution, resistance starts to fade and you create a united force that supports your accelerated delivery plan.

Mistake #5: Communicating for compliance instead of engagement

The reason most project communication fails is because it’s treated like a one-way path. Status reports people don’t understand. Steering committee slides read to a room full of executives who aren’t engaged. Unread emails. The information goes out because it’s required, not because it’s helping people make better decisions or take the right action.

But that kind of communication doesn’t create clarity, build engagement, or drive alignment. And it doesn’t inspire anyone to lean in and help solve the real problems.

IMPACT driver mindset: Communicate with purpose

To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions. Your teams shouldn’t just push information but enable action. That means getting the right people and the right message at the right time, with a clear next step.

If you want your projects to move faster, communication can’t be a formality. When teams, sponsors, and stakeholders know what’s happening and why it matters, they make decisions faster. You don’t need more status reports. You need communication that drives actions and decisions.

Mistake #6: Driving project goals instead of business outcomes

Most organizations still define the project leadership role around task-focused delivery. Get the project done. Hit the date. Stay on budget. Project managers have been trained to believe that finishing the project as planned is the definition of success. But that’s not how you define project success.

If you keep project managers out of the conversations about strategy and business goals, they’ll naturally focus on project outputs instead of business outcomes. This leaves you in the same place you are today. Projects are completed, outputs are delivered, but the business doesn’t always see the impact expected.

IMPACT driver mindset: Transform mindset

When you help your teams instill focus, measure outcomes, perform relentlessly, adapt to thrive, and communicate with purpose, you do more than improve project delivery. You build the foundation for a different kind of leadership.

Shift how you and your organization see the project leadership role. Your project managers are no longer just running projects. You’re developing strategy navigators who partner with you to guide how strategy gets delivered, and help you see around corners, connect initiatives, and decide where to invest next.

When project managers are trusted to think this way and given visibility into the strategy, they learn how the business really works. They stop chasing project success and start driving business success.


IT portfolio management: Optimizing IT assets for business value

16 January 2026 at 05:01

In finance, portfolio management involves the strategic selection of a collection of investments that align with an investor’s financial goals and risk tolerance. 

This approach can also apply to IT’s portfolio of systems, with one addition: IT must also assess each asset in that portfolio for operational performance.

Today’s IT is a mix of legacy, cloud-based, and emerging or leading-edge systems, such as AI. Each category contains mission-critical assets, but not every system performs equally well when it comes to delivering business, financial, and risk avoidance value to the enterprise. How can CIOs optimize their IT portfolio performance?

Here are five evaluative criteria for maximizing the value of your IT portfolio.

Mission-critical assets

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are.

For example, it might be that your ERP solution is a 24/7 “must have” system because it interfaces with a global supply chain that operates around the clock and drives most company business. On the other hand, an HR application or a marketing analytics system could probably be down for a day, with staff relying on workarounds.

More granularly, the same type of analysis needs to be performed on IT servers, networks and storage. Which resources do you absolutely have to have, and which can you do without, if only temporarily?

As IT identifies these mission-critical assets, it should also review the list with end-users and management to assure mutual agreement.
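The triage above can be made concrete with a simple downtime-tolerance model. The sketch below is a hypothetical illustration, not a prescribed method: the asset names, tiers, and tolerances are all assumptions, with each tolerance presumed to have been agreed with end-users and management.

```python
from dataclasses import dataclass

# Hypothetical criticality tiers, keyed by maximum tolerable downtime in hours.
TIERS = [
    (1, "mission-critical"),       # must be restored within an hour
    (24, "important"),             # a day of downtime survivable with workarounds
    (float("inf"), "deferrable"),  # everything else
]

@dataclass
class Asset:
    name: str
    max_downtime_hours: float  # tolerance agreed with end-users and management

def tier(asset: Asset) -> str:
    """Return the first tier whose downtime ceiling covers the asset."""
    for ceiling, label in TIERS:
        if asset.max_downtime_hours <= ceiling:
            return label
    return "deferrable"

# The ERP from the example above runs 24/7; the HR and marketing systems do not.
for a in [Asset("ERP (global supply chain)", 0.5),
          Asset("HR application", 24),
          Asset("Marketing analytics", 72)]:
    print(f"{a.name}: {tier(a)}")
```

Reviewing the resulting tier assignments with the business, rather than letting IT set them alone, is what keeps the list credible.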

Asset utilization

Zylo, which manages SaaS inventory, licenses, and renewals, estimates that “53% of SaaS licenses go unused or underused on average, so finding dormant software should be a priority.” This “shelfware” problem isn’t only with SaaS; it can be found in underutilized legacy and modern systems, in obsolete servers and disk drives, and in network technologies that aren’t being used but are still being paid for.

Shelfware in all forms exists because IT is too busy with projects to stop for inventory and obsolescence checks. Consequently, old stuff gets set on the shelf and auto-renews.

The shelfware issue must be solved if IT portfolios are to deliver maximum performance and profitability. If IT can’t spare the time for a shelfware evaluation, it can bring in a consultant to assess asset use and flag never-used or seldom-used assets for repurposing or elimination.
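To see why such an audit pays for itself, consider a rough estimate of dormant-license spend. Everything in this sketch is hypothetical (product names, seat counts, prices, and dates are invented); it simply flags licenses unused for more than 90 days and totals their annual cost.

```python
from datetime import date, timedelta

# Hypothetical inventory: (product, seats, annual cost per seat, last-used date).
inventory = [
    ("CRM suite", 500, 600.0, date(2025, 12, 20)),
    ("Design tool", 200, 350.0, date(2024, 11, 2)),
    ("Legacy BI tool", 120, 900.0, date(2023, 6, 15)),
]

DORMANT_AFTER = timedelta(days=90)
today = date(2026, 1, 16)

def dormant(last_used: date) -> bool:
    """A license counts as shelfware once it has gone unused beyond the threshold."""
    return today - last_used > DORMANT_AFTER

# Annual spend tied up in never-used or seldom-used licenses.
wasted = sum(seats * cost for _, seats, cost, last in inventory if dormant(last))
print(f"Estimated annual spend on dormant licenses: ${wasted:,.0f}")
```

Here the design and BI tools are flagged, totaling $178,000 a year in shelfware, while the recently used CRM suite is not.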

Asset risk

The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource.

Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future?

For IT assets that are found to be at risk, strategies should be enacted either to remediate the risk or to replace them.

Asset IP value

There is a CIO I know in the hospitality industry who boasts that his hotel reservation program, and the mainframe it runs on, have not gone down in 30 years. He attributes much of this success to custom code and a specialized operating system that the company uses, and he and his management view it as a strategic advantage over the competition.

He is not the only CIO who feels this way. There are many companies that operate with their “own IT special sauce” that makes their businesses better. This special sauce could be a legacy system or an AI algorithm. Assets like these that become IT intellectual property (IP) present a case for preservation in the IT portfolio.

Asset TCO and ROI

Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI).

TCO gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.

ROI is used when new technology is acquired. Metrics are set that define the point at which the initial investment will be recouped. Once the breakeven point is reached, ROI continues to be measured, because the company wants to see new profitability and/or savings materialize from the investment. Unfortunately, not all technology investments go as planned. Sometimes the business case that originally called for the technology changes, or unforeseen complications turn the investment into a loss.

In both cases, whether the issue is TCO or ROI, the IT portfolio must be maintained so that money-losing or wasted assets are removed.
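A breakeven check of the kind described can be sketched in a few lines. The figures are hypothetical, including a small monthly erosion of savings to model a business case drifting from its original assumptions.

```python
# Hypothetical case: a $240,000 investment expected to save $8,000/month,
# with savings eroding 2% per month as conditions drift from the plan.
investment = 240_000.0
monthly_saving = 8_000.0
retention = 0.98  # share of the monthly saving retained month over month

cumulative = 0.0
month = 0
while cumulative < investment and month < 120:  # cap the horizon at 10 years
    month += 1
    cumulative += monthly_saving * retention ** (month - 1)

if cumulative >= investment:
    print(f"Breakeven reached in month {month}")
else:
    print("No breakeven within 10 years: candidate for removal from the portfolio")
```

With these numbers the investment breaks even in month 46 rather than the naive month 30 (240,000 / 8,000), which is exactly the kind of drift that ongoing ROI measurement is meant to surface.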

Summing it up

IT portfolio management is an important part of what CIOs should be doing on an ongoing basis, but all too often it is approached reactively: a system is replaced only when users demand it, or a server is removed from the data center only after it fails.

The CEO, the CFO, and other key stakeholders whom the CIO deals with during technology budgeting time don’t help, either. While they will be interested in how long it will take for a new technology acquisition to “pay for itself,” no one ever asks the CIO about the big picture of IT portfolio management: how the overall assets in the IT portfolio are performing, and which assets will require replacement for the portfolio to sustain or improve company value.

To improve their own IT management, CIOs should seize the portfolio management opportunity. They can do this by establishing a portfolio for their company’s IT assets and reviewing these assets periodically with those in the enterprise who have direct say over IT budgets.

IT portfolio management will resonate with the CFO and CEO because both continually work with financial and risk portfolios for the business. Broader visibility of the IT portfolio will also make it easier for CIOs to present new technology recommendations and to obtain approvals for replacing or upgrading existing assets when these actions are called for.


Microsoft to close internal libraries, shifting to AI-based learning and cutting paid subscriptions

16 January 2026 at 02:57

Tech outlet The Verge reported on Jan. 15 that Microsoft has decided to close its internal libraries. The company has reportedly also ended the paid information subscriptions, including paywalled news services, that had previously been provided to employees.

According to an internal Microsoft notice obtained by The Verge, the company explained that the subscriptions are not being renewed as part of a shift “toward a more modern, AI-based learning experience on an internal platform called the Skilling Hub.” The notice added that “the library was closed as we move to a more modern, connected learning experience centered on the Skilling Hub,” and that “we know this change affects the many people who have valued this space.”

According to a Jan. 15 report by GeekWire, Microsoft confirmed in response to an inquiry that its internal libraries in Redmond (US), Hyderabad (India), Beijing (China) and Dublin (Ireland) closed as of this week, and that the spaces are being converted into collaborative areas for group learning and experimentation where employees can explore new technologies.

The discontinued subscriptions include tech outlet The Information and Strategic News Service, a publication specializing in technology and economic analysis that is said to have provided global reports to Microsoft employees for roughly 20 years. Microsoft noted, however, that it is not ending all subscriptions and information services, telling GeekWire that it “provides access to more than 20 digital resources and subscriptions, prioritizing the resources most valuable to employees.”

Microsoft’s internal library has a long history, operating since the company’s earliest days and expanding steadily over time. According to 2018 material from LibConf, a library and librarian community, Microsoft hired its first in-house librarian in 1983, when the collection numbered about 50 books. It later grew so large that the building’s load-bearing capacity reportedly had to be taken into account, an anecdote Microsoft developer Raymond Chen shared on his blog in 2020.

Steven Sinofsky, an early Microsoft engineer who went on to lead the Windows division, recalled on X that in the company’s early years the internal library bought every PC-related book available and delivered copies of articles that employees needed.

“Sovereignty regardless of location”: IBM unveils Sovereign Core as its new answer

16 January 2026 at 02:46

IBM has launched Sovereign Core, a software stack designed to give enterprises and governments operational control over sovereign cloud deployments without depending on where a cloud provider’s data centers are located. The aim is to help CIOs respond to intensifying regulatory scrutiny, automate compliance, and move sensitive AI workloads into production under strict data-residency requirements.

A sovereign cloud generally focuses on retaining control over data and IT operations while still capturing the efficiencies of the cloud. It is usually built in a specific region, both to comply with local rules such as data-residency regulations and to guarantee full national or organizational control over data, operations and security. Ideally, it means IT infrastructure running in an isolated cloud environment.

Unlike the sovereign clouds of Microsoft or Google, which are designed around dedicated data centers, IBM’s approach is to build sovereignty into whatever software and applications an enterprise or government wants to deploy. With Sovereign Core, due for a technical preview in February, IBM says customers can run workloads on their own hardware as well as on regional cloud providers or other cloud environments.

“This is less a traditional sovereign cloud than a software stack that lets each organization build its own cloud,” said Dion Hinchcliffe, head of the CIO practice at Futurum Group. He noted that Sovereign Core can be used across a range of operating environments: on-premises data centers, regionally supported cloud infrastructure, and environments run through IT service providers.

Eliminating vendor lock-in

Analysts said this approach could redefine how sovereign clouds are managed and help organizations avoid vendor lock-in.

Hinchcliffe noted that in existing sovereign cloud environments, the cloud provider often retains control over core operational elements such as updates and access rights. This not only raises regulatory risk but can also lock customers into a particular provider’s architecture, APIs and compliance tooling.

Moreover, when workloads are moved to another environment, the incumbent provider’s identity systems, encryption keys and audit trails may not transfer cleanly. Hinchcliffe pointed out that this leaves CIOs with the burden of rebuilding their governance frameworks to satisfy regulators in the new environment.

IBM’s Sovereign Core, by contrast, can give CIOs more control by keeping encryption keys, identity management and operational authority within each organization’s jurisdiction. With that structure, CIOs can switch cloud providers without rebuilding governance from scratch.

Stephanie Walter, head of the AI stack practice at HyperFRAME Research, observed that regulator-led audits are becoming both more frequent and more demanding. EU regulators in particular no longer consider compliance commitments sufficient; they want evidence of actual compliance, audit records and continuous compliance reporting.

Hinchcliffe said Sovereign Core can address these demands through automated evidence collection and continuous monitoring, which could also reduce the operational burden in banking, government and defense-related sectors.

Supporting real-world deployment of sovereign AI pilots

Analysts also see Sovereign Core helping enterprises move AI pilot programs into production, particularly for AI projects that demand strict data-residency and compliance controls.

Phil Fersht, CEO of HFS Research, observed that most enterprises and public organizations remain uneasy about feeding their own data into general-purpose AI models, while running GPU-based inference entirely inside their own sovereign boundary is also constrained in practice.

Sovereign Core’s capabilities, by contrast, let enterprise and government organizations run AI inference in their own environments. Not only the data being processed but the AI models themselves can meet sovereignty requirements, giving CIOs a foundation for moving AI from pilot to production while preserving sovereignty, Fersht explained.

A shifting market landscape

Sovereign Core can be read as IBM’s bid to push into the sovereign cloud market with future AI regulation in mind, and to get ahead of major cloud providers such as Microsoft, AWS and Google.

“With Europe tightening regulation and the Asia-Pacific region following suit, IBM sees sovereignty becoming a decisive factor in whether enterprises adopt AI. For some companies it could matter far more than cost or performance,” Hinchcliffe said.

The EU in particular strictly regulates foreign companies’ access to data and control over critical IT systems, given that most major cloud providers are headquartered in the US.

To meet EU regulations, cloud providers typically partner with regional integrators or managed service providers. Even then, according to Hinchcliffe, the cloud provider usually retains operational control of the underlying platform, with the partner building and running services on top of it.

IBM’s Sovereign Core is structured so that a partner can operate the entire environment on the customer’s behalf, with IBM playing no part in operations. Hinchcliffe said this approach offers greater credibility from a compliance standpoint.

IBM said it plans to expand its partnerships with IT service providers worldwide, starting with Computacenter in Germany and the wider European region. The company plans to add further capabilities and make Sovereign Core generally available in mid-2026.

Despite delay, Space Force still plans futures command to guide force design

15 January 2026 at 18:47

The nation’s newest military service still has a lot of work to do to chart its future. The Space Force had been planning to use a new “Futures Command” to handle that work, and it was supposed to be up and running by last year. That didn’t happen as scheduled, but the idea’s not dead either.

Leaders say they’re still planning a new organization to help shape the service’s future, but they first needed to make sure it aligned with the new administration’s priorities.

The Space Force first unveiled its plans for a new Futures Command almost two years ago. The idea at the time was to combine the existing Space Warfighting Analysis Center and the Concepts and Technologies Center with a new Wargaming Center. Those plans were put on pause late in 2024 when it became apparent new political leadership was on the way.

But Gen. Chance Saltzman, the chief of space operations, said Air Force Secretary Troy Meink is on board with the overall idea.

“Secretary Meink 100% understands what we were trying to accomplish with Futures Command and the importance of it,” he said during the annual Spacepower conference in Orlando, Florida, last month. “How are we looking at the future? How are we categorizing and characterizing the threats we’re going to face, the missions we’re going to be asked to do, and how are we going to respond so that we can put the force in place to meet those challenges? We will look at concepts, we will do the war gaming, we will do the simulations, we will do all the manpower assessment, we will do the military construction surveys to figure out what facilities are needed, and then document that so that everybody can see what we’re progressing towards. It is this idea of establishing a command that’s focused on what is it we’re going to need in the future and making sure all the planning is done, synchronized with the resources so we get that right.”

And while the Space Force certainly isn’t the first military service in recent years to contemplate a new command as part of big organizational changes, it is the first time in modern history that a service is having to do that from scratch.

“In December of 2019, the law said, ‘There is a Space Force,’ and nothing could have been further from the truth,” Saltzman said. “It legally made there be a Space Force, but it was still in work. It was a thought process, it was pulling things together as rapidly as possible. So I think the hardest thing is overcoming this mentality that there’s been a Space Force for decades, that we’ve got all this figured out. These are hard things to do on a government scale with government oversight and government resources. And so convincing people that we had to start from scratch on almost every process we had, on every decision we make, that was unprecedented. Convincing people that we don’t really have anything to fall back on. If I don’t deliver a service dress [uniform], then we’re using an Air Force service dress — there wasn’t something else. We had plenty of uniform changes when we were growing up, but there was always an Air Force uniform before those changes that we were in until we transitioned. Not the case for the Space Force. We had to start from scratch. We’re not just enhancing the Space Force, we’re actually creating one. And that’s been a real challenge.”

New leadership education initiatives

The Space Force traces most of its roots to the Air Force, and until now, it’s leaned heavily on its sister service within the Department of the Air Force for combat support and other functions. But it’s increasingly working to build infrastructure, doctrine and culture of its own.

As one example, Saltzman said just last month, the Space Force launched its own Captains Leadership Course. That initiative is a partnership with Texas A&M University and led by the Space Force’s Space Training and Readiness Command.

“The bottom line is each service brings something unique in terms of what it focuses on for professional military education. I remember General [Jay] Raymond, when he stood up the service, talked about some of the things that services have to do. You have to have your own budget, you have to have your own doctrine, you have to develop your own people. And that’s kind of stuck with me,” he said. “We have to develop our Guardians for the specifics of the Space Force. And this basic understanding at the captain’s level is going to be foundational to what follows in the rest of their career. And so while we need to find ways to give them experience with other services, I wanted to make sure that the service had a core offering at that grade to educate our officers on the Space Force. Now we’re going to include joint doctrine, will include communications and leadership. But they need that foundational understanding of the service first before they start to branch out and figure out how they integrate with the other services.”

First Space Force OTS graduates

And in 2025, the service graduated its first group of newly-minted officers from officer training school. Those first 80 officers, Saltzman said, represent a mentality within the service that seeks to build “multidisciplinary” leaders. The enlisted force, he says, will be tactical experts, while officers will need expertise in “joint integration.”

“Do we need deep expertise? Absolutely. Do we need people that broadly understand how to integrate with a joint force? Absolutely. How do you do both? This is the tough part of the job, you have to get that balance just right,” he said. “If you go down to kind of the micro management side of this and ask how you develop a single Guardian to best perform, then you get caught in that conundrum. I have to think about what I need the entire service to be able to do. Do I need deep experts? Yes. Do I need broad integrators? Yes. So we have to find a way to, across the entire service, create opportunities to maximize what people can do, what they do best, and fill the jobs that are required based on those skills and those competencies. You have to make sure you think about it from an enterprise perspective, and what might apply to any one Guardian doesn’t necessarily have to apply to all Guardians.”

The post Despite delay, Space Force still plans futures command to guide force design first appeared on Federal News Network.

FILE - In this photo released by the U.S. Air Force, Capt. Ryan Vickers stands for a photo to display his new service tapes after taking his oath of office to transfer from the U.S. Air Force to the U.S. Space Force at Al-Udeid Air Base, Qatar, Sept. 1, 2020. (Staff Sgt. Kayla White/U.S. Air Force via AP, File)

The workforce shift — why CIOs and people leaders must partner harder than ever

15 January 2026 at 07:20

AI won’t replace people. But leaders who ignore workforce redesign will begin to fail, and they will be replaced by leaders who adapt quickly.

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them.

AI breaks that assumption.

Not because it is magical or sentient (it isn’t), but because it behaves in ways that feel human. It writes, reasons, summarizes, analyzes and decides at speeds that humans simply cannot match. That creates a very different emotional and organizational response from anything that has come before it.

I was recently at a breakfast session with HR leaders where the topic was simple enough on paper: AI and how to implement it in organizations. In reality, the conversation quickly moved away from tools and vendors and landed squarely on people — fear, confusion, opportunity, resistance and fatigue. That is where the real challenge sits.

AI feels human and that changes everything

AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings.

We are all chasing competitive advantages — revenue growth, margin improvement, greater resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function.

Earlier this year I made what I thought was an obvious statement on a panel: “AI is not your colleague. AI is not your friend. It is just technology.” After the session, someone told me — completely seriously — that AI was their colleague. It was listed on their Teams org chart. It was an agent with tasks allocated to it.

That blurring of boundaries should make leaders pause.

Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk.

When I spoke to younger employees outside that HR audience, the picture was even more stark. They understood what AI was. They were already using it. But many believed it would reduce the number of jobs available to their generation. Nearly half saw AI as a net negative force. None saw it as purely positive.

That sentiment matters. Because engagement is not driven by strategy decks — it is driven by how people feel about their future.

Roles, skills and org design are already out of date

One of the biggest problems organizations face is that work is changing faster than their structures can keep up.

As Zoe Johnson, HR director at 1st Central, put it: “The biggest mismatch is in how fast the technology is evolving and how possible it is to redesign systems, processes and people impacts to keep pace with how fast work is changing. We are seeing fast progress in our customer-facing areas, where efficiencies can clearly be made.”

Job frameworks, skills models and career paths are struggling to keep up with reality. This mirrors what we are now seeing publicly, with the BBC reporting that many large organizations expect HR and IT responsibilities to converge as AI reshapes how work actually flows through the enterprise.

AI does not neatly replace a role — it reshapes tasks across multiple roles simultaneously. That shift is already forcing leadership teams to rethink whether work should be organized by function at all or instead designed end‑to‑end around outcomes. That makes traditional workforce planning dangerously slow.

Organizations are also hitting change saturation. We have spent years telling ourselves that “the only constant is change,” but AI feels relentless. It lands on top of digital transformation, cloud, cyber, regulation and cost pressure.

Johnson is clear-eyed about this tension: “This is a constant battle, to keep on top of technology development but also ensure performance is consistent and doesn’t dip. I’m not sure anyone has all the answers, but focusing change resource on where the biggest impact can be made has been a key focus area for us.”

That focus is critical. Because indiscriminate AI adoption does not create advantages — it creates noise.

This is no longer an IT problem

For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.

AI does not compensate. It amplifies. This is the same dynamic highlighted recently in the Wall Street Journal, where CIOs describe AI agents accelerating both productivity and structural weakness when layered onto poorly designed processes.

Put AI on top of a poor process and you get faster failure. Put it on top of bad data and you scale mistakes at speed. This is not something a CIO can “fix” alone — and it never really was.

The value chain — how people, process, systems and data interact to create outcomes — is the invisible thread most organizations barely understand. AI pulls on that thread hard.

That is why the relationship between CIOs and people leaders has moved from important to existential.

Johnson describes what effective partnership actually looks like in practice: “Constant communication and connection is key. We have an AI governance forum and an AI working group where we regularly discuss how AI interventions are being developed in the business.”

That shared ownership matters. Not governance theatre, but real, ongoing collaboration where trade-offs are explicit and consequences understood.

Culture plays a decisive role here. As Johnson notes, “Culture and trust is at the heart of keeping colleagues engaged during technological change. Open and honest communication is key and finding more interesting and value-adding work for colleagues.”

AI changes what work is. People leaders are the ones who understand how that lands emotionally.

The CEO view: Speed, restraint and cultural expectations

From the CEO seat, AI is both opportunity and risk. Hayley Roberts, CEO of Distology, is pragmatic about how leadership teams get this wrong.

“All new tech developments should be seen as an opportunity,” she said. “Leadership is misaligned when the needs of each department are not congruent with the business’s overall strategy. With AI it has to be bought in by the whole organization, with clear understanding of the benefits and ethical use.”

Some teams want to move fast. Others hesitate — because of regulation, fear or lack of confidence. Knowing when to accelerate and when to hold back is a leadership skill.

“We love new tech at Distology,” Roberts explains, “but that doesn’t mean it is all going to have a business benefit. We use AI in different teams but it is not yet a business strategy. It will become part of our roadmap, but we are using what makes sense — not what we think we should be using.”

That restraint is often missing. AI is not a race to deploy tools — it is a race to build sustainable advantage.

Roberts is also clear that organizations must reset cultural expectations: “Businesses are still very much people, not machines. Comprehensive internal assessment helps allay fear of job losses and assists in retaining positive culture.”

There is no finished AI product. Just constant evolution. And that places a new burden on leadership coherence.

“I trust what we are doing with our AI awareness and strategy,” Roberts says. “There is no silver bullet. Making rash decisions would be catastrophic. I am excited about what AI might do for us as a growing business over time.”

Accountability doesn’t disappear — it concentrates

One uncomfortable truth sits underneath all of this: AI does not remove accountability. It concentrates it. Recent coverage in The HR Director on AI‑driven restructuring, role redesign and burnout reinforces that outcomes are shaped less by the technology itself and more by the leadership choices made around design, data and pace of change.

When decisions are automated or augmented, the responsibility still sits with humans — across the entire C-suite. You cannot outsource judgement to an algorithm and then blame IT when it goes wrong.

This is why workforce redesign is not optional. Skills, org design and leadership behaviors must evolve together. CIOs bring the technical understanding. CPOs and HRDs bring insight into capability, culture and trust. CEOs set the tone and pace.

Ignore that partnership and AI will magnify every weakness you already have.

Get it right and it becomes a powerful force for growth, resilience and better work.

The workforce shift is already underway. The question is whether leaders are redesigning for it — or reacting too late.

This article is published as part of the Foundry Expert Contributor Network.

6 maxims for today’s digital leader playbook

15 January 2026 at 05:00

Modern CIOs and tech leaders carry responsibility not only for an organization’s technology but, as key partners, for its entire business success. Having access to readily transferable lessons is therefore critical to solving real business challenges and leading with clarity, confidence, and purpose.

As a jumping-off point, I’ve distilled here some of my favourite maxims from different business functions.

Maxim 2: Try to be human

You’re more interesting than you think. Try to be human. I realize this is a tough ask for us classic IT introvert types, but with many interactions now conducted remotely, it’s even more important to find opportunities to meet in person.

Letting people know what makes you tick personally is of more interest than you could probably imagine. Colleagues are interested in you as a whole person, not simply as the person they work with. So don’t be afraid to bring yourself to work, as the phrase goes. This allows others to do the same, and to talk about their own feelings and circumstances.

As an INTP (an introverted, intuitive, thinking, and perceiving type from the Myers-Briggs personality assessment), social events aren’t my natural environment. And we’ve probably all experienced how work and socializing sometimes don’t mix. Is an orchestrated corporate event all that comfortable for anyone? But try to show up and meet people, relax a bit, and have some fun.

Maxim 6: Beware the IT cultural cringe

IT people often prefer to vent about the technology-ignorant business rather than stand up and explain the tech. Instead of declaring something’s bad for the company or a dead-end, they shrug and say the business just doesn’t get it.

No matter how great your strategy is, your plans will fail without a company culture that encourages people to implement it. I know from speaking to other CIOs that a frequent role for them is standing up for IT and defending their teams in a culture where the business blames IT for its failures.

It’s therefore vital to coach your teams to deal on equal terms with their internal business customers. Key to this is talking in business terms, not IT jargon. The reason for not adopting a nonstandard piece of tech is it’ll inflate future company running costs, not that it doesn’t neatly fit the IT estate. So stand up and be counted on a matter of tech principle, and win the debate.

Maxim 8: There are no IT projects, only business projects

When IT projects fail, it’s often because of a lack of ownership by the business.

The entire purpose of your IT department is to move the organization forward. So any investment must deliver on quantifiable financial targets or defined business objectives. If it doesn’t, move on. This is fundamental. Forgetting to do so is easy when under pressure, as others press you with their own agendas, but dangerous for you and the business.

Everything I’ve learned and seen reinforces this. Without this focus, you’re just an IT supplier taking orders, not the executive IT partner of the business. Question any actions by your team that can’t be linked back to the company’s core objectives.

It all comes down to building trust-based relationships with business colleagues who recognize that you understand what the business needs and can afford. So challenge any project that isn’t owned by a business leader.

Maxim 10: The CIO as the personification of IT

Be vocal about your team’s successes and be honest about your mistakes. As CIO, you’re the face of the IT function in your organization, and you set the tone for everyone in IT.

Try not to talk about the business and IT as separate entities. You and your team are just as integral to the company as sales, operations, or finance. Always talk about “our” business needs and what “we” should do.

Remember, you’re accountable for all the IT. These days, we talk about being authentic, so being honest about your slip-ups, and how you feel about them, is important in establishing your reputation, both internally and externally.

Explain a success to others in the organization and why it worked. Bring out how collaboration between their teams and IT, working to aligned plans and objectives, made good things happen for everyone involved.

Maxim 36: Join up digital and IT

Digital natives need to work together with old techies. The advances of the last decade have been delivered by fast-moving digital startups, financed by deep-pocketed investors. Unsurprisingly, this has spawned organizational impatience with the costs and timescales of traditional or legacy IT functions. This frustration can then translate into setting up a completely separate digital department under a CDO, charged with building the new, faster-moving side of the business.

Your current business is built on long-established ways of working and on processes that remain necessary, unless you’re going to rebuild them all for the new digital channel. If not, then new components, including services and products, will have to interface with existing systems as well as with firmly established, mission-critical business processes. Given this dynamic, ensure that both traditional IT and new digital report to you.

Maxim 56: AI is a tech-driven business revolution

AI is the most overhyped bandwagon in technology, more than bitcoin, big data, and augmented and virtual reality. Nevertheless, it’s the most far-reaching tech-driven change since the advent of the internet. In a matter of months, AI and AI agents are doing to white-collar jobs what production line robots did to blue-collar jobs 20 years ago.

AI is transforming the world and we’re just at the beginning of this revolution. So what are you doing about it?

Your challenge as CIO is that AI has cut through to your board and executive leadership like nothing before. Furthermore, all your partners and suppliers are building AI agents into their software and services. Plus, all your best digital innovators in the business, and definitely all your recent grad hires, are using ChatGPT and bespoke AI tools in their day jobs. As CIO, you hold the keys to AI working well by effectively wielding the data in your systems. After all, you and your team are the ones who best understand how the AI works as the means to achieve business value.

Madrid launches a center to monitor the region’s critical infrastructure

13 January 2026 at 12:19

The Government of the Community of Madrid today inaugurated the Critical Infrastructure Control Center (CCIC), created to monitor the region’s technology systems centrally, from a single location. The initiative, the organization says, will “minimize the impact of any incident and provide an immediate response.”

According to Miguel López-Valverde, the Community of Madrid’s regional minister for Digitalization, “the center will operate 24 hours a day, 365 days a year, and will be the digital heart of the region, from which we will reinforce the capacity for Madrid residents to interact with the Administration with full guarantees.” The regional government notes that this infrastructure, “innovative and pioneering in Spain in its size, competencies, and scope,” will make it possible to manage and monitor in real time all essential platforms and applications, ensuring their proper performance and protecting the data they handle.

A million-euro investment and more than 20 technical professionals

The new center launches with an investment of close to one million euros and is staffed by a head of operations and more than twenty technical professionals, including critical-infrastructure engineers, analysts, consultants, incident-management specialists, and experts in cybersecurity and artificial intelligence.

The new resource, the regional government explains, “can instantly analyze how IT systems are functioning, predict potential cyberattacks, and react quickly and in a coordinated way, within seconds, to any incident. It is also equipped with backup power environments, generator sets, and battery supplies, as well as connectivity with multiple operators to overcome outages and ensure its own continuity.”

The CCIC will also house a branch of the regional Cybersecurity Operations Center, reinforcing the protection of the regional administration’s critical systems.

More than 2,300 IT systems in the Community of Madrid alone

The regional Department of Digitalization is responsible for providing digital services to all citizens and companies in the region, managing the Information and Communication Technology (ICT) resources of 4,000 administrative sites, supporting nearly 200,000 public employees, and handling all technology matters for the regional ministries. Madrid Digital also carries out up to 22,000 improvements to applications and other tools each year, plus more than 12,000 technical changes. All of this, the regional executive notes, rests on more than 2,300 IT systems housed in technology infrastructure “whose maintenance and monitoring is essential.”

Drone Hacking: Build Your Own Hacking Drone, Part 2

13 January 2026 at 10:12

Welcome back, aspiring cyberwarriors!

We are really glad to see you back for the second part of this series. In the first article, we explored some of the cheapest and most accessible ways to build your own hacking drone. We looked at practical deployment problems, discussed how difficult stable control can be, and even built small helper scripts to make your life easier. That was your first step into this subject where drones become independent cyber platforms instead of just flying gadgets. 

We came to the conclusion that the best way to manage our drone would be via 4G. Currently, in 2026, Russia is adopting a new strategy, switching to 4G to control drones. An example of this is the Shahed family of drones. These drones are generally built as long-range, loitering attack platforms that use pre-programmed navigation systems, and initially they relied only on satellite guidance to reach their targets rather than on a constant 4G data link. However, in some reported variants, cellular connectivity was used to support telemetry and control-related functionality.

Russian Shahed drone with MANPADS mounted atop and equipped with a 4G module
MANPADS mounted on Shahed

In recent years, Russia has been observed modifying these drones to carry different types of payloads and weapons, including missiles and MANPADS (Man-Portable Air-Defense System) mounted onto the airframe. The same principle applies here as with other drones. Once you are no longer restricted to a short-range Wi-Fi control link and move to longer-range communication options, your main limitation becomes power. In other words, the energy source ultimately defines how long the aircraft can stay in the air.

Today, we will go further. In this part, we are going to remove the smartphone from the back of the drone to reduce weight. The free space will instead be used for chipsets and antennas.

4G > UART > Drone

In the previous part, you may have asked yourself why an attacker would try to remotely connect to a drone through its obvious control interfaces, such as Wi-Fi. Why not simply connect directly to the flight controller and bypass the standard communication layers altogether? In the world of consumer-ready drones, you will quickly meet the same obstacle over and over again. These drones usually run closed proprietary control protocols. Before you can talk to them directly, you first need to reverse engineer how everything works, which is neither simple nor fast.

However, there is another world of open-source drone-control platforms. These include projects such as Betaflight, iNav, and Ardupilot. The simplest of these, Betaflight, supports direct transmission of motor-control commands over UART. If you have ever worked with microcontrollers, UART will feel familiar. The beauty here is that once a drone listens over UART, it can be controlled by almost any small Linux single-board computer. All you need to do is connect a 4G module and configure a VPN, and suddenly you have a controllable airborne hacking robot that is reachable from anywhere with mobile coverage. Working with open systems really is a pleasure because nothing is truly hidden.

So, what does the hacker need? The first requirement is a tiny and lightweight single-board computer, paired with a compact 4G modem. A very convenient combination is the NanoPi Neo Air together with the Sim7600G module. Both are extremely small and almost the same size, which makes mounting easier.

Single-board computer and 4G modem for remote communication with a drone
Single-board computer and 4G modem for remote communication with a drone

The NanoPi communicates with the 4G modem over UART. It actually has three UART interfaces. One UART can be used exclusively for Internet connectivity, and another one can be used for controlling the drone flight controller. The pin layout looks complicated at first, but once you understand which UART maps to which pins, the wiring becomes straightforward.

Pinout of contacts on the NanoPi mini-computer for drone control and 4G communication
Pinout of contacts on the NanoPi mini-computer for drone control and 4G communication

After some careful soldering, the finished 4G control module will look like this:

Ready-made 4G control module
Ready-made 4G control module

Even very simple flight controllers usually support at least two UART ports. One of these is normally already connected to the drone’s traditional radio receiver, while the second one remains available. This second UART can be connected to the NanoPi. The wiring process is exactly the same as adding a normal RC receiver.

Connecting NanoPi to the flight controller
Connecting NanoPi to the flight controller

The advantage of this approach is flexibility. You can seamlessly switch between control modes through software settings rather than physically rewiring connectors. You attach the NanoPi and Sim7600G, connect the cable, configure the protocol, and the drone now supports 4G-based remote control.

Connecting NanoPi to the flight controller
Connecting NanoPi to the flight controller

Depending on your drone’s layout, the board can be mounted under the frame, inside the body, or even inside 3D-printed brackets. Once the hardware is complete, it is time to move into software. The NanoPi is convenient because, when powered, it exposes a USB-based console. You do not even need a monitor. Just run a terminal such as:

nanoPi >  minicom -D /dev/ttyACM0 -b 9600

Then disable services that you do not need:

nanoPi >  systemctl disable wpa_supplicant.service

nanoPi >  systemctl disable NetworkManager.service

Enable the correct UART interfaces with:

nanoPi >  armbian-config

From the System menu you go to Hardware and enable UART1 and UART2, then reboot.

Next, install your toolkit:

nanoPi >  apt install minicom openvpn python3-pip vlc

Minicom is useful for quickly checking UART traffic. For example, check modem communication like this:

minicom -D /dev/ttyS1 -b 115200
AT

If all is well, you then need two config files for the modem. The first one goes to /etc/ppp/peers/telecom. Replace “telecom” with the name of the cellular provider you are going to use to establish the 4G connection.

setting up the internet connection with a telecom config

And the second one goes to /etc/chatscripts/gprs

gprs config for the drone
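The two captions above refer to config files that were shown as images in the original post, so their exact contents are not available here. As a purely illustrative sketch of what a minimal pair typically looks like (the serial port /dev/ttyS1, the baud rate, and the APN "internet" are placeholders you must adapt to your wiring and carrier):

```
# /etc/ppp/peers/telecom -- illustrative example, adjust port, speed, and APN
/dev/ttyS1
115200
connect "/usr/sbin/chat -v -f /etc/chatscripts/gprs"
noipdefault
defaultroute
usepeerdns
persist
noauth

# /etc/chatscripts/gprs -- illustrative example, "internet" is a placeholder APN
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "ERROR"
"" AT
OK AT+CGDCONT=1,"IP","internet"
OK ATD*99#
CONNECT ""
```

The peers file tells pppd which UART and chat script to use; the chat script sets the APN with AT+CGDCONT and dials *99# to bring up the data session.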

To activate 4G connectivity, you can run:

nanoPi >  pon telecom

Once you confirm connectivity using ping, you should enable automatic startup using the interfaces file. Open /etc/network/interfaces and add these lines:

auto telecom
iface telecom inet ppp
provider telecom

Now comes the logical connectivity layer. To ensure you can always reach the drone securely, connect it to a central VPN server:

nanoPi > cp your_vds.ovpn /etc/openvpn/client/vds.conf

nanoPi > systemctl enable openvpn-client@vds

This allows your drone to “phone home” every time it powers on.

Next, you must control the drone motors. Flight controllers speak many logical control languages, but with UART the easiest option is the MSP protocol. We install a Python library for working with it:

nanoPi >  cd /opt/; git clone https://github.com/alduxvm/pyMultiWii

nanoPi >  pip3 install pyserial

The protocol is quite simple, and the library itself only requires knowing the port number. The NanoPi is connected to the drone’s flight controller via UART2, which corresponds to the ttyS2 port. Once you have the port, you can start sending values for the main channels: roll, propeller RPM/throttle, and so on, as well as auxiliary channels:

control.py script on github

Find the script on our GitHub and place it in ~/src/ as control.py.

You send MSP commands containing throttle, pitch, roll, yaw, and auxiliary values. An important detail is that the flight controller expects constant updates. Even if the drone is idle on the ground, neutral values must continue to be transmitted; if they stop, the controller assumes the link is lost. The flight controller must also be told that MSP data is coming in through UART2. In Betaflight Configurator, you assign UART2 to MSP mode.
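To make the framing concrete, here is a small standalone sketch of how an MSP v1 MSP_SET_RAW_RC frame is put together (header $M<, size byte, command byte, payload, XOR checksum). This is not the control.py from the article's GitHub; the port mentioned in the comments and the 1000-2000 µs channel range are the usual Betaflight defaults, assumed for illustration.

```python
# Minimal sketch of hand-building an MSP v1 MSP_SET_RAW_RC frame.
# Assumes standard MSP framing ($M< header, XOR checksum) and the
# usual 1000-2000 us channel range; channel order is illustrative.
import struct

MSP_SET_RAW_RC = 200  # MSP command: set the 8 RC channels directly


def msp_frame(command: int, payload: bytes) -> bytes:
    """Wrap a payload in an MSP v1 request: $M< + size + cmd + payload + crc."""
    size = len(payload)
    crc = size ^ command
    for b in payload:
        crc ^= b  # checksum is the XOR of size, command, and payload bytes
    return b"$M<" + bytes([size, command]) + payload + bytes([crc])


def set_raw_rc(channels) -> bytes:
    """Encode 8 RC channel values (1000-2000 us) as little-endian uint16s."""
    assert len(channels) == 8
    payload = struct.pack("<8H", *channels)
    return msp_frame(MSP_SET_RAW_RC, payload)


# Neutral sticks: all channels centered at 1500 us. In a real loop you
# would write this frame to the UART (e.g. /dev/ttyS2) at a steady rate,
# because the flight controller treats silence as link loss.
frame = set_raw_rc([1500] * 8)
print(frame.hex())
```

In practice the pyMultiWii library does this framing for you; the point here is only that the wire format is simple enough to debug with minicom if commands are not getting through.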

Betaflight drone configuration

We are switching the active UART for the receiver (the NanoPi is connected to UART2 on the flight controller, while the stock receiver is connected to UART1). Next we go to Connection and select MSP as the control protocol.

Betaflight drone configuration

If configured properly, you now have a drone that you can control over unlimited distance as long as mobile coverage exists and your battery holds out. For video streaming, connect a DVP camera to the NanoPi and stream using VLC like this:

cvlc v4l2:///dev/video0:chroma=h264:width=800:height= \
--sout '#transcode{vcodec=h264,acodec=mp3,samplerate=44100}:std{access=http,mux=ffmpeg{mux=flv},dst=0.0.0.0:8080}' -vvv

The live feed becomes available at:

http://drone:8080/

Here “drone” is the VPN IP address of the NanoPi.

To make piloting practical, you still need a control interface. One method is to use a real transmitter such as EdgeTX acting as a HID device. Another approach is to create a small JavaScript web app that reads keyboard or touchscreen input and sends commands via WebSockets. If you prefer Ardupilot, there are even ready-made control stacks.
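Whichever front end you pick, it ultimately just has to turn operator input into the eight channel values the MSP loop transmits. A hedged sketch of that mapping (the key bindings, step values, and channel order are assumptions, not part of any particular control stack):

```python
# Hypothetical sketch: map currently pressed keys to the 8 RC channel
# values (1000-2000 us) that an MSP-based control loop would send.
# Bindings and channel order (roll, pitch, throttle, yaw) are assumed.

NEUTRAL, LOW, HIGH = 1500, 1000, 2000


def keys_to_channels(pressed: set) -> list:
    """Translate a set of pressed key names into 8 RC channel values."""
    roll = HIGH if "d" in pressed else LOW if "a" in pressed else NEUTRAL
    pitch = HIGH if "w" in pressed else LOW if "s" in pressed else NEUTRAL
    yaw = HIGH if "e" in pressed else LOW if "q" in pressed else NEUTRAL
    thr = HIGH if "up" in pressed else LOW if "down" in pressed else NEUTRAL
    aux = [NEUTRAL] * 4  # e.g. arm switch, flight mode selectors
    return [roll, pitch, thr, yaw] + aux


# Pitch forward while raising throttle; everything else stays neutral.
print(keys_to_channels({"w", "up"}))
```

A WebSocket handler or HID reader would call a function like this on every input event and feed the result into the periodic MSP sender, keeping the constant-update requirement satisfied.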

By now, your drone is more than a toy. It is a remotely accessible cyber platform operating anywhere there is mobile coverage.

Protection Against Jammers

Previously we discussed how buildings and range limitations affect RF-based drone control. With mobile-controlled drones, cellular towers actually become allies instead of obstacles. However, drones can face anti-drone jammers. Most jammers block the 2.4 GHz band, because many consumer drones use this range. Higher-end jammers also attack the 800-900 MHz and 2.4 GHz bands used by RC systems such as TBS, ELRS, and FrSky. The most common method, though, is GPS jamming and spoofing. Spoofing lets an attacker broadcast fake satellite signals so the drone believes false coordinates. Since drone communication links are normally encrypted, GPS becomes the weak point. That means a cautious attacker may prefer to disable GPS completely. Luckily, on many open systems such as Betaflight drones or FPV cinewhoops, GPS is optional. Indoor drones usually do not use GPS anyway.

As for mobile-controlled drones, jamming becomes significantly more difficult. To cut the drone off completely, the defender must jam all relevant 4G, 3G, and 2G bands across multiple frequencies. If 4G is jammed, the modem falls back to 3G. If 3G goes down, it falls back to 2G. This layering makes mobile-controlled drones surprisingly resilient. Of course, extremely powerful directional RF weapons exist that wipe out all local radio communication when aimed precisely. But these tools are expensive and require high accuracy.

Summary

We transformed the drone into a fully independent device capable of long-range remote operation via mobile networks. We replaced the smartphone with a NanoPi Neo Air and a Sim7600G 4G modem, routed UART communication directly into the flight controller, and configured MSP-based command delivery. We also explored VPN connectivity, video streaming, and modern control interfaces ranging from RC transmitters to browser-based tools. Open-source flight controllers give us incredible flexibility.

In Part 3, we will build the attacking part and carry out our first wireless attack.

If you like the work we’re doing here and want to take your skills even further, we also offer a full SDR for Hackers Career Path. It’s a structured training program designed to guide you from the fundamentals of Software-Defined Radio all the way to advanced, real-world applications in cybersecurity and signals intelligence. 

How analytics capability has quietly reshaped IT operations

13 January 2026 at 07:15

As CIOs have entered 2026 anticipating change and opportunity, it is worth looking back at how 2025 reshaped IT operations in ways few anticipated.

In 2025, IT operations crossed a threshold that many organizations did not fully recognize at the time. While attention remained fixed on AI, automation platforms and next-generation tooling, the more consequential shift occurred elsewhere. IT operations became decisively shaped by analytics capability, not as a technology layer, but as an organizational system that governs how insight is created, trusted and embedded into operational decisions at scale.

This distinction matters. Across 2025, a clear pattern emerged. Organizations that approached analytics largely as a set of tools often found it difficult to translate operational intelligence into material performance gains. Those that focused more explicitly on analytics capability, spanning governance, decision rights, skills, operating models and leadership support, tended to achieve stronger operational outcomes. The year did not belong to the most automated IT functions. It belonged to the most analytically capable ones.

The end of tool-centric IT operations

One of the clearest lessons of 2025 was the diminishing return of tool-centric IT operations strategies. Most large organizations now possess advanced monitoring and observability platforms, AI-driven alerting and automation capabilities. Yet despite this maturity, CIOs continued to report familiar challenges such as alert fatigue and poor prioritization, along with difficulty turning operational data into decisions and actions.

The issue was not a lack of data or intelligence. It was the absence of an organizational capability to turn operational insight into coordinated action. In many IT functions, analytics outputs existed in dashboards and models but were not embedded in decision forums or escalation pathways. Intelligence was generated faster than the organization could absorb it.

2025 made one thing clear. Analytics capability, not tooling, has become the primary constraint on IT operations performance.

A shift from monitoring to decision-enablement

Up until recently, the focus of IT operations analytics was on visibility. Success was defined by how comprehensively systems could be monitored and how quickly anomalies could be detected. In 2025, leading organizations moved beyond visibility toward decision-enablement.

This shift was subtle but profound. High-performing IT operations teams did not ask, “What does the data show?” They asked, “What decisions should this data change?” Analytics capability matured where insight was explicitly linked to operational choices such as incident triage, capacity investment decisions, vendor escalation, technical debt prioritization and resilience trade-offs.

Crucially, this required clarity on decision ownership. Analytics that is not anchored to named decision-makers and decision rights rarely drives action. In 2025, the strongest IT operations functions formalized who decides what, at what threshold and with what analytical evidence. This governance layer, not AI sophistication, proved decisive.

AI amplified weaknesses as much as strengths

AI adoption accelerated across IT operations in 2025, particularly in areas such as predictive incident management, root cause analysis and automated remediation. But AI did not uniformly improve outcomes. Instead, it amplified existing capability strengths and weaknesses.

Where analytics capability was mature, AI enhanced the speed, scale and consistency of operational decisions and actions. Where it was weak, AI generated noise, confusion and misplaced confidence. Many CIOs observed that AI-driven insights were either ignored or over-trusted, with little middle ground. Both outcomes reflected capability gaps, not model limitations.

The lesson from 2025 is that AI does not replace analytics capability in IT operations. It exposes it. Organizations lacking strong decision governance, data ownership and analytical literacy found themselves overwhelmed by AI-enabled systems they could not effectively operationalize.

Operational analytics became a leadership issue

Another defining shift in 2025 was the elevation of IT operations analytics from a technical concern to a leadership concern. In high-performing organizations, senior IT leaders became actively involved in shaping how operational insight was used, not just how it was produced.

This involvement was not about reviewing dashboards. It was about setting expectations for evidence-based operations, reinforcing analytical discipline in incident reviews and insisting that investment decisions be grounded in operational data rather than anecdote. Where leadership treated analytics as the basis for operational decisions, IT operations matured rapidly.

Conversely, where analytics remained delegated entirely to technical teams, its influence plateaued. 2025 demonstrated that analytics capability in IT operations is inseparable from leadership behavior.

From reactive optimization to systemic learning

Perhaps the most underappreciated development of 2025 was the shift from reactive optimization to systemic learning in IT operations. Traditional operational analytics often focused on fixing the last incident or improving the next response. Leading organizations used analytics to identify structural patterns such as recurring failures, architectural bottlenecks, process debt and skill constraints.

This required looking beyond individual incidents to learn from issues over time and build organizational memory. These capabilities cannot be automated. IT operations teams that invested in them moved from firefighting to foresight, using analytics not only to respond faster, but to design failures out of the IT operating environment.

In 2025, resilience became less about redundancy and more about learning velocity.

The new role of the CIO in IT operations analytics

By the end of 2025, the CIO’s role in IT operations analytics had subtly but decisively changed. AI forced a shift from sponsorship to stewardship. The CIO was no longer simply the sponsor of tools or platforms. Increasingly, they became the architect of the organizational conditions that allow analytics to shape operations meaningfully.

This included clarifying decision hierarchies, aligning incentives with analytical outcomes, investing in analytical skills across operations teams and protecting time for reflection and improvement. CIOs who embraced this role saw analytics scale naturally across IT operations. Those who did not often saw impressive pilots fail to translate into everyday practice.

The defining lesson of 2025

Looking back, 2025 was not the year IT operations became intelligent. It was the year intelligence became operationally consequential, where analytics capability determined whether insight changed behavior or remained aspirational.

The organizations that quietly advanced their IT operations this year did so by strengthening the organizational systems that govern how insight becomes action. Operational intelligence only creates value when organizations are capable of deciding what takes precedence, when to intervene operationally and where to commit resources for the future.

What to expect in 2026: When analytics capability becomes non-optional

While 2025 marked the consolidation of analytics capability in IT operations, 2026 will likely be the year analytics capability becomes non-optional across IT operations. As AI and automation continue to advance, the gap between analytically capable IT operations teams and those where analytics capability is lacking will widen, not because of technology, but because of how effectively organizations convert intelligence into action.

Decision latency emerges as a core operational risk

By 2026, decision speed will replace operational visibility as the dominant constraint on IT operations. As analytics and AI generate richer, more frequent insights, organizations without clear decision rights, escalation thresholds and evidence standards will struggle to respond coherently. In many cases, delays and conflicting interventions will cause more disruption than technology failures themselves. Leading IT operations teams will begin treating decision latency as a measurable operational risk.

AI exposes capability gaps rather than closing them

AI adoption will continue to accelerate across IT operations in 2026, but its impact will remain uneven. Where analytics capability is strong, AI will enhance decision speed and organizational learning. Where it is weak, AI will amplify confusion or analysis paralysis. The differentiator will not be model sophistication, but the organization’s ability to govern decisions, knowing when to trust automated insight, when to challenge it and who is accountable for outcomes.

Analytics becomes a leadership discipline

In 2026, analytics in IT operations will become even more of a leadership expectation than a technical activity. CIOs and senior IT leaders will be judged less on the tools they sponsor and more on how consistently operational decisions are grounded in evidence. Incident reviews, investment prioritization and resilience planning will increasingly be evaluated by the quality of analytical reasoning applied, not just the results achieved.

Operational insight shapes system design

Leading IT operations teams will move analytics upstream in 2026, from improving response and recovery to shaping architecture and design. Longitudinal operational data will increasingly inform platform choices, sourcing decisions and resilience trade-offs across cost, risk and availability. This marks a shift from reactive optimization to evidence-led system design, where analytics capability influences how IT environments are built, not just how they are run.

The future of IT operations will not be shaped by smarter systems alone, but by organizations that can consistently turn intelligence into decisions and actions. Without analytics capability, this remains ad hoc, inconsistent and ultimately ineffective.

This article is published as part of the Foundry Expert Contributor Network.

2026: The year AI ROI gets real

13 January 2026 at 05:01

AI initiatives by and large have fallen short of expectations.

That’s the conclusion of most research to date, including MIT’s The GenAI Divide: State of AI in Business 2025, which found a staggering 95% failure rate for enterprise generative AI projects, defined as not having shown measurable financial returns within six months.

Moreover, tolerance for poor returns is running out, as CEOs, boards, and investors are making it clear they want to see demonstrable ROI on AI initiatives.

According to Kyndryl’s 2025 Readiness Report, 61% of the 3,700 senior business leaders and decision-makers surveyed feel more pressure to prove ROI on their AI investments now versus a year ago.

And the Vision 2026 CEO and Investor Outlook Survey, from global CEO advisory firm Teneo, noted a similar trend: “as efforts shift from hype to execution, businesses are under pressure to show ROI from rising AI spend,” with 53% of investors expecting positive ROI in six months or less.

“There is pressure on CEOs and CIOs to deliver returns, and that pressure is going to continue, and with that pressure is the question, ‘How will you use AI to make the company better?’” says Neil Dhar, global managing partner at IBM Consulting.

Laying the foundation for success

Matt Marze, CIO of New York Life Group Benefit Solutions, is confident he can deliver AI ROI in 2026 because he’s been getting positive returns all along. The key? Pursuing and prioritizing AI deployments based on the anticipated value each will produce.

“We started our AI journey with a call to action in December 2023 by the CEO, and from the start we wanted to be a technology, data, and AI company to drive unparalleled experiences for our customers, partners, and employees. So all along the value question, the ROI was very top of mind,” Marze explains.

Marze and his executive colleagues approach AI investments “the same way we think about all our investments” — that is, considering how they’d impact the company’s earnings plan. “We look at operating expense reduction, margin improvement, top-line revenue growth, customer satisfaction, and client retention, but at the end of the day it boils down to our earnings contribution,” he says.

Marze highlights practices that keep the organization focused on ROI, such as prioritizing AI initiatives for areas that are AI-ready in terms of available data, systems, and skills; using returns from those to fund subsequent initiatives; and designing AI systems in ways that allow for reusability so that subsequent projects can get off the ground more efficiently.

“We’re doing all that very strategically,” Marze says, explaining that this approach enables the organization to select AI projects where there are realistic expectations for ROI rather than merely hopes for vague improvements.

“We want to be nimble and move with urgency, but we also want to do things the right way. And because we fund our investments out of our P&L, we think about spending. We have that P&L mindset. We don’t like to waste money,” he adds.

Marze also credits the company’s ongoing commitment to modernization as helping ensure AI projects can deliver returns. “We built a foundation, and that put us in a good position to capitalize on AI,” he says. “There is a readiness component to leveraging AI effectively and to driving AI ROI. You have to have strategic data management, modernized computing, modernized apps, and cloud-native solutions to take advantage of AI.”

Marze expects those same disciplines and approaches to continue enabling him to pick AI initiatives that deliver measurable value for the organization as his company looks to reimagine work using AI and to bring full agentic solutions into its core processes.

The payback on the various proposals varies, he notes, and the anticipated timeline for some can be a few years out, but he’s confident the positive returns will be there.

Moving from elusive to realized ROI

Others are not as confident that their AI projects will deliver ROI — or at least ROI as quickly as some would like. Some 84% of CEOs predict that positive returns from new AI initiatives will take longer than six months to achieve, according to the Teneo report.

Their perspective may be colored by the past few years, when ROI has been elusive for many reasons, say researchers, analysts, and IT execs.

Many early AI initiatives were experiments and learning opportunities with little or no relevance to the business, says Bret Greenstein, CAIO at West Monroe. They often didn’t address the organization’s needs or goals and atrophied as a result. And even when the AI projects did address real pain points or business opportunities, they often failed to deliver value because the data or technology needed to scale wasn’t there or cost more to modernize than the anticipated ROI. And while some delivered modest gains or improved experience, they were either difficult to quantify or small enough to not move the needle.

“If you go back to the early days of the web and mobile, the same thing happened, before people learned there are new metrics that mattered. It just takes time to figure those out,” Greenstein says.

Now, three years after the arrival of ChatGPT and generative AI, the enterprise has matured its understanding of AI’s potential.

“We’re clearly in the third wave where more clients understand the transformational value of AI and that it’s about new ways of working,” Greenstein says. “Those who are getting ROIs are the ones who see it as a transformation and work with the business to rethink what they’re doing and to get people to work differently. They know transformation work is required to see an ROI.”

To ensure AI projects deliver ROI, Palo Alto Networks CIO Meerah Rajavel selects initiatives that deliver velocity (“Speed is the name of the game,” she says), efficiency (“Can I do more with less?”), and improved experience. “This forces us to reimagine experiences and processes, and it absolutely changes the game,” she says.

Rajavel assesses each AI initiative’s success on the outcomes it produces in those categories, noting that her company has adopted that focus all along and continues to use it to determine which AI investments to make.

As a case in point, she cites a current project that uses AI to automate 90% of IT operations — a project that is already delivering gains in velocity, efficiency, and experience. Rajavel says automated IT operations jumped from 12% when the project started in early 2024 to 75% as of late 2025 — an improvement that has halved the costs of IT operations.
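The arithmetic behind that claim can be sketched in a few lines. The automation rates (12% and 75%) come from the article; the per-task cost figures below are purely illustrative assumptions, not Palo Alto Networks data, and the actual savings depend on how much cheaper an automated task really is:

```python
# Back-of-the-envelope model: operations cost as a mix of manual and
# automated task handling. Cost figures are hypothetical assumptions.
def ops_cost(total_tasks: int, automated_share: float,
             manual_cost: float, automated_cost: float) -> float:
    """Estimate operations cost when a given share of tasks is automated."""
    automated = total_tasks * automated_share
    manual = total_tasks - automated
    return manual * manual_cost + automated * automated_cost

# Automation shares from the article; cost-per-task values are illustrative.
before = ops_cost(10_000, 0.12, manual_cost=50.0, automated_cost=5.0)
after = ops_cost(10_000, 0.75, manual_cost=50.0, automated_cost=5.0)
savings_pct = (before - after) / before * 100
```

Under these assumptions the cost drop exceeds the roughly 50% reduction Rajavel reports; tuning the per-task costs changes the magnitude, but the direction of the effect is the same.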

Metrics and targets

Many organizations haven’t taken a strategic approach when deciding where to implement AI, which helps explain why AI ROI has been so elusive, says IBM’s Dhar. “Some sprayed and prayed rather than systematically asking, ‘How will the technology make my company better?’” he adds.

But top management teams are increasingly looking at AI “as a way to transform — and to transform their businesses dramatically,” he says. “They’re reinventing all their functions, and they’re transforming functions to make them better, stronger, and cheaper, and in some cases they’re also getting top-line growth. Two years ago, there was a lot of experimentation, proofs of concept; now it is transformation, with the most sophisticated management teams looking for returns within 12 months.”

Linh Lam, CIO of Jamf, had been deploying AI to solve pain points but is now using AI “to rethink how we do things.” She sees those as the opportunities to generate the biggest gains.

“I feel like we’re going to see more and more of that, where the technology forces us to rethink how we’re doing things, and that’s where the real value is,” she says.

That’s certainly the case in terms of the AI initiatives Jamf now prioritizes.

“Two years ago, there was more tolerance to say, ‘Let’s try it.’ Now we’ve moved well beyond that, so if someone is bringing something in and they have no semblance of the potential value except it’s going to make life better, we’re going to push back on that. We’re looking at the goals stakeholders have and setting metrics to measure outcomes,” she says. “I feel like the realm of possibility with what you can do with AI and AI agents almost feels limitless. But you’re still running a business, and you want to make decisions in a logical, smart way. So we have to make sure we’re bringing the right value.”

Turning IT challenges into a virtuous cycle for AI transformation

There are challenges, of course, to getting positive returns on AI initiatives — even when they’re carefully selected for their potential, says Jennifer Fernandes, lead of the AI and technology transformation unit at Tata Consultancy Services in North America.

According to Fernandes, many organizations are stymied by legacy technology, process debt, and data debt that keeps them from being able to scale AI projects and see measurable value.

And they won’t be able to scale their AI ambitions and see impactful returns until they pay off that debt, she adds.

Cisco’s AI Readiness Index found that only 32% of organizations rate their IT infrastructure as fully AI-ready, only 34% say the same of their data preparedness, and just 23% consider their governance processes primed for AI.

Fernandes advises CIOs to tackle that debt strategically and use AI to pay it down. Moreover, using AI to modernize IT will bring efficiencies to IT operations while also building IT’s capacity to support more AI use cases and addressing deficits in the organization’s data layer, she says.

The increased efficiency produces returns that can be reinvested in other AI projects, which will be more likely to produce ROI due to the modernization that resulted from the earlier AI project, Fernandes explains.

Moreover, this self-funding model not only helps build the modern tech stack and data program needed to power AI in IT and other business units but also focuses attention on ROI from the start, helping ensure CIOs and their business peers pursue AI initiatives that generate positive returns.

“You’re generating enough savings to pay down your debt, and you’re building incrementally, you’re transforming as you go,” Fernandes says. “And with this [approach], CIOs don’t have to go and say, ‘Give me money to fix these things.’ Instead they can say, ‘I have this model, and if we bring AI in here, we can generate returns, and we can then reinvest to drive these other transformations.’ Now the CIO can say, ‘I am generating the funding for AI for you.’”
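The self-funding cycle Fernandes describes can be sketched as a simple reinvestment loop. All figures here (seed budget, return multiple, reinvestment rate) are hypothetical assumptions chosen to illustrate the mechanism, not TCS guidance:

```python
# Illustrative sketch of a self-funding AI investment cycle: each project's
# savings partially fund the next round. All parameters are assumptions.
def self_funding_cycle(seed_budget: float, roi_multiple: float,
                       reinvest_rate: float, rounds: int) -> list[float]:
    """Return the budget available in each round when savings are reinvested."""
    budgets = [seed_budget]
    for _ in range(rounds - 1):
        savings = budgets[-1] * roi_multiple    # value the project returns
        budgets.append(savings * reinvest_rate)  # portion ploughed back in
    return budgets

# Hypothetical: $1M seed, 2x return per project, 60% of savings reinvested.
budgets = self_funding_cycle(seed_budget=1_000_000, roi_multiple=2.0,
                             reinvest_rate=0.6, rounds=4)
```

The cycle compounds only when the return multiple times the reinvestment rate exceeds 1; below that threshold each round shrinks, which is one way to frame why projects that can’t clear their modernization costs never become self-funding.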
