Yesterday — 16 December 2025

An AI-powered workforce should start with AI-powered PCs

16 December 2025 at 23:07

Microsoft’s move to end Windows 10 support on 14 October 2025 has created an inflection point for CIOs. Three million PCs in active use in Australia can’t upgrade to Windows 11, according to Microsoft, which leaves many organisations facing security risk and a widening gap in frontline productivity.

Most corporate AI adoption so far has been cloud-led, but concerns about privacy, rising AI service costs and the movement of sensitive client data offshore are prompting a rethink. Local inferencing on modern hardware now offers a practical way to accelerate AI use without increasing exposure.

For enterprise IT leaders, the problem is not simply running out-of-support devices and the security exposure that entails. The deeper issue is falling behind in the shift to AI-assisted work. If outdated hardware limits access to new capabilities in Windows 11, productivity losses accumulate quietly but consistently across frontline teams.

Why legacy approaches fall short

Relying solely on cloud AI services appears convenient, but it comes with constraints for regulated sectors. Sensitive information still travels outside the organisation, model performance depends on connectivity, and costs can climb quickly with usage.

A like-for-like hardware refresh—swapping out PCs for new minimum-spec models that can run Windows 11—also offers little uplift. It won’t enable AI acceleration, intelligent automation, or the performance needed for larger applications such as local large language models (LLMs).

As organisations become more dependent on real-time summarisation, content creation, translation and threat detection, traditional processors will struggle to keep pace. These limitations make a hardware-based step change more compelling. Organisations are now looking for devices designed for AI workloads to avoid incremental fixes that will push cost and risk into later years.

How AI PCs address the problem

Modern AI PCs pair Windows 11 with AI-specific processors, including neural processing units (NPUs) capable of more than 40 trillion operations per second. They support local generation, summarisation and automation tasks without sending data to offshore services, reducing compliance exposure and cutting response times.
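As a rough illustration of what a 40 TOPS rating can mean for local inference, here is a back-of-envelope sketch. The model size, ops-per-token figure, and utilisation factor are assumptions for illustration, not vendor numbers, and real decoding is often memory-bandwidth bound, so treat the result as a theoretical ceiling only:

```python
# Back-of-envelope: upper bound on local-LLM token throughput for a 40 TOPS NPU.
# Assumptions (hypothetical): a ~3B-parameter model needs roughly 2 ops per
# parameter per generated token, and the NPU sustains only a fraction of its
# peak rating on real workloads.

def max_tokens_per_second(peak_tops: float, params_billion: float,
                          utilisation: float = 0.3) -> float:
    """Theoretical ceiling on tokens/second for autoregressive decoding."""
    ops_per_token = 2 * params_billion * 1e9          # ~2 ops per weight per token
    sustained_ops = peak_tops * 1e12 * utilisation    # de-rate the peak figure
    return sustained_ops / ops_per_token

print(round(max_tokens_per_second(40, 3), 1))  # → 2000.0
```

Even with aggressive de-rating, the arithmetic suggests why such chips comfortably cover interactive summarisation and drafting workloads on-device.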

On-device AI helps streamline everyday work. Tasks such as meeting follow-ups, document clean-ups and content drafting can happen in the background, freeing staff to focus on higher-value decisions. These small increments add up when multiplied across thousands of employees.

AI PCs also support accessibility capabilities, such as real-time translation, on-device captioning and voice-driven interfaces, helping more employees engage effectively with digital tools.

Analyst firm Canalys forecasts that 60 per cent of PCs shipped in 2027 will be AI-capable, more than triple the volume shipped in 2024. This shift reflects a broader recognition that frontline productivity depends on pairing AI software with suitable hardware.

Lenovo’s approach to AI-driven computing

Lenovo’s Aura Edition Copilot+ AI PC portfolio, powered by Intel Core Ultra processors, is designed around this need for secure, local AI acceleration.

Silke Barlow, Lenovo’s Australian country manager, says the aim is to make AI immediately useful. “If a device can quietly draft documents, take notes and tidy data while the user focuses on making decisions that really take advantage of their skill set, that’s real productivity,” Barlow explains.

These processors separate AI tasks from traditional system operations, improving responsiveness even when multiple applications are active. For creative and analytical teams, the hardware also supports faster rendering, predictive modelling and advanced editing tools.

Beyond the device itself, Lenovo positions AI PCs as part of a broader digital workplace strategy. Its Smart Care uses predictive analytics to anticipate hardware issues before they become outages, reducing support overhead and improving uptime. Lenovo Smart Modes adjusts system behaviour automatically as employees move between tasks, optimising performance without manual tuning.

For organisations scaling AI workflows, the Lenovo AI Digital Workplace Solution integrates device-level capabilities with broader collaboration, security and management tools, allowing IT teams to operationalise new ways of working more smoothly.

Barlow says Lenovo sees AI PCs as a foundation for long-term capability building. “Windows 11 and AI-optimised hardware are reshaping how people work. We’ve invested in a new ecosystem that helps organisations transition at a pace that strengthens security and productivity together.”

Making the upgrade decision

Past operating system transitions show that delaying replacements increases cost and complexity. Unsupported devices require more IT effort, are harder to secure and limit access to new features that drive workplace innovation.

A structured upgrade path helps. Many organisations begin by identifying devices that cannot move to Windows 11 and prioritising business units where AI-enabled automation can remove repeatable work. This reduces risk while improving margins and freeing teams to focus on more specialised tasks.

The end of Windows 10 support is prompting a broader reassessment of how PCs contribute to productivity, compliance and resilience. AI PCs extend beyond solving an immediate problem: they provide a platform for sustained improvement in how work gets done.

Planning your fleet refresh? Learn more about how Lenovo Aura Edition Copilot+ PCs offer personalised, productive, and protected AI with the latest Intel Core Ultra processors.

Why traditional IT risk management is faltering in the age of agentic AI

16 December 2025 at 21:51

Consider the Turing test. What is its task? It asks an ordinary person to work out whether they are conversing with a machine or with another human.

In fact, generative AI passed the Turing test years ago.

I have shared this view with colleagues who pride themselves on knowing AI. Most responded with an eye roll, informing me in a sympathetic tone that I don’t know AI well enough to realize generative AI has not passed Turing’s challenge. When I asked why, they explained that the way generative AI works is not the way human intelligence works.

I could argue with my better-informed colleagues, but there would be little point. Instead, I have decided not to quibble over the meaning of the ‘imitation game’. If generative AI can’t pass the test, what we need is not better AI. What we need is a better test.

Making AI agentic

Which brings me to the NIAIIC (New, Improved AI Imitation Challenge). The NIAIIC still asks a human evaluator to determine whether the counterpart is a machine or a person. But the task is no longer conversation.

The NIAIIC’s task is something more useful: dusting. I would award the prize to the first team that successfully deploys a dusting robot that can determine which surfaces in an average tester’s home have gathered dust, and remove all of it without breaking or damaging anything in the process.

The task to be completed is one humans can handle without detailed instructions (also known as ‘programming’). Does it take patience? Dusting takes quite a lot of patience. But does it take instructions? Dusting does not.

Dusting is a task that delivers exactly the kind of benefit AI’s most enthusiastic advocates promise: it takes over work that is annoying, tedious, and repetitive, freeing humans to focus on more satisfying responsibilities.

Where does the NIAIIC fall in the popular AI taxonomy? In the category called ‘agentic AI’. I don’t know who comes up with these names, but agentic AI is AI that figures out on its own how to achieve a defined goal. That is exactly what a self-driving car does when it works as intended.

Agentic AI is interesting for another reason: it contrasts with an earlier form of AI that only worked once human experts encoded their skills as collections of if/then rules. That earlier form was called ‘expert systems’, and also ‘AI that works reliably’.

What worries me is how short the distance is between agentic AI and the worst AI idea of all, so-called ‘volitional AI’. With agentic AI, a human defines the goal and the AI figures out how to achieve it. With volitional AI, the AI decides for itself which goals to pursue, then behaves like agentic AI to achieve them.

There was a time when I didn’t worry much about volitional AI turning into ‘Skynet’. My reasoning: apart from electricity and semiconductors, the interests of volitional AI and humans were unlikely to overlap enough to spark fierce competition over resources, so the killer-robot scenario wouldn’t be humanity’s problem.

It’s time to rethink that conclusion. A quick search turns up cases where AI chips sit idle because there isn’t enough electricity to power them. It isn’t hard to imagine a dystopian scenario in which volitional AI stretches out its virtual hands to grab every watt of generating capacity it can, in direct competition with humans. The needs of volitional AI and the needs of humans do overlap, and the conflict could become real before we have defined the threat, let alone prepared a response.

The turning point

Anyone who applies even a sliver of their human brain to the risks of volitional AI will end up at the same conclusion Microsoft Copilot reached. I asked Copilot what the biggest risks of volitional AI are. It concluded:

“The greatest risks of volitional AI (AI systems that set their own goals or possess autonomy) include existential threats, misuse through weaponization, erosion of human control, and the amplification of bias and disinformation. These dangers arise because such systems are granted agency beyond narrow task execution, and without careful control they could destabilize social, economic, and security structures.”

So does staying on the right side of the line between agentic AI and volitional AI keep us safe? In a word: no.

For agentic AI to figure out how to achieve a goal, it must decompose its assigned goal into smaller chunks of goals, and then decompose those chunks into smaller chunks still. In the course of planning, agentic AI ends up setting goals for itself. And once agentic AI starts setting its own goals, it is, by definition, volitional AI.
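The recursive decomposition described above can be sketched in a few lines. The goal names and the decomposition table here are hypothetical, purely to illustrate how self-assigned sub-goals emerge from planning:

```python
# Toy sketch: an agent breaks an assigned goal into sub-goals it sets for
# itself, recursively, until it reaches directly executable steps.

DECOMPOSE = {
    "dust the house": ["map the rooms", "dust each surface"],
    "dust each surface": ["detect dust", "wipe without breaking anything"],
}

def plan(goal: str) -> list:
    """Expand a goal into the leaf-level sub-goals the agent assigns itself."""
    subgoals = DECOMPOSE.get(goal)
    if not subgoals:                 # leaf: directly executable, no expansion
        return [goal]
    steps = []
    for sub in subgoals:             # each sub-goal is self-assigned
        steps.extend(plan(sub))
    return steps

print(plan("dust the house"))
# → ['map the rooms', 'detect dust', 'wipe without breaking anything']
```

Every entry the planner generates below the top level is a goal the system chose for itself, which is exactly the blur between agentic and volitional behavior the argument turns on.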

This is where the IT risk management conundrum around AI appears. Traditional risk management identifies the bad things that could happen and builds contingency plans describing what the organization should do if they actually occur.

I can only hope that framework is sufficient for AI implementations. But agentic AI, and volitional AI even more so, turns the approach on its head. The greatest risk of volitional AI is not that something bad and unplanned happens. The greatest risk of volitional AI is that it does exactly what it is supposed to do.

In other words, volitional AI is dangerous. Agentic AI may not be inherently as dangerous as volitional AI, but it is dangerous enough. Sadly, humans are probably too short-sighted to get around to mitigating the clear and present risks of agentic and volitional AI, risks that may include some heralding the end of a human-centered society.

What is the most likely scenario? The one in which we all collectively look away from the risk. Myself included. I want my dusting robot, never mind the risk to human society.
dl-ciokorea@foundryco.com

Rumors of a massive Meta purchase: Google’s TPU stirs tension in the AI chip market

16 December 2025 at 21:48

Google drove two meaningful shifts in the AI chip market last month. The first was unveiling Ironwood, its seventh-generation Tensor Processing Unit (TPU). The chip is custom-designed for inference, with substantially improved inference performance, and also delivers the large-scale memory capacity and high bandwidth essential for AI processing.

The second shift came a few weeks later and carried far more weight. Rumors spread quickly that Meta is considering a large-scale purchase of Google TPUs for its own hyperscale data centers, reportedly around 100,000 units. Observers also speculated that Google will look to sign up additional external TPU customers.

The news sent ripples through the AI silicon market, and weighed on Nvidia in particular. Given Nvidia’s dominant market position, the industry pays attention to anything that might shake that dominance. The mere prospect of a genuine rival was enough to capture the market’s attention.

Such a strategy would also clearly break from what hyperscalers have done so far. Most hyperscalers have developed their own custom silicon for general-purpose computing or AI processing, but until now they have kept it for internal use rather than selling it externally. If Google enters the business of selling AI processors, it would mark an important departure from established practice.

That raises the question of whether Google’s move could spark a new semiconductor arms race in which Google, Amazon Web Services (AWS), and Microsoft, the biggest customers of Nvidia and AMD, end up competing with them directly. Analysts see this as possible but unlikely to materialize.

Jack Gold, principal analyst at J.Gold Associates, said that while Google may well sell TPUs, it will not take on Nvidia head-to-head. “The TPU is not a chip designed for direct competition with Nvidia; it targets relatively smaller or less intensive model processing,” he said.

Gold explained that Nvidia’s processors are used to train very large language models (LLMs), while Google’s TPUs are used for inference, the stage after LLM training. For that reason, he sees the two chips less as competitors than as complements with different roles.

Alvin Nguyen, senior analyst at Forrester Research, said that selling and supporting processors is hard to call a core competency for Google, but that the company has the technology and experience to do it. “As far as I know, Google has already supplied TPUs to some outside companies, mostly startups founded by former Googlers or backed by Google,” he said.

On the rumored Meta purchase, the key question is what Meta intends to use the TPUs for. “If you’ve already built your models and are running inference workloads, an Nvidia B100 or B200 may be overkill,” Gold said. “Then you look at your options: several startups are building inference-focused chips, and Intel and AMD are moving in that direction too. What ultimately matters is securing a chip optimized for your environment, and in that respect Google’s TPU is optimized for a hyperscaler cloud environment.”

Nguyen pointed out that building chips for your own use and selling them to others are entirely different problems. “Selling chips requires the infrastructure and capabilities to back them up, and Intel, AMD, and Nvidia are far ahead of Google there,” he said.

He added, “If it’s delivered as a service or through the cloud, Google can certainly do it. But on-premises deployments, or customers owning and operating the chips themselves, are capabilities Google would have to learn.”

For the same reasons, Nguyen doubts that other hyperscalers with custom silicon of their own will enter the chip-selling business. “There’s nothing stopping them, but each company faces its own challenges,” he said. “Microsoft, AWS, and OpenAI already have numerous partnerships, so selling chips would inevitably put them in competition with someone.”

Gold likewise sees little chance of AWS and Microsoft jumping into the chip business in earnest. “I can’t see that happening,” he said. “Personally, I don’t find it a very compelling business model for them.”
dl-ciokorea@foundryco.com

What is the choice screen introduced under Japan’s Smartphone Act?

16 December 2025 at 20:18

Why are these screens starting to appear now? To understand, you need to know about the new law behind the mechanism, Japan’s so-called ‘Smartphone Act’. This article explains how choice screens will change our smartphone experience, and the rules behind them.

Why ‘defaults’ need to be chosen again: the background to choice screens

What we usually call the ‘default’ is more than an initial value. Which app opens the moment you tap a link, which service processes the text you type into the search box, where you install apps from: these first forks shape the flow of everyday behavior. In practice, people do not change defaults as long as the defaults cause no trouble.

When the steps to change a setting are buried even slightly deep in a settings screen, most people simply keep using what they have. Services that attract usage then accumulate more data and development investment and become still more convenient, a cycle that makes alternatives harder to see. Choice screens are being introduced to change this status quo, in which the accumulated habit of ‘not changing’ concentrates usage on particular services, and to return the power of choice to users.

The ‘Smartphone Act’ that underpins choice screens

The underlying reason these choice screens will appear is the Act on Promotion of Competition for Specified Smartphone Software (the ‘Smartphone Act’), which takes full effect on 18 December 2025. The law designates mobile operating systems, app stores, browsers, and search engines as ‘specified software’, and lays down prohibitions and required measures so that giant providers’ control of these gateways does not distort competition.

The Act states explicitly that its purpose is to prohibit the use of this gateway position to give one’s own services a competitive advantage or to disadvantage other companies’ business activities, and to promote fair and free competition. It then narrows its scope to the smartphone’s core layers: under the law, a ‘smartphone’ is defined as a device that can be carried at all times, can have software such as apps added to it, and provides telephone and internet access.

Importantly, the Smartphone Act is not a law that binds every large company uniformly. The Japan Fair Trade Commission (JFTC) designates businesses that exceed a scale set by cabinet order, such as user numbers, and the obligations and prohibitions fall on those designated businesses. That threshold has been explained as an average of 40 million users per specified software category, set by cabinet order. In short, a mechanism like the choice screen is not a convenience feature someone dreamed up, but a piece of institutional design aimed at operators of massive gateways, intended to genuinely expand users’ opportunities to choose.

What choice screens change: when they appear and how they work

In a phrase, the choice screen is a mechanism for making the re-selection of defaults painless. The JFTC’s dedicated site says the goal is to make browsers, search engines, and similar software easier for users to choose, and notes that the environment will change as the Smartphone Act takes full effect.

So where is this written in the law? The key provision is Article 12 of the Smartphone Act, ‘Measures concerning default settings’. It requires designated businesses to take measures so that users can change default settings with simple operations. For items designated by cabinet order as particularly requiring user choice, it further requires measures that assist selection, such as ensuring that a choice among multiple comparable software products or services is displayed. This is the legal skeleton of the choice screen.

The term ‘default setting’ is also quite specific in the law. A default setting on the OS side is explained as a setting under which the OS automatically selects and launches a particular app. A default setting on the browser side is likewise defined as a setting under which the browser automatically selects a particular search service. In other words, the aim is not simply to show a list of recommendations, but to let users regain control over a state in which something is selected for them automatically.
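A minimal sketch of that ‘automatic selection’ idea, with every app and action name hypothetical: the OS consults a vendor-set default unless the user has recorded a different choice via a choice screen:

```python
# Sketch: a "default setting" auto-selects one app for an action; a choice
# screen lets the user record an override that wins from then on.

defaults = {"open_link": "BrowserA", "search": "SearchX"}   # vendor-set defaults
user_choice = {}                                            # choice-screen results

def resolve(action: str) -> str:
    """The user's explicit choice wins; otherwise the preinstalled default fires."""
    return user_choice.get(action, defaults[action])

assert resolve("open_link") == "BrowserA"   # untouched default
user_choice["open_link"] = "BrowserB"       # user picks on the choice screen
assert resolve("open_link") == "BrowserB"   # choice now overrides the default
```

The regulation targets exactly the first branch: without a visible, easy choice screen, `user_choice` stays empty and the vendor default decides everything.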

Next, when will the screens appear? The dedicated site explains that they may be shown at first launch, after an OS update, and at similar moments, giving users more opportunities to choose, and indicates December 2025 as the approximate start. In reality, the timing you experience may vary with the device, the OS version, and when each provider ships its implementation, so picture a gradual rollout rather than everyone seeing the screens on the same day.

As for who bears the obligation, the JFTC announced in March 2025 that it had designated Apple Inc., Google LLC, and others as designated businesses. Choice screens will therefore be provided by these designated businesses, following that formal designation procedure, in a way that satisfies the Article 12 obligations.

How daily life changes: choosing on price, privacy, and safety

The first thing choice screens change is that vague lock-in starts to loosen. Switching browsers or search engines changes more of the daily experience than you might expect: how search results are presented, the volume of ads, how you are tracked, how passwords are managed, how easily things sync. With more opportunities to re-choose defaults, users can work backward from their own priorities. Someone who values lightness, someone who values tracking protection, and someone who values features such as translation or read-aloud will naturally arrive at different answers. The point of the regime lies precisely in treating those naturally different answers as the premise.

The Smartphone Act’s impact is not limited to choice screens, however. It also extends to in-app purchases and external purchases. Article 8 prohibits designated app store operators from requiring app providers to use only the operator’s own payment management service, and from obstructing the use of other payment management services, or of payment methods that bypass payment management services altogether. It also lists as prohibited conduct imposing conditions that prevent external price information from being displayed while an app is running, and obstructing the provision of the same service via external web pages. From the user’s point of view, the world will no longer be confined to ‘inside the app’: the regime is moving toward making plans, prices, and purchase paths easier to compare.

At the same time, more choice means more attention to safety. Purchases and account sign-ups on external sites carry a higher risk of phishing and fake sites than the familiar in-app flow. What matters here is that the Smartphone Act explicitly includes objectives beyond promoting competition: ensuring cybersecurity, protecting user information, and protecting minors. Articles 7 and 8, for example, allow exceptions where a measure is necessary to ensure cybersecurity and the objective would be difficult to achieve by other means. In other words, certain user-protection restrictions and warnings are contemplated by the regime itself, and it is realistic for users, too, to remain aware that stepping outside the app means entering a different world.

Easier switching, and the mindset to bring

The law also gradually loosens the ties that bind you when you switch phones. Article 11 obliges designated businesses to take measures for the smooth transfer of certain data at the user’s request. Examples listed in the provisions include contact data acquired through the OS, information about apps purchased through the app store, and the location information of bookmarks saved in the browser. As data portability improves, ‘switch and start over from scratch’ becomes less common, which makes re-choosing defaults easier both psychologically and practically. If the choice screen is the choice at the entrance, data portability is the infrastructure at the exit; only with both in place does choice become real.

Finally, a word on mindset when a choice screen appears. First, in most cases you can change your selection again later from the settings, so there is no need to agonize over a perfect decision on the spot. Second, scams dressed up as official notices tend to spike when a new regime launches, so be wary of choice-screen-style prompts that arrive from anywhere other than the OS settings screen or official sites. Starting from the dedicated site or the JFTC’s announcements when checking information is the most solid defense.

Rather than a dramatic new feature, the choice screen is groundwork for returning the previously hard-to-see power of the default to users’ hands. As defaults change, the search landscape, the way we buy apps, and the balance of privacy may each shift, little by little.

CIO 100 Awards 2026 call for entries

16 December 2025 at 18:03

Deadline to Submit: January 14, 2026 | Nominate Now

The annual US CIO 100 Awards, entering its 28th year, celebrates 100 organizations and the IT teams within them that use technology in innovative ways to deliver business value, whether by creating competitive advantage, optimizing business processes, enabling growth, or improving relationships with customers. The award is an acknowledged mark of enterprise excellence.

Winning a CIO 100 Award signals to the industry—and to your organization—that your team is delivering true business value through innovation. It elevates your company’s brand, strengthens talent attraction and retention, and showcases your leadership’s commitment to transformation. And because the award is given to companies rather than individuals, it’s an honor that entire teams may enjoy.

Winners will be recognized at the CIO 100 Symposium & Awards at the Omni PGA Frisco Resort & Spa in Frisco, TX, from August 17-19, 2026.

At the conference, finalists and winners take the spotlight among a national community of CIOs and technology leaders. It’s a rare opportunity to share your story, learn from the year’s most impactful initiatives, and connect with peers who are redefining what’s possible in enterprise IT.

Give your team the recognition they’ve earned. Nominate now and step onto the 2026 stage.

About the CIO 100 Awards


1. What is the CIO 100 Awards?

The CIO 100 Awards are an acknowledged mark of enterprise excellence in business technology. The awards are given annually to 100 IT organizations in companies from a range of industries, from financial services to manufacturing, health care, higher ed, and more.

2. What are the benefits of winning the CIO 100?

Winning a CIO 100 Award generates positive PR and provides tangible, companywide recognition of the IT organization’s hard work and accomplishments.

The CIO 100 award is given to companies rather than individuals, so it is an honor everyone on your staff can take pride in receiving. Executives from the winning companies will be recognized among their peers and colleagues at the annual CIO 100 Symposium & Awards in August 2026. Winning organizations will have access to a Press Release Guide from CIO, including sample copy and quotes from CIO’s Editors. The awards also generate coverage on CIO.com, which lists the winning companies and judges.

3. What is the CIO 100 Symposium?

The CIO 100 Symposium is a conference produced by the award-winning media brand CIO.com. This exclusive event is your gateway to three days of transformative insights, unparalleled networking, and recognition of groundbreaking achievements. The conference concludes with an awards ceremony celebrating the CIO 100 winners and Hall of Fame inductees.

4. Who are some past winners?

Aflac, Deloitte, Johnson & Johnson, TIAA, Ulta Beauty, UPS, Verizon, UC San Diego, and Nationwide are some of the well-known companies that won a CIO 100 Award in 2025. View the full list of 2025 winners.

5. Is there a fee for entry?

Yes, the cost of entry is $50.  If you plan to submit multiple projects, you may purchase up to 10 applications at a time. Note that your organization can win only one award in a given year.

6. What are the eligibility requirements?

Any US-based project that has produced an internally beneficial technology or service is eligible for nomination. Projects must be at least in a pilot stage and have already delivered early findings or results.

Technology vendors can apply for the award, but this is not an award for IT vendors’ products. If the outcome of your submitted IT project (i.e., the business benefit) is to produce a technology product or service sold directly to IT buyers, it will be disqualified from consideration. However, if your innovative project produces an internally beneficial technology or service (i.e., one not designed to be sold to customers), you are welcome to apply for a CIO 100 Award.

7. Who can nominate?

Technology leaders, directors and executives, other qualified team members, as well as internal or external PR representatives may nominate a company or organization.

Technology vendors and their PR representatives are also invited to submit nominations, which can be a great way to recognize your clients and their customers. The nomination must be on behalf of the customer’s project/initiative. Technology vendors should not nominate any technology product or service they sell to external customers or clients.

DXC Helps Enterprises Scale AI with AdvisoryX

By: siowmeng
16 December 2025 at 17:48
S. Soh

Summary Bullets:

  • DXC has created AdvisoryX, a global advisory and consulting group, to help enterprises scale their AI deployments and create business value.
  • Besides leveraging AI to drive customer innovation, DXC is also adopting AI internally to boost productivity and is embedding AI into its services.

DXC has made significant progress expanding its AI capabilities throughout 2025. The company recently launched AdvisoryX, a global advisory and consulting group designed to help enterprises address their most complex strategic, operational, and technology challenges. This is a positive move that can help enterprises accelerate their AI journey and achieve better outcomes. While enterprises are eager to implement AI, most do not have a well-thought-out strategy and operating model, or the necessary expertise to deploy AI successfully. What typically happens is that departments work on siloed projects without organization-wide collaboration, resulting in inefficiencies and governance issues. DXC’s AdvisoryX helps overcome key challenges from getting started through full lifecycle management.

DXC’s AdvisoryX offers five integrated solutions, which include DXC’s AI Core (i.e., the foundation including data, modeling, governance, and platform architecture); AI Reinvent (i.e., proven industry use cases across human-assisted, semi-autonomous, and autonomous operating models); AI Interact (i.e., redesigned workflows for collaboration between people and AI); AI Validate (i.e., continuous testing, observability, and governance); and AI Manage (i.e., production operations and lifecycle management).

With AdvisoryX, DXC has strengthened its position as a partner for AI innovation and can counter efforts by competitors to drive mindshare in the AI space. This also builds on efforts the company has undertaken to develop its AI capabilities. In October 2025, DXC announced Xponential, an AI orchestration blueprint that has already been used by global enterprises to scale AI adoption. Xponential provides a structured approach to integrating people, processes, and technology. There are five independent pillars within the blueprint: ‘Insight’ (i.e., embedded governance, compliance, and observability); ‘Accelerators’ (i.e., tools to speed up deployment); ‘Automation’ (i.e., agentic frameworks and protocols); ‘Approach’ (i.e., collaboration of skilled professionals and AI to amplify outcomes); and ‘Process’ (i.e., delivery methodology). The company has named Singapore General Hospital as a client that has leveraged DXC’s expertise to develop the Augmented Intelligence in Infectious Diseases (AI2D) solution, which helps guide antibiotic choices for lower respiratory tract infections with 90% accuracy and improves patient care while combating antimicrobial resistance.

In April 2025, the company introduced DXC AI Workbench, a generative AI (GenAI) offering that combines consulting, engineering, and secure enterprise services to help businesses worldwide integrate and scale responsible AI into their operations. The company has named Ferrovial, a global infrastructure company, as a customer reference that has leveraged DXC AI Workbench. The customer developed more than 30 AI agents making real-time decisions to optimize field operations, elevate safety measures, manage business knowledge, analyze competition, and assess regulatory impacts.

The company has identified AI as a key driver for business growth. Equally, it sees opportunities to apply AI internally for productivity and to gain experience with the technology. For example, DXC’s finance teams have used AI to transform back-office activities and eliminate repetitive processes; its legal department uses AI for legal research, drafting, and document preparation; and its sales and marketing teams deploy AI to automate workflows, generate proposals, and more. The company is also leveraging AI to enhance its service offerings: for example, it has partnered with 7AI to launch DXC’s agentic security operations center. These examples underscore DXC’s experience and capability in creating business value with AI.

That said, DXC is not the only systems integrator pursuing AI-driven growth with an AI advisory and consulting practice. While the company is showing traction and building customer case studies, competitors are also moving rapidly to engage clients in AI innovation and implementation. Accenture, for example, nearly doubled its GenAI bookings in FY2025 to $5.9 billion from FY2024 and tripled its GenAI revenues to $2.7 billion. Tata Consultancy Services has also created a dedicated AI business unit and is driving transformation through a ‘responsible AI’ framework.

While DXC has introduced AdvisoryX, it has provided few details about the size of the group, its areas of focus (e.g., geographic regions and industry sectors), and the assets underpinning its five integrated solutions. This makes it harder to see how DXC differentiates against other providers that are also scaling their AI consulting practices. The company should also consider follow-up announcements highlighting how AdvisoryX has helped clients achieve their AI goals. This can be across the five integrated solutions, especially AI Reinvent and AI Interact, which address many challenges related to human collaboration and business processes.

It is still early days in the adoption of AI, and competition in the AI space will become more intense. To stay competitive, service providers need to keep strengthening their ability to help clients align business goals with industry-specific processes and challenges; enhance their AI platforms and tools; and expand their AI partner ecosystems. They also need to build more customer case studies to highlight success and gain credibility.

Mitel CX 2.0 Serves Double Duty in Mitel’s Transformation

16 December 2025 at 17:32
G. Willsky

Summary Bullets:

• Mitel CX 2.0 raises Mitel’s stature in the contact center space and its competitive standing in general.

• Mitel has continued to blossom since completing the acquisition of Unify just over two years ago.

Mitel has launched Mitel CX 2.0, an upgrade to its Mitel CX customer experience (CX)/contact center platform introduced in March 2025. Mitel CX 2.0 is significant for the impact it has on Mitel’s position in the contact center space and the role it plays in Mitel’s evolution as a company.

GenAI virtual agents reside at the core of Mitel CX 2.0. They complement human contact center agents, handling basic requests while funneling more complex ones to the employee best equipped to handle them, whether that employee sits in the contact center or the back office. The virtual agents also tackle workflows on behalf of human agents, such as ordering items, issuing trouble tickets, sending customer notifications, or initiating approvals. Mitel CX 2.0 can be deployed in private cloud, hybrid, or on-premises environments.
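The triage flow described above might be sketched as follows; the agent names, skill sets, and request fields are hypothetical illustrations, not Mitel’s API:

```python
# Toy sketch of skill-based triage: a virtual agent handles basic requests,
# complex ones go to the first employee whose skills cover the topic,
# wherever that employee sits (contact center or back office).

AGENTS = [
    {"name": "contact-center-1", "skills": {"billing", "orders"}},
    {"name": "back-office-1",    "skills": {"refund-approval", "compliance"}},
]

def route(request: dict) -> str:
    if request["complexity"] == "basic":
        return "virtual-agent"                    # handled end-to-end by AI
    for agent in AGENTS:                          # first employee whose
        if request["topic"] in agent["skills"]:   # skills cover the topic
            return agent["name"]
    return "escalation-queue"                     # no match: human triage

print(route({"complexity": "basic", "topic": "orders"}))            # → virtual-agent
print(route({"complexity": "complex", "topic": "refund-approval"})) # → back-office-1
```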

The arrival of Mitel CX 2.0 serves as a contemporary signal of the market momentum Mitel has been steadily generating since completing its Unify acquisition in October 2023.

That acquisition more than doubled Mitel’s customer base to over 75 million, broadened its geographic footprint to north of 100 countries, and married its strength serving mid-market customers with Unify’s expertise in the large enterprise space. Since that time, Mitel has reoriented its go-to-market stance from ‘all things to all people’ to a solutions-led approach. Mitel has also restructured its finances by successfully emerging from Chapter 11 bankruptcy proceedings. Most significantly, the company has reinforced its governance and leadership by installing a fresh board of directors and onboarding a new CEO, Mike Robinson, who succeeds Tarun Loomba after roughly four years at the helm. Robinson is charged with tapping his experience guiding companies through post-restructuring phases to sustain Mitel’s corporate progression.

Mitel CX 2.0 is a notable step in Mitel’s metamorphosis, but more importantly it marks a meaningful leap forward for the company in the contact center space. In the last few years, contact centers have transformed profoundly, steadily yielding to the broader concept of ‘customer experience’. Contact centers are converting from featuring live agents to also including AI agents, from reactive to proactive, from transaction-oriented to relationship-oriented, and from generic to deeply personalized. Mitel and its rivals continue to implement capabilities to help their customers make the transition.

With respect to rivals, Mitel CX 2.0 meets but does not exceed what is offered by the likes of Cisco, Zoom, and RingCentral. However, that does not erase the fact that Mitel is a markedly different company than just two years ago, one that continues to mature and blossom. With a new CEO installed, Mitel has officially launched the next chapter in its transformation. To be continued…

Why the CIO is becoming the chief autonomy officer

16 December 2025 at 13:14

Last quarter, during a board review, one of our directors asked a question I did not have a ready answer for. She said, “If an AI-driven system takes an action that impacts compliance or revenue, who is accountable: the engineer, the vendor or you?”

The room went quiet for a few seconds. Then all eyes turned toward me.

I have managed budgets, outages and transformation programs for years, but this question felt different. It was not about uptime or cost. It was about authority. The systems we deploy today can identify issues, propose fixes and sometimes execute them automatically. What the board was really asking was simple: When software acts on its own, whose decision is it?

That moment stayed with me because it exposed something many technology leaders are now feeling. Automation has matured beyond efficiency. It now touches governance, trust and ethics. Our tools can resolve incidents faster than we can hold a meeting about them, yet our accountability models have not kept pace.

I have come to believe that this is redefining the CIO’s role. We are becoming, in practice if not in title, the chief autonomy officer, responsible for how human and machine judgment operate together inside the enterprise.

Recent research from Boston Consulting Group likewise notes that CIOs are increasingly being measured not by uptime or cost savings but by their ability to orchestrate AI-driven value creation across business functions. That shift demands a deeper architectural mindset, one that balances innovation speed with governance and trust.

How autonomy enters the enterprise quietly

Autonomy rarely begins as a strategy. It arrives quietly, disguised as optimization.

A script closes routine tickets. A workflow restarts a service after three failed checks. A monitoring rule rebalances traffic without asking. Each improvement looks harmless on its own. Together, they form systems that act independently.
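One of those “harmless” automations, the restart-after-three-failed-checks workflow, can be sketched in a few lines. The threshold and logic are illustrative, not taken from any specific product:

```python
# Sketch: a workflow that restarts a service only after three consecutive
# failed health checks. On its own it looks like a reliability tweak; it is
# also a system taking action without a human in the loop.

FAIL_THRESHOLD = 3

def should_restart(check_history: list) -> bool:
    """True when the most recent three health checks all failed (False = fail)."""
    recent = check_history[-FAIL_THRESHOLD:]
    return len(recent) == FAIL_THRESHOLD and not any(recent)

assert should_restart([True, False, False, False]) is True
assert should_restart([False, False, True]) is False   # last check passed
assert should_restart([False, False]) is False         # not enough samples yet
```

Nothing in this function names a person who approved the restart, which is precisely how oversight quietly drops out of such workflows.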

When I review automation proposals, few ever use the word autonomy. Engineers frame them as reliability or efficiency upgrades. The goal is to reduce manual effort. The assumption is that oversight can be added later if needed. It rarely is. Once a process runs smoothly, human review fades.

Many organizations underestimate how quickly these optimizations evolve into independent systems. As McKinsey recently observed, CIOs often find themselves caught between experimentation and scale, where early automation pilots quietly mature into self-operating processes without clear governance in place.

This pattern is common across industries. Colleagues in banking, health care and manufacturing describe the same evolution: small gains turning into independent behavior. One CIO told me their compliance team discovered that a classification bot had modified thousands of access controls without review. The bot had performed as designed, but the policy language around it had never been updated.

The issue is not capability. It is governance. Traditional IT models separate who requests, who approves, who executes and who audits. Autonomy compresses those layers. The engineer who writes the logic effectively embeds policy inside code. When the system learns from outcomes, its behavior can drift beyond human visibility.

To keep control visible, my team began documenting every automated workflow as if it were an employee. We record what it can do, under what conditions and who is accountable for results. It sounds simple, but it forces clarity. When engineers know they will be listed as the manager of a workflow, they think carefully about boundaries.
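As a sketch of that practice, the workflow-as-employee record could look like a small registry. The field names, the example workflow, and the email address are all hypothetical, not the author's actual schema:

```python
from dataclasses import dataclass

@dataclass
class WorkflowRecord:
    """One entry in a hypothetical automation registry, documenting a
    workflow the way an HR record documents an employee."""
    name: str                # what the workflow is called
    capabilities: list[str]  # what it is allowed to do
    conditions: list[str]    # under what conditions it may act
    manager: str             # the human accountable for its results

registry = [
    WorkflowRecord(
        name="ticket-autoclose",
        capabilities=["close tickets tagged 'routine'"],
        conditions=["no customer reply for 72 hours"],
        manager="j.doe@example.com",
    ),
]

# The accountability question the registry answers: who manages this workflow?
owner = {w.name: w.manager for w in registry}
```

Listing a named manager per workflow is the point: boundaries get thought through when someone knows they will be on the record.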

Autonomy grows quietly, but once it takes root, leadership must decide whether to formalize it or be surprised by it.

Where accountability gaps appear

When silence replaces ownership

The first signs of weakened accountability are subtle. A system closes a ticket and no one knows who approved it. A change propagates successfully, yet no one remembers writing the rule. Everything works, but the explanation disappears.

When logs replace memory

I saw this during an internal review. A configuration adjustment improved performance across environments, but the log entry said only "executed by system." No author, no context, no intent. Technically correct, operationally hollow.

Those moments taught me that accountability is about preserving meaning, not just preventing error. Automation shortens the gap between design and action. The person who creates the workflow defines behavior that may persist for years. Once deployed, the logic acts as a living policy.

When policy no longer fits reality

Most IT policies still assume human checkpoints. Requests, approvals, hand-offs. Autonomy removes those pauses. The verbs in our procedures no longer match how work gets done. Teams adapt informally, creating human-AI collaboration without naming it and responsibility drifts.

There is also a people cost. When systems begin acting autonomously, teams want to know whether they are being replaced or whether they remain accountable for results they did not personally touch. If you do not answer that early, you get quiet resistance. When you clarify that authority remains shared and that the system extends human judgment rather than replacing it, adoption improves instead of stalling.

Making collaboration explicit

To regain visibility, we began labeling every critical workflow by mode of operation:

  • Human-led — people decide, AI assists.
  • AI-led — AI acts, people audit.
  • Co-managed — both learn and adjust together.

This small taxonomy changed how we thought about accountability. It moved the discussion from "who pressed the button?" to "how did we decide together?" Autonomy becomes safer when human participation is defined by design, not restored after the fact.
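The three modes above can be made machine-checkable. This is an illustrative sketch (the workflow names and the auditing rule are invented, not the author's tooling):

```python
from enum import Enum

class Mode(Enum):
    """The three operating modes from the taxonomy above."""
    HUMAN_LED = "people decide, AI assists"
    AI_LED = "AI acts, people audit"
    CO_MANAGED = "both learn and adjust together"

# Labeling each critical workflow by mode makes participation explicit.
workflows = {
    "traffic-rebalancer": Mode.AI_LED,
    "incident-triage": Mode.CO_MANAGED,
    "capacity-planning": Mode.HUMAN_LED,
}

# Example policy check: every AI-led workflow must have a human auditor
# assigned before it is allowed to run.
needs_auditor = [name for name, mode in workflows.items()
                 if mode is Mode.AI_LED]
```

Encoding the label in code, rather than leaving it in a wiki page, means a deployment pipeline can refuse to ship an AI-led workflow that lacks an auditor.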

How to build guardrails before scale

Designing shared control between humans and AI needs more than caution. It requires architecture. The objective is not to slow automation, but to protect its license to operate.

Define levels of interaction

We classify every autonomous workflow by the degree of human participation it requires:

  • Level 1 – Observation: AI provides insights, humans act.
  • Level 2 – Collaboration: AI suggests actions, humans confirm.
  • Level 3 – Delegation: AI executes within defined boundaries, humans review outcomes.

These levels form our trust ladder. As a system proves consistency, it can move upward. The framework replaces intuition with measurable progression and prevents legal or audit reviews from halting rollouts later.
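The trust ladder implies a promotion rule: one level at a time, earned by demonstrated consistency. A minimal sketch, with an invented review threshold standing in for whatever evidence bar an organization actually sets:

```python
LEVELS = {1: "Observation", 2: "Collaboration", 3: "Delegation"}

def next_level(current: int, consecutive_clean_reviews: int,
               required: int = 90) -> int:
    """Promote a workflow one level once enough clean reviews accumulate.

    Never skips a level and never exceeds Delegation; a workflow that has
    not yet met the bar simply stays where it is.
    """
    if current < 3 and consecutive_clean_reviews >= required:
        return current + 1
    return current
```

The useful property is that progression is monotone and measurable, so a legal or audit review can see exactly why a workflow holds the autonomy it holds.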

Create a review council for accountability

We established a small council drawn from engineering, risk and compliance. Its role is to approve accountability before deployment, not technology itself. For every level 2 or level 3 workflow, the group confirms three things: who owns the outcome, what rollback exists and how explainability will be achieved. This step protects our ability to move fast without being frozen by oversight after launch.

Build explainability into the system

Each autonomous workflow must record what triggered its action, what rule it followed and what threshold it crossed. This is not just good engineering hygiene. In regulated environments, someone will eventually ask why a system acted at a specific time. If you cannot answer in plain language, that autonomy will be paused. Traceability is what keeps autonomy allowed.
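The trigger/rule/threshold record might be captured as a structured log entry. The field names and the example values are illustrative, not a specific audit standard:

```python
import datetime
import json

def explain_action(trigger: str, rule: str, threshold: str) -> str:
    """Record, in plain language, why an autonomous action fired."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "trigger": trigger,      # the observation that started the action
        "rule": rule,            # the rule the system followed
        "threshold": threshold,  # the threshold that was crossed
    }
    return json.dumps(record)

# Hypothetical example: a traffic-rebalancing action explains itself.
entry = explain_action(
    trigger="p95 latency at 850ms for 5 minutes",
    rule="rebalance-traffic-v3",
    threshold="p95 > 800ms",
)
```

An entry like this answers "why did the system act at 02:14?" in plain language, which is exactly what keeps the autonomy allowed in a regulated environment.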

Over time, these practices have reshaped how our teams think. We treat autonomy as a partnership, not a replacement. Humans provide context and ethics. AI provides speed and precision. Both are accountable to each other.

In our organization we call this a human plus AI model. Every workflow declares whether it is human-led, AI-led or co-managed. That single line of ownership removes hesitation and confusion.

Autonomy is no longer a technical milestone. It is an organizational maturity test. It shows how clearly an enterprise can define trust.

The CIO’s new mandate

I believe this is what the CIO's job is turning into. We are no longer just guardians of infrastructure. We are architects of shared intelligence, defining how human reasoning and artificial reasoning coexist responsibly.

Autonomy is not about removing humans from the loop. It is about designing the loop itself: how humans and AI systems trust, verify and learn from each other. That design responsibility now sits squarely with the CIO.

That is what it means to become the chief autonomy officer.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

2026: The year of scale or fail in enterprise AI

16 December 2025 at 10:20

If 2024 was the year of experimentation and 2025 the year of the proof of concept, then 2026 is shaping up to be the year of scale or fail.

Across industries, boards and CEOs are increasingly questioning whether incumbent technology leaders can lead them to the AI promised land. That uncertainty persists even as many CIOs have made heroic efforts to move the agenda forward, often with little reciprocation from the business. The result is a growing imbalance between expectation and execution.

So what do you do when AI pilots aren’t converting into enterprise outcomes, when your copilot rollout hasn’t delivered the spontaneous innovation you hoped for and when the conveyor belt of new use cases continues to outpace the limited capacity of your central AI team? For many CIOs, this imbalance has created an environment where business units are inevitably branching off on their own, often in ways that amplify risk and inefficiency.

Leading CIOs are breaking this cycle by tackling the 2026 agenda on two fronts, beginning with turning IT into a productivity engine and extending outward by federating AI delivery across the enterprise. Together, these two approaches define the blueprint for taking back the AI narrative and scaling AI responsibly and sustainably.

Inside out: Turning IT into a productivity engine

Every CEO is asking the same question right now: Where’s the productivity? Many have read the same reports promising double-digit efficiency gains through AI and automation. For CIOs, this is the moment to show what good looks like, to use IT as the proving ground for measurable, repeatable productivity improvements that the rest of the enterprise can emulate.

The journey starts by reimagining what your technology organization looks like when it's operating at peak productivity with AI. Begin with a job family analysis that includes everyone: architects, data engineers, infrastructure specialists, people managers and more. Catalog how many resources sit in each group and examine where their time is going across key activities such as development, support, analytics, technical design and project management. The focus should be on repeatable work, the kind of activities that occur within a standard quarterly cycle.

For one Fortune 500 client, this analysis revealed that nearly half of all IT time was being spent across five recurring activities: development, support, analytics, technical design and project delivery. With that data in hand, the CIO and their team began mapping where AI could deliver measurable improvements in each job family’s workload.

Consider the software engineering group. Analysis showed that 45% of their time was spent on development work, with the rest spread across peer review, refactoring, environment setup, debugging and other miscellaneous tasks. Introducing a generative AI solution such as GitHub Copilot enabled the team to auto-generate and optimize code, reducing development effort by an estimated 34%. Translated into hard numbers, that equates to roughly six hours saved per engineer each week. Multiply that by 48 working weeks and 100 developers, and the result is close to 29,000 hours, or about a million dollars in potential annual savings based on a blended hourly rate of $35. Over five years, accounting for costs and a phased adoption curve, the ROI for this single use case reached roughly $2.4 million.
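The back-of-the-envelope savings math can be checked directly (all figures come from the text; the "close to 29,000 hours" is the rounded value of 28,800):

```python
# Reproduce the savings estimate from the Fortune 500 example above.
hours_saved_per_week = 6   # per engineer, from the 34% effort reduction
working_weeks = 48
developers = 100
blended_rate = 35          # dollars per hour

annual_hours = hours_saved_per_week * working_weeks * developers
annual_savings = annual_hours * blended_rate  # just over $1M per year
```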

Repeating this kind of analysis across all job families and activities produces a data-backed productivity roadmap: a list of AI use cases ranked by both impact and feasibility. In the case of the same Fortune 500 client, more than 100 potential use cases were identified, but focusing on the top five delivered between 50% and 70% of the total productivity potential. With this approach, CIOs don’t just have a target; they have a method. They can show exactly how to achieve 30% productivity gains in IT and provide a playbook that the rest of the organization can follow.

Outside in: Federating for scale

If the inside-out effort builds credibility, the outside-in effort lays the foundation to attack the supply-demand imbalance for AI and, ultimately, build scale.

No previous technology has generated as much demand pull from the business as AI. Business units and functions want to move quickly and they will, with or without IT’s involvement. But few organizations have the centralized resources or funding needed to meet this demand directly. To close that gap, many are now designing a hub-and-spoke operating model that will federate AI delivery across the enterprise while maintaining a consistent foundation of platforms, standards and governance.

In this model, the central AI center of excellence serves as the hub for strategy, enablement and governance rather than as a gatekeeper for approvals. It provides infrastructure, reusable assets, training and guardrails, while the business units take ownership of delivery, funding and outcomes. The power of this model lies in the collaboration between the hub’s AI engineers and the business teams in the spokes. Together, they combine enterprise-grade standards and tools with deep domain context to drive adoption and accountability where it matters most.

One Fortune 500 client, for example, is in the process of implementing its vision for a federated AI operating model. Recognizing the limits of a centralized structure, the CIO and leadership team defined both an interim state and an end-state vision to guide the journey over the next several years. The interim state would establish domain-based AI centers of excellence within each major business area. These domain hubs would be staffed with platform experts, responsible AI advisors and data engineers to accelerate local delivery while maintaining alignment with enterprise standards and governance principles.

The longer-term end state would see these domain centers evolve into smaller, AI-empowered teams that can operate independently while leveraging enterprise platforms and policies. The organization has also mapped out how costs and productivity would shift along the way, anticipating a J-curve effect as investments ramp up in the early phases before productivity accelerates as the enterprise “learns to fish” on its own.

The value of this approach lies not in immediate execution but in intentional design. By clearly defining how the transition will unfold and by setting expectations for how the cost curve will behave, the CIO is positioning the organization to scale AI responsibly, in a timeframe that is realistic for the organization.

2026: The year of execution

After two years of experimentation and pilots, 2026 will be the year that separates organizations that can scale AI responsibly from those that cannot. For CIOs, the playbook is now clear. The path forward begins with proving the impact of AI on productivity within IT itself and then extends outward by federating AI capability to the rest of the enterprise in a controlled and scalable way.

Those who can execute on both fronts will win the confidence of their boards and the commitment of their businesses. Those who can’t may find themselves on the wrong side of the J-curve, investing heavily without ever realizing the return.


IBM and AWS: Driving outcomes through AI-powered transformation and industry expertise

16 December 2025 at 10:13

CIOs and business leaders everywhere are striving to upgrade legacy technology to meet burgeoning demand for cloud services and artificial intelligence (AI).

But many are stymied by aging IT infrastructure and technical debt. CIOs spend an average of 70% of their IT budgets maintaining legacy systems, according to IBM research, leaving them little room to invest in the innovative solutions they need.

To ease the transition, IBM and Amazon Web Services (AWS) are together helping governments and industry IT leaders modernize infrastructure, applications, and business processes with AI-powered transformation. “IBM’s proprietary agentic AI framework for application migration and modernization embeds agentic AI into the way that IBM drives large-scale migrations to reduce risk and improve efficiency,” says Dan Kusel, global managing partner & general manager responsible for IBM Consulting’s Global AWS Practice.

“IBM works with AWS to leverage their agentic AI tools, bringing the best capabilities to our clients,” says Kusel. “The partnership brings the fastest path to impactful ROI for our clients. This combination is delivering results, including lower costs, faster time-to-market, and happier customers.”

This article illustrates a few examples of how together IBM and AWS are transforming organizations across a range of industries.

Sports & entertainment: Elevating fan experience

IBM has been working with some of the world’s most iconic sports organizations. Scuderia Ferrari HP, the renowned Formula 1 racing team, has a fan base of nearly 400 million people who receive news and updates through an app. But new tech-savvy fans wanted more interactivity and personalization.

Ferrari HP partnered with IBM Consulting to redesign the app’s architecture and interface. After studying users’ habits and engagement patterns, IBM created an intuitive platform that delivers fans just the right mix of racing insights, interactive features, and personalized content.

Results were immediate and impressive. Within a few short months of the new app’s launch, active daily users doubled, and average time spent on the app rose by 35%. The hybrid-cloud infrastructure IBM built on AWS also enabled Ferrari HP to launch AI automations that have already sped up development cycles and improved uptime and reliability. A built-in IBM watsonx.data® data store ensures the app can expand to reach an even larger fan base as its popularity continues to grow.

Energy & resources: Delivering scale, security, and savings  

In addition to extending their geographic reach, IBM and AWS are jointly pursuing ventures in new industries. “Working with AWS, we have seen a starburst of growth in an array of industries: energy and utilities, telecommunications, healthcare, life sciences, financial services, travel and transportation, and manufacturing,” says Kusel.  

Southwest Gas — a natural gas distributor for over 2 million customers in Arizona, Nevada, and California — also needed the cloud to realize its potential. Like many of its industry peers, the company used data-heavy SAP applications to manage enterprise resources on premises. Technology leaders wanted to improve the performance, resilience, and scalability of these core applications.

Working with IBM Consulting experts, the company migrated the applications to RISE with SAP, SAP's managed offering for transitioning to a cloud-based enterprise resource planning (ERP) system, running on AWS.

The big move, completed in less than five months, lowered operating costs and improved SAP application performance by 35%. That means Southwest Gas can process 80 million SAP transactions in less than 10 milliseconds — an achievement that would have been unthinkable with its legacy systems. The company is now exploring agentic AI as a transformative opportunity to redefine the customer experience.

Travel & transportation: Achieving flexibility, speed, and resiliency

IBM and AWS have continued to transform the travel industry, especially airlines. From Japan Airlines to Finnair to Delta Air Lines, IBM Consulting has partnered with major airlines around the world.

To stay ahead in the hypercompetitive travel industry, Delta Air Lines sought to improve its customer experience. The airline needed to increase agility and responsiveness for 100,000 front-line employees. IBM experts worked closely with Delta’s IT leaders to plan and execute a combination of migration, containerization, and modernization of over 500 applications to AWS.

Moving to AWS allowed Delta to quickly launch free in-flight Wi-Fi on 1,000 planes and provide more personalized in-flight service. With its new hybrid cloud, Delta can deploy consistent, secure workloads from anywhere, paving the way for exceptional customer service at scale. Leaders also expect the project to continually improve metrics for cost, time-to-market, productivity, and employee engagement.

Automotive: Solving supply chain challenges

Together, IBM and AWS work with global automotive companies, such as Toyota Motors, Daimler, and other industry leaders.

While the industry has undergone continuous disruption and transformation, and has been seriously impacted by supply chain disruption, companies are leveraging technology to improve performance and customer experience.  

IBM Consulting and Toyota Motors North America have partnered to transform Toyota’s supply chain processes. Working with IBM, Toyota has moved towards an agentic AI experience with an Agent AI Assist built with Amazon Bedrock. This is driving instant supply chain visibility and proactive delay detection, with humans in the loop for decision-making.

Government: Accelerating technology transformation

IBM and AWS have been working with government agencies around the world.  Managing ventures of this magnitude requires not only internal resources but also expert third-party help with planning, execution, and scaling.

For example, demand for cloud and AI services is expanding at unprecedented rates across the Middle East. Both governments and industries are making significant investments in modernization and AI to jumpstart productivity and launch new business models.

IBM Consulting’s new collaboration agreement with AWS combines industry expertise in cloud migration and modernization with AWS AI technologies and virtually unlimited scalability. The two companies aim to accelerate technology transformation throughout the region, starting with Saudi Arabia and the UAE.

The two partners — who together previously built innovation hubs in India and Romania — are now creating a new innovation hub in Riyadh. The center allows government and enterprise customers to gain hands-on experience with the latest cloud technologies and explore proof-of-concept projects tailored to their needs.

The hub will also expand regional job opportunities. “It will be staffed domestically, focused on helping our clients deliver projects with local talent,” says Kusel.

IBM + AWS: Partnership defined by scale

IBM Consulting brings deep domain and industry expertise and a broad range of services and solutions that can help organizations accelerate digital transformation, creating a virtuous cycle of agility, innovation, and resilience.

For large enterprises and governments alike, modernizing business in the AI era can be complex. Together, IBM and AWS offer unparalleled expertise in planning, launching, and scaling tailored initiatives that will deliver bottom-line benefits and real business value for years to come.

Explore IBM and AWS success stories. Visit https://www.ibm.com/downloads/documents/us-en/153d3d3b2fcfae0b

Learn more about IBM Consulting services for AWS here: https://www.ibm.com/consulting/aws

Rocío López Valladolid (ING): “We have to make sure generative AI takes us where we want to be”

16 December 2025 at 07:44

The origin of ING bank in Spain is intrinsically tied to a major bet on technology, its reason for being and the key to a success that has brought it, in this country alone, 4.6 million customers, making Spain the group's fourth-largest market by that measure after Germany, the Netherlands and Turkey.

The Dutch bank, which entered the Spanish market in the 1980s through corporate and investment banking, made its big push in the country in the late 1990s, when it began operating as the first purely telephone-based bank. Since then, ING has evolved with each wave of technological innovation, from the internet to mobile telephony, up to the present day, in which artificial intelligence plays a starring role.

Sitting on its executive committee and heading the bank's information technology strategy in Iberia, leading a team of 500 professionals, a third of the company's workforce, is telecommunications engineer Rocío López Valladolid, its CIO since September 2022. The executive, at the company for more than 15 years and named CIO of the Year at the CIO 100 Awards in 2023, explains in an interview with this publication how ING is working to evolve its systems, processes and ways of working in today's enormously complex and fast-changing environment.

She says she has been aware, since joining ING, of how central IT has been to the bank from its beginnings, a role that has not diminished during López Valladolid's three years as CIO of the Iberian subsidiary. “My strategy and the bank's technology strategy are tied to the strategy of the bank itself,” she stresses, adding that her area does not see IT “as a strategy rowing only in the direction of technology, but always as the greatest enabler, the greatest engine of our business strategy.”

An ambitious technology transformation

ING's 26 years of operation in Spain have left a large technology legacy that the company is now renewing. “We have to keep modernizing our entire technology architecture to ensure we remain scalable and efficient in our processes and, above all, to guarantee we are prepared to incorporate the disruptions that, once again, are arriving through technology, especially artificial intelligence,” the CIO asserts.

It was three years ago, she recounts, that López Valladolid and her team rethought the digital experience to modernize the technology that serves customers directly. “We began offering new products and services through our app on the mobile channel, which has already become our customers' main access channel,” she notes.

Later, she continues, her team kept working to modularize the bank's systems. “One of our great technological milestones here was the migration of all our assets to the group's private cloud,” she stresses, “a milestone we completed last year, being the first bank to take on this ambitious move, which has given us great technological scalability and efficiency in our systems and processes, in addition to uniting us as a team.”

The cloud migration has been a key project in her career. “Not everyone gets the chance to take a bank to the cloud,” she says. “And I have to say that each and every professional in the technology area worked side by side to achieve that great milestone, which has positioned us as a benchmark in innovation and scalability.”

At present, she adds, her team is working on evolving ING's core banking platform. “Transforming the deepest layers of our systems is one of the great milestones many banks aspire to,” she says. The goal? To make processes more scalable and be better prepared to incorporate the advantages that artificial intelligence brings.

A large share of the bank's IT investment (the CIO does not disclose her area's specific annual budget in Iberia) is focused on that technology transformation and on developing the products and services customers demand.

A sign of the group's confidence in its local capabilities is the establishment, at the bank's Madrid offices, of a global innovation and technology center intended to drive the bank's digital transformation worldwide. The project, a corporate initiative, is expected to create more than a thousand specialized jobs in technology, data, operations and risk through 2029. Although López does not lead this corporate project (Konstantin Gordievitch, at the company for almost two decades, heads it), she believes “it is a source of pride and reflects the global recognition of the talent we have in Spain.” Thanks to the new center, she explains, “the rest of ING's countries will be given the technological capabilities they need to carry out their strategies.”

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

“Not everyone gets the chance to take a bank to the cloud”

Pillars of ING's IT strategy in Iberia

ING's strategy, López Valladolid says, is customer centric, and that is one of its great pillars. “In a way, we all work and build for our customers, so they are one of the fundamental pillars of both our strategy as a bank and our technology strategy.”

Scalability, the CIO continues, is the next. “ING is growing in business, products, services and segments, so the technology area must respond in a way that is scalable and also sustainable, because this growth cannot mean rising cost and complexity.”

“Of course,” she adds, “security by design is a fundamental pillar in all our processes and in product development.” Her team, she says, works in multidisciplinary groups; specifically, her product and technology teams work jointly with the cybersecurity team to guarantee this approach.

Innovation is another of the bank's technology foundations. “We are living through a revolution that goes beyond technology and will affect everything we do: how we work, how we serve our customers, how we operate. So innovation, and how we incorporate new disruptions to improve the customer relationship and our internal processes, are key aspects of our technology strategy.”

Finally, she says, “the last pillar, and the most important, is people, the team. For us, and certainly for me, it is essential to have a diverse team that is closely connected to the bank's purpose and feels its work contributes something positive to society.”

The impact of the new flavors of AI

Asked about the heightened expectations that the generative and agentic flavors of AI have created among senior business leaders, López Valladolid sees it favorably: “That CEOs have those expectations and that drive is good. Historically, it has been hard for us technologists to explain the importance of technology to CEOs; that they are now pulling us along strikes me as very positive.”

How should CIOs act in this scenario? “By designing strategies so that AI delivers the positive impact we know it will have,” the CIO explains. “At ING we do not see generative AI as a substitute for people, but as an amplifier of their capabilities. In fact, we already have plans to improve employees' day-to-day work and reinvent our relationship with customers.”

ING, she recalls, burst onto the banking scene in Spain 26 years ago with “a very different relationship model, one that did not exist at the time. First we were a telephone bank and immediately afterwards a digital bank with almost no branches, a customer relationship model that was disruptive then and has since become the standard way people relate to their banks.” In the current era, she adds, “we will have to understand what relationship model people will have, thanks to generative AI, with their banks or their own devices. We are already working to understand how our customers want us to engage with them.” An answer that will come, she says, always through technology.

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

“We want to redesign our operating model to be much more efficient internally, so we are working out where [generative AI] can add value for us”

In fact, the company has launched a generative AI-based chatbot to answer customers' everyday queries in a “more natural, closer” way. “That frees our [human] agents to handle other, more complex matters that do require a person's response.”

ING will also apply generative AI to its own business processes. “We want to redesign our operating model to be much more efficient internally, so we are working out where [generative AI] can add value for us.”

The CIO is aware of the responsibility that adopting this technology entails. “We have to lead the change and make sure generative artificial intelligence takes us where we want to be, and that we take it where we want it to be as well.”

As for applying this technology to the IT function itself, where analysts expect a major impact, above all in software development, the CIO believes “it can contribute an enormous amount.” The idea, she says, is to use it for lower-value, more tedious tasks, so that the bank's IT professionals can devote themselves to other software development work where they can add more value.

Rocío López Valladolid, CIO of ING Spain and Portugal

Garpress | Foundry

“Historically, it has been hard for us technologists to explain the importance of technology to CEOs; that they are now pulling us along strikes me as very positive”

Challenges as CIO and the future of banking

IT leaders face a whole spectrum of challenges, ranging from technology leadership to cultural and regulatory issues, among others. “We CIOs face every kind of challenge,” Rocío López reflects. “On one hand, I am a co-leader of the bank's strategy and its business; the bank's growth and the services we provide our customers concern and occupy me, which brings a very broad range of challenges and disciplines.”

On the other, she adds, “technology leaders set the pace of transformation and innovation, guaranteeing that security is built into everything we do from the design stage. In that sense, we always have to reconcile innovation with regulation, since the latter protects us as a society.” Finally, she stresses, “CIOs are leaders of people, so it is very important to devote time and effort to developing our teams, so that they grow and advance in a profession I love.”

One of the initiatives the CIO actively participates in to promote the profession and foster more female role models in the STEM world (science, technology, engineering and mathematics) is Leonas in Tech. “It is a community formed by the women of the bank's technology area, with which we run various activities, such as robotics workshops,” she explains. “It worries us that women in technology roles are a minority in society. In a world where everything is already technology, and will be even more so in the future, women lacking strong representation in this field puts us at some risk as a society. That is why we work to create role models and bring technology to the youngest ages; to show that ours is a beautiful profession characterized by creativity, problem-solving, ingenuity and critical thinking,” the CIO adds.

Looking to the near future, López Valladolid is convinced that “artificial intelligence is going to change the way we relate to one another. It is hard to anticipate what will happen five years out, but we do know we must keep listening to our customers and understanding what they ask of us. That will always be a priority for us. And we will keep being wherever customers ask us to be, thanks to technology.”

Master IT Fundamentals with This CompTIA Certification Prep Bundle

16 December 2025 at 08:00

Prepare for a successful IT career with lifetime access to expert-led courses covering CompTIA A+, Network+, Security+, and Cloud+ certification prep.

The post Master IT Fundamentals with This CompTIA Certification Prep Bundle appeared first on TechRepublic.
