CIOs take note: talent will walk without real training and leadership

Tech talent, especially with advanced and specialized skills, remains elusive. Findings from a recent global IT HR trends report by Gi Group show that, on average, 47% of enterprises struggle to source and retain talent. As a consequence, turnover remains high.

Another international study, by Cegos, highlights that in Italy alone, 53% of the 200 information systems directors and managers surveyed say the difficulty of attracting and retaining IT talent is something they face daily. Cybersecurity is their most pressing IT problem, but a slight majority feels confident about tackling it. By contrast, only 8% think they’ll be able to solve the IT talent problem. IT team skills development and talent retention are the next biggest issues facing CIOs in Italy, and only 24% and 9%, respectively, think they can successfully address them.

“Talents aren’t rare,” says Cecilia Colasanti, CIO of Istat, the National Institute of Statistics. “They’re there but they’re not valued. That’s why, more often, they prefer to go abroad. For me, talent is the right person in the right place. Managers, including CIOs, must have the ability to recognize talents, make them understand they’ve been identified, and enhance them with the right opportunities.”

The CIO as protagonist of talent management

Colasanti has very clear ideas on how to manage her talents to create a cohesive and motivated group. “The goal I set myself as CIO was to release increasingly high-quality products for statistical users, both internal and external,” she says. “I want to be concrete and close the projects we’ve opened, to ensure the institution continues to improve with the contribution of IT, which is a driver of statistical production. I have the task of improving the IT function, the quality of the products released, the relevance of the management, and the well-being of people.”

Istat’s IT department currently has 195 people, and represents about 10% of the institute’s entire staff. Colasanti’s first step after her CIO appointment in October 2023 was to personally meet with all the resources assigned to management for an interview.

“I’ve been working at Istat since 2001 and almost everyone knows each other,” she says. “I’ve held various roles in the IT department, and in my latest role as CIO, I want to listen to everyone to gather every possible viewpoint. Because of how well we know each other, I feel my colleagues have a high expectation of our work together. That’s why I try to establish a frank dialogue and avoid ambiguity. But I make it clear that listening doesn’t mean delegating responsibility. I accept some proposals, reject others, and try to justify choices.”

Another move was to reinstate the “two problems, two solutions” initiative launched at Istat many years ago. Colasanti asked staff, on a voluntary basis, to identify two problems and propose two solutions. She then processed the material and shared the results in face-to-face meetings, commenting on the proposals and evaluating which ones to follow up.

“I’ve been very vocal about this initiative,” she says. “But I also believe it’s been an effective way to cement the relationship of trust with my colleagues.”

Some of the inquiries related to career opportunities and technical issues, but the most frequent pain points that emerged were internal communication and staff shortages. Colasanti spoke with everyone, clarifying which points she could or couldn’t act on. Career paths and hiring in the public sector, for example, follow precise procedures where little could be influenced.

“I tried to address all the issues from a proactive perspective,” she says. “Where I perceived a generic resistance to change rather than a specific problem, I tried to focus on intrinsic motivation and people’s commitment. It’s important to explain the strategies of the institution and the role of each person to achieve objectives. After all, people need and have the right to know the context in which they operate, and be aware of how their work affects the bigger picture.”

Engagement must be built day by day, so Colasanti regularly meets with staff including heads of department and service managers.

Small enterprise, big concerns

The case of Istat stands out for the size of its IT department, but in SMEs, the IT function can be just a handful of people, including the CIO, with much of the work done by external consultants and suppliers. It’s a structure CIOs have to work within, dividing their time between coordinating various resources across different projects and doing the actual IT work. Outsourcing to the cloud is an additional support, but CIOs would generally like to have more in-house expertise rather than depend on partners to control supplier products.

“Attracting and retaining talent is a problem, so things are outsourced,” says the CIO of a small healthcare company with an IT team of three. “You offload the responsibility and free up internal resources at the risk of losing know-how in the company. But at the moment, we have no other choice. We can’t offer the salaries of a large private group, and IT talent changes jobs every two years, so keeping people motivated is difficult. We hire a candidate, go through the training, and see them grow only to see them leave. But our sector is highly specialized and the necessary skills are rare.”

The sirens of the market are tempting for those with the skills to command premium positioning, and the private sector is able to attract talent more easily than public due to its hiring flexibility and career paths.

“The public sector offers the opportunity to research, explore and deepen issues that private companies often don’t invest in because they don’t see the profit,” says Colasanti. “The public has the good of the community as its mission and can afford long-term investments.”

Training builds resource retention

To meet demand, CIOs are prioritizing hiring new IT profiles and training their teams, according to the Cegos international barometer. Offering reskilling and upskilling is an effective way to overcome the pitfalls of talent acquisition and retention.

“The market is competitive, so retaining talent requires barriers to exit,” says Emanuela Pignataro, head of business transformation and execution at Cegos Italia. “If an employer creates a stimulating and rewarding environment with sufficient benefits, people are less likely to seek other opportunities or get caught up in the competition. Many feel they’re burdened with too many tasks they can’t cope with on their own, and these are people with the most valuable skills, but who often work without much support. So if the company spends on training, or on onboarding new people to support them, it creates reassurance, which generates loyalty.”

Colasanti is a staunch supporter of lifelong learning, and of the experience that brings balance and management skills. She doesn’t have a large budget for IT training, yet solutions to certain requests are still within reach.

“In these cases, I want serious commitment,” she says. “The institution invests and the course must give a result. A higher budget would be useful, of course, especially for an ever-evolving subject like cybersecurity.”

The need for leadership

CIOs also recognize the importance of following people closely, empowering them, and giving them a precise and relevant role that enhances motivation. It’s also essential to collaborate with the HR function to develop tools for welfare and well-being.

According to the Gi Group study, the factors that IT candidates in Italy consider a priority when choosing an employer are, in descending order, salary, a hybrid job offer, work-life balance, the possibility of covering roles that don’t involve high stress levels, and opportunities for career advancement and professional growth.

But there’s another aspect that helps solve the age-old issue of talent management: CIOs need to give more weight to their own leadership. At the moment, Italian IT directors place it near the bottom of their key qualities. In the Cegos study, technical expertise, strategic vision, and the ability to innovate come first, while leadership comes a distant second. Yet the CIO’s leadership is a foundation of talent management, even when people disagree with individual choices.

“I believe in physical presence in the workplace,” says Colasanti. “Istat has a long tradition of applying teleworking and implementing smart working, which everyone can access if they wish. Personally, I prefer to be in the office, but I respect the need to reconcile private life and work, and I have no objection to agile working. I’m on site every day, though. My colleagues know I’m here.”

Agentic AI’s rise is making the enterprise architect role more fluid

In a previous feature about enterprise architects, gen AI had emerged, but its impact on enterprise technology hadn’t been felt. Today, gen AI has spawned a plethora of agentic AI solutions from the major SaaS providers, and enterprise architecture and the role of enterprise architect is being redrawn. So what do CIOs and their architects need to know?

Organizations, especially their CEOs, have been vocal about the need for AI to improve productivity and bring back growth, and analysts have backed the trend. Gartner, for example, forecasts that 75% of IT work will be completed by human employees using AI over the next five years, which will demand, it says, a proactive approach to identifying new value-creating IT work, like expanding into new markets, creating additional products and services, or adding features that boost margins.

If this radical change in productivity takes place, organizations will need a new plan for business processes and the tech that operates those processes. Recent history shows if organizations don’t adopt new operating models, the benefits of tech investments can’t be achieved.

As a result of agentic AI, processes will change, as well as the software used by the enterprise, and the development and implementation of the technology. Enterprise architects, therefore, are at the forefront of planning and changing the way software is developed, customized, and implemented.

In some quarters of the tech industry, gen AI is seen as a radical change to enterprise software, and to its large, well-known vendors. “To say AI unleashed will destroy the software industry is absurd, as it would require an AI perfection that even the most optimistic couldn’t agree to,” says Diego Lo Giudice, principal analyst at Forrester. Speaking at the One Conference in the fall, Lo Giudice reminded 4,000 business technology leaders that change is taking place, but it’s built on the foundations of recent successes.

“Agile has given better alignment, and DevOps has torn down the wall between developers and operations,” he said. “They’re all trying to do the same thing, reduce the gap between an idea and implementation.” He’s not denying AI will change the development of enterprise software, but like Agile and DevOps, AI will improve the lifecycle of software development and, therefore, the enterprise architecture. The difference is the speed of change. “In the history of development, there’s never been anything like this,” adds Phil Whittaker, AI staff engineer at content management software provider Umbraco.

Complexity and process change

As the software development and customization cycle changes, and agentic applications become commonplace, enterprise architects will need to plan for increased complexity and new business processes. Existing business processes can’t continue if agentic AI is taking on tasks currently done manually by staff.

Again, Lo Giudice adds some levity to a debate that can often become heated, especially in the wake of major redundancies by AI leaders such as AWS. “The view that everyone will get a bot that helps them do their job is naïve,” he said at the One Conference. “Organizations will need to carry out a thorough analysis of roles and business processes to ensure they spend money and resources on deploying the right agents to the right tasks. Failure to do so will lead to agentic technology being deployed that’s not needed, can’t cope with complex tasks, and increases the cloud costs of the business.”

“It’s easy to build an agent that has access to really important information,” says Tiago Azevedo, CIO for AI-powered low-code platform provider OutSystems. “You need segregation of data. When you publish an agent, you need to be able to control it, and there’ll be many agents, so costs will grow.”

The big difference, though, is between deterministic and non-deterministic agents, says Whittaker. Non-deterministic agents produce more variable outcomes, so they need guardrails provided by deterministic agents that produce the same output every time. Defining business outcomes as deterministic or non-deterministic is a clear role for enterprise architecture, and it’s also where AI can help organizations fill in gaps, he adds. Whittaker, who’s been an enterprise architect, says it’ll be vital for organizations to experiment with AI to see how it can benefit their architecture and, ultimately, business outcomes.
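To make the distinction concrete, here is a minimal sketch in Python, assuming a hypothetical call_agent function that stands in for any non-deterministic model or agent call: a deterministic validation step applies the same fixed business rules to every output before it is accepted, which is the guardrail role described above.

```python
import json

# Hypothetical stand-in for a non-deterministic agent or LLM call.
# In a real system this would invoke a model API; here it just returns text.
def call_agent(prompt: str) -> str:
    return '{"customer_id": "C-1042", "refund_amount": 125.0, "currency": "EUR"}'

# Deterministic guardrail: the same input always produces the same accept/reject decision.
def validate_refund(payload: str, max_refund: float = 500.0) -> dict:
    data = json.loads(payload)  # non-JSON output is rejected outright
    missing = {"customer_id", "refund_amount", "currency"} - data.keys()
    if missing:
        raise ValueError(f"agent output missing fields: {missing}")
    amount = data["refund_amount"]
    if not isinstance(amount, (int, float)) or amount < 0:
        raise ValueError("refund amount must be a non-negative number")
    if amount > max_refund:
        raise ValueError("refund exceeds limit; escalate to a human reviewer")
    return data

if __name__ == "__main__":
    raw = call_agent("Draft a refund decision for ticket 8841")
    try:
        print("Accepted:", validate_refund(raw))
    except ValueError as err:  # json.JSONDecodeError is a subclass of ValueError
        print("Blocked by guardrail:", err)
```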

“The path to greatness lies not in chasing hype or dismissing AI’s potential, but in finding the golden middle ground where value is truly captured,” write Gartner analysts Daryl Plummer and Alicia Mullery. “AI’s promise is undeniable, but realizing its full value is far from guaranteed. Our research reveals the sobering odds that only one in five AI initiatives achieve ROI, and just one in 50 deliver true transformation.” Further research also finds just 32% of employees trust the organization’s leadership to drive transformation. “Agents bring an additional component of complexity to architecture that makes the role so relevant,” Azevedo adds.

In the past, enterprise architects were focused on frameworks. Whittaker points out that new technology models will need to be understood and deployed by architects to manage an enterprise that comprises employees, applications, databases, and agentic AI. He cites MCP, the Model Context Protocol, as one: it provides a standard way to connect AI models to data sources, and simplifies the current tangle of bespoke integrations and RAG implementations. AI will also help architects with this new complexity. “There are tools for planning, requirements, creating epics, user stories, code generation, documenting code, and translating it,” added Lo Giudice.
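MCP itself is a JSON-RPC-based protocol with its own SDKs; the sketch below does not use that SDK or its wire format, but it illustrates the architectural idea in a few lines of Python: every data source is exposed through the same small list-and-read surface, so the model-facing side needs no bespoke integration per source.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative only: a uniform resource interface in the spirit of MCP,
# not the actual Model Context Protocol SDK or wire format.
@dataclass
class Resource:
    uri: str
    description: str
    read: Callable[[], str]  # how to fetch the content from the backend

class ContextServer:
    """One standard surface the AI side can query, regardless of backend."""
    def __init__(self) -> None:
        self._resources: Dict[str, Resource] = {}

    def register(self, resource: Resource) -> None:
        self._resources[resource.uri] = resource

    def list_resources(self) -> List[str]:
        return sorted(self._resources)

    def read_resource(self, uri: str) -> str:
        return self._resources[uri].read()

if __name__ == "__main__":
    server = ContextServer()
    server.register(Resource("crm://accounts/summary", "CRM account summary",
                             lambda: "Top accounts: Acme, Globex"))
    server.register(Resource("wiki://runbooks/incident", "Incident runbook",
                             lambda: "Step 1: page the on-call engineer"))
    # The model-facing client uses the same two calls for every data source.
    for uri in server.list_resources():
        print(uri, "->", server.read_resource(uri))
```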

New responsibilities

Agentic AI is now a core feature of every major EA tool, says Stéphane Vanrechem, senior analyst at Forrester. “These agents automate data validation, capability mapping, and artifact creation, freeing architects to focus on strategy and transformation.” He cites the technology of Celonis, SAP Signavio, and ServiceNow for their agentic integrations. Whittaker adds that the enterprise architect has become an important human in the loop to protect the organization and be responsible for the decisions and outcomes that agentic AI delivers.

Although some enterprise architects will see this as a collapse of their specialization, Whittaker thinks it broadens the scope of the role and makes them more T-shaped. “I can go deep in different areas,” he says. “Pigeon-holing people is never a great thing to do.”

Traditionally, architecture has suggested that something is planned, built, and then exists. The rise of agentic AI in the enterprise means the role of the enterprise architect is becoming more fluid as they continue to design and oversee construction. But the role will also involve continual monitoring and adjustment to the plan. Some call this orchestration, or perhaps it’s akin to map reading. An enterprise architect may plan a route, but other factors will alter the course. And just as weather or a fallen tree can force a route deviation, so too will enterprise architects plan and then lead when business conditions change.

Again, this new way of being an enterprise architect will be impacted by technology. Lo Giudice believes there’ll be increased automation, and Azevedo sides with the orchestration view, saying agents are built and a catalogue of them is created across the organization, which is an opportunity for enterprise architects and CIOs to be orchestrators.

Whatever the job title, Whittaker says enterprise architecture is more important than ever. “More people will become enterprise architects as more software is written by AI,” he says. “Then it’s an architectural role to coordinate and conduct the agents in front of you.” He argues that as technologists allow agents and AI to do the development work for them, the responsibility of architecting how agents and processes function broadens and becomes the responsibility of many more technologists.

“AI can create code for you, but it’s your responsibility to make sure it’s secure,” he adds. Rather than developing the code, technology teams will become architecture teams, checking and accepting the technology that AI has developed, and then managing its deployment into the business processes.

With shadow AI already embedded in organizations, Whittaker’s view shows the need for a team of enterprise architects who can help the business align with the AI agents it has deployed, and at the same time protect customer data and the cybersecurity posture.

AI agents are redrawing the enterprise, and at the same time replanning the role of enterprise architects.

78% of IT job postings already require AI skills

IT professionals reluctant to accept the impact AI will have on their careers might want to think again. According to a new study from the AI Workforce Consortium, the IT job market is undergoing an unprecedented transformation thanks to AI, and AI skills are becoming a core competency for IT pros.

The findings are based on analysis of job posting data from Cornerstone and Indeed, conducted by the Cisco-led consortium between July 2024 and June 2025 in G7 countries Canada, France, Germany, Italy, Japan, the UK, and the US.

AI is becoming a standard skill

The study revealed that AI skills are already explicitly required in 78% of advertised IT jobs. Furthermore, seven of the 10 fastest-growing IT jobs in G7 countries have a direct AI component, including software engineers, AI/ML developers, cloud engineers, and data engineers.

At the same time, soft skills such as communication, teamwork, and leadership are becoming increasingly important to ensure AI is used responsibly.

But it’s not just IT professionals who will be affected. “Looking a bit into the future, say to 2030, AI skills will be just as much a given as PC skills are today,” explains Yasmin Weiß when presented with the study results.

“Anyone applying for a knowledge worker position in 2030 who can only demonstrate insufficient AI skills will be perceived as just as uninteresting as someone applying today who can’t use a PC,” says Weiß, a professor who specializes in AI in the workplace.

Dangerous imbalance

In her view, the World Economic Forum (WEF) isn’t entirely wrong in its prediction that 83 million jobs will likely be lost to automation by 2027, while 69 million new ones will be created.

Factor in the demographic shift in developed economies — keyword: baby boomers — and it roughly balances out, says Weiß. The problem is that the employees whose jobs are being automated by AI or replaced by other technologies usually lack the qualifications for newly emerging roles. Simple upskilling is therefore often insufficient; what is needed is reskilling — i.e., learning entirely new skills.

“This raises the question of how realistic such retraining programs are — for example, whether a former office worker can become a cyber forensic expert in a short time,” Weiß says.

In addition to technical qualifications, a profound shift in mindset also plays a crucial role, explains Weiß. In the future, people will more frequently assume different professional identities over the course of their lives. For this to happen, meta-competencies such as adaptability, learning ability, and openness to change must be significantly strengthened, as they will form the basis for success in a rapidly changing world of work.

Christian Korff, VP of services, strategy, and planning for EMEA at Cisco, points out that many vacancies for learning development positions are currently being advertised in the English-speaking world to support this transformation. In comparison, Europe, and Germany in particular, still lag behind when it comes to investing in education and training and bringing people along on this journey.

As Weiß reports from her perspective as a lecturer, AI can also act as a driver and enabler in this transformation. For example, students can now learn in a highly individualized way with digital tools — such as chatbots — and reflect on their career prospects. Such opportunities did not exist before.

Is a ‘Lost Generation’ looming?

It cannot be denied, however, that the number of job postings for entry-level professionals, particularly in law firms, software companies, or consulting firms, is declining. Does this leave young people without prospects, or does it even threaten a “lost generation”?

Cisco manager Korff disagrees with this pessimistic view. While companies are currently focusing on experienced professionals to rapidly advance new technologies, many older employees are also nearing retirement.

In his company, as well as in the entire IT sector, this will create many new opportunities for young talent over the next five to seven years. Cisco is investing heavily in junior programs and an internal academy to actively shape the generational shift.

“And incidentally,” he notes, “the people who start with us are well-educated and dedicated — so anything but a lost generation.”

Weiß acknowledges that entry-level positions, particularly in sectors like legal consulting, are currently heavily impacted by automation. Many tasks previously performed by entry-level employees can now be taken over by AI.

Domain knowledge, which used to take a long time to build up, is now more quickly accessible thanks to AI. Therefore, entry-level positions need to be rethought, she explains.

“However, companies are only just beginning to develop such new role profiles, which are more focused on complex, knowledge-based tasks, instead of simply cutting jobs,” she concedes.

6 strategies for CIOs to effectively manage shadow AI

As employees experiment with gen AI tools on their own, CIOs are facing a familiar challenge with shadow AI. Although it’s often well-intentioned innovation, it can create serious risks around data privacy, compliance, and security.

According to 1Password’s 2025 annual report, The Access-Trust Gap, shadow AI increases an organization’s risk as 43% of employees use AI apps to do work on personal devices, while 25% use unapproved AI apps at work.

Despite these risks, experts say shadow AI isn’t something to do away with completely. Rather, it’s something to understand, guide, and manage. Here are six strategies that can help CIOs encourage responsible experimentation while keeping sensitive data safe.

1. Establish clear guardrails with room to experiment

Managing shadow AI begins with getting clear on what’s allowed and what isn’t. Danny Fisher, chief technology officer at West Shore Home, recommends that CIOs classify AI tools into three simple categories: approved, restricted, and forbidden.

“Approved tools are vetted and supported,” he says. “Restricted tools can be used in a controlled space with clear limits, like only using dummy data. Forbidden tools, which are typically public or unencrypted AI systems, should be blocked at the network or API level.”

Matching each type of AI use with a safe testing space, such as an internal OpenAI workspace or a secure API proxy, lets teams experiment freely without risking company data, he adds.
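As a rough illustration of that three-tier split, the sketch below assumes a hand-maintained policy table and hypothetical tool names; in practice the forbidden tier would be enforced at the network or API gateway rather than in application code, as Fisher notes.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = "approved"      # vetted and supported
    RESTRICTED = "restricted"  # controlled space only, e.g. dummy data
    FORBIDDEN = "forbidden"    # public/unencrypted systems, blocked at network or API level

# Hypothetical policy table an IT team might maintain.
AI_TOOL_POLICY = {
    "internal-openai-workspace": Tier.APPROVED,
    "vendor-copilot": Tier.APPROVED,
    "beta-summarizer": Tier.RESTRICTED,
    "public-chatbot.example": Tier.FORBIDDEN,
}

def check_tool(tool: str, uses_real_data: bool) -> str:
    tier = AI_TOOL_POLICY.get(tool, Tier.FORBIDDEN)  # unknown tools default to forbidden
    if tier is Tier.FORBIDDEN:
        return f"{tool}: blocked"
    if tier is Tier.RESTRICTED and uses_real_data:
        return f"{tool}: allowed only with dummy data"
    return f"{tool}: allowed"

if __name__ == "__main__":
    print(check_tool("beta-summarizer", uses_real_data=True))
    print(check_tool("shadow-notes-app", uses_real_data=False))
```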

Jason Taylor, principal enterprise architect at LeanIX, an SAP company, says clear rules are essential in today’s fast-moving AI world.

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.”

Taylor adds that companies should also create a clear list that explains which types of data are or aren’t safe to use, and in what situations. A modern data loss prevention tool can help by automatically finding and labeling data, and enforcing least-privilege or zero-trust rules on who can access what.

Patty Patria, CIO at Babson College, notes it’s also important for CIOs to establish specific guardrails for no-code/low-code AI tools and vibe-coding platforms.

“These tools empower employees to quickly prototype ideas and experiment with AI-driven solutions, but they also introduce unique risks when connecting to proprietary or sensitive data,” she says.

To deal with this, Patria says companies should set up security layers that let people experiment safely on their own but require extra review and approval whenever someone wants to connect an AI tool to sensitive systems.

“For example, we’ve recently developed clear internal guidance for employees outlining when to involve the security team for application review and when these tools can be used autonomously, ensuring both innovation and data protection are prioritized,” she says. “We also maintain a list of AI tools we support, and which we don’t recommend if they’re too risky.”

2. Maintain continuous visibility and inventory tracking

CIOs can’t manage what they can’t see. Experts say maintaining an accurate, up-to-date inventory of AI tools is one of the most important defenses against shadow AI.

“The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring.

Ari Harrison, VP of IT at branding manufacturer Bamko, says his team takes a layered approach to maintaining visibility.

“We maintain a living registry of connected applications by pulling from Google Workspace’s connected-apps view and piping those events into our SIEM [security information and event management system],” he says. “Microsoft 365 offers similar telemetry, and cloud access security broker tools can supplement visibility where needed.”

That layered approach gives Bamko a clear map of which AI tools are touching corporate data, who authorized them, and what permissions they have.
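One way to picture that layered approach is to reconcile the self-service registry against what the telemetry actually observes. The sketch below uses made-up tool names and plain set arithmetic; a real pipeline would pull the observed list from connected-apps events in the SIEM.

```python
# Tools employees have voluntarily logged in the self-service registry (hypothetical).
declared = {"chatgpt-enterprise", "copilot", "notion-ai"}

# Tools observed in connected-apps telemetry and SIEM events (hypothetical).
observed = {"chatgpt-enterprise", "copilot", "fireflies-ai", "random-pdf-summarizer"}

unregistered = observed - declared  # shadow AI candidates to investigate
stale = declared - observed         # registered but never seen; entries to review

print("Investigate (observed but not declared):", sorted(unregistered))
print("Review for removal (declared but not observed):", sorted(stale))
```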

Mani Gill, SVP of product at cloud-based iPaaS Boomi, argues that manual audits are no longer enough.

“Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. This gives organizations instant, real-time visibility into what each agent is doing, how much data it’s using, and whether it’s following the rules.

Tanium chief security advisor Tim Morris agrees that continuous discovery across every device and application is key. “AI tools can pop up overnight,” he says. “If a new AI app or browser plugin appears in your environment, you should know about it immediately.”

3. Strengthen data protection and access controls

When it comes to securing data from shadow AI exposure, experts point to the same foundation: data loss prevention (DLP), encryption, and least privilege.

“Use DLP rules to block uploads of personal information, contracts, or source code to unapproved domains,” Fisher says. He also recommends masking sensitive data before it leaves the organization, and turning on logging and audit trails to track every prompt and response in approved AI tools.

Harrison echoes that approach, noting that Bamko focuses on the security controls that matter most in practice: Outbound DLP and content inspection to prevent sensitive data from leaving; OAuth governance to keep third-party permissions to least privilege; and access limits that restrict uploads of confidential data to only approved AI connectors within its productivity suite.

In addition, the company treats broad permissions, such as read and write access to documents or email, as high-risk and requires explicit approval, while narrow, read-only permissions can move faster, Harrison adds.

“The goal is to allow safe day-to-day creativity while reducing the chance of a single click granting an AI tool more power than intended,” he says.

Taylor adds that security must be consistent across environments. “Encrypt all sensitive data at rest, in use, and in motion, employ least-privilege and zero-trust policies for data access permissions, and ensure DLP systems can scan for, tag, and protect sensitive data.”

He notes that companies should ensure these controls work the same on desktop, mobile, and web, and keep checking and updating them as new situations come up.
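A toy illustration of those outbound controls, assuming a hand-rolled regex pass and a hypothetical allow-list of approved AI domains; commercial DLP products use managed classifiers and inline proxies rather than code like this, but the allow, mask, or block decision has the same shape.

```python
import re
from typing import List, Tuple

APPROVED_DOMAINS = {"ai.internal.example.com"}  # hypothetical allow-list

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan(text: str) -> List[str]:
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def mask(text: str) -> str:
    for pattern in PATTERNS.values():
        text = pattern.sub("[REDACTED]", text)
    return text

def outbound_check(destination: str, text: str) -> Tuple[bool, str]:
    if destination in APPROVED_DOMAINS:
        return True, text         # approved tool: allow and log
    if scan(text):
        return False, mask(text)  # unapproved domain with sensitive content: block or mask
    return True, text

if __name__ == "__main__":
    ok, payload = outbound_check(
        "public-chatbot.example",
        "Contact jane.doe@corp.example, card 4111 1111 1111 1111")
    print("allowed:" if ok else "blocked, masked copy:", payload)
```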

4. Clearly define and communicate risk tolerance

Defining risk tolerance is as much about communication as it is about control. Fisher advises CIOs to tie risk tolerance to data classification instead of opinion. His team uses a simple color-coded system: green for low-risk activities, such as marketing content; yellow for internal documents that must use approved tools; and red for customer or financial data that can’t be used with AI systems.
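Expressed as a tiny lookup, that color-coded scheme might look like the sketch below; the labels and rules are illustrative rather than Fisher’s exact policy, and the useful property is that every request pairs a data class with a tool status, making the decision mechanical rather than a matter of opinion.

```python
# Hypothetical mapping of data classification to AI usage rules.
RULES = {
    "green": "any reasonable AI tool",         # e.g. public marketing content
    "yellow": "approved internal tools only",  # internal documents
    "red": "no AI systems",                    # customer or financial data
}

def may_use_ai(data_class: str, tool_is_approved: bool) -> bool:
    if data_class == "red":
        return False
    if data_class == "yellow":
        return tool_is_approved
    return data_class == "green"

print(may_use_ai("yellow", tool_is_approved=True))   # True
print(may_use_ai("red", tool_is_approved=True))      # False
```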

“Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories (what’s permitted, what needs approval, and what’s prohibited) and communicating that framework through leadership briefings, onboarding, and internal portals.

Patria says Babson’s AI Governance Committee plays a key role in this process. “When potential risks emerge, we bring them to the committee for discussion and collaboratively develop mitigation strategies,” she says. “In some cases, we’ve decided to block tools for staff but permit them for classroom use. That balance helps manage risk without stifling innovation.”

5. Foster transparency and a culture of trust

Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.

“Transparency means employees always know what’s allowed, what’s being monitored, and why,” Fisher says. “Publish your governance approach on the company intranet and include real examples of both good and risky AI use. It’s not about catching people. You’re building confidence that utilizing AI is safe and fair.”

Taylor recommends publishing a list of officially sanctioned AI offerings and keeping it updated. “Be clear about the roadmap for delivering capabilities that aren’t yet available,” he says, and provide a process to request exceptions or new tools. That openness shows governance exists to support innovation, not hinder it.

Patria says in addition to technical controls and clear policies, establishing dedicated governance groups, like the AI Governance Committee, can greatly enhance an organization’s ability to manage shadow AI risks.

“When potential risks emerge, such as concerns about tools like DeepSeek and Fireflies.AI, we collaboratively develop mitigation strategies,” she says.

This governance group not only looks at and handles risks, but explains its decisions and the reasons behind them, helping create transparency and shared responsibility, Patria adds.

Morris agrees. “Transparency means there are no surprises. Employees should know which AI tools are approved, how decisions are made, and where to go with questions or new ideas,” he says.

6. Build continuous, role-based AI training

Training is one of the most effective ways to prevent accidental misuse of AI tools. The key is to be succinct, relevant, and recurring.

“Keep training short, visual, and role-specific,” says Fisher. “Avoid long slide decks and use stories, quick demos, and clear examples instead.”

Patria says Babson integrates AI risk awareness into annual information security training, and sends periodic newsletters about new tools and emerging risks.

“Routine training sessions are offered to ensure employees understand approved AI tools and emerging risks, while departmental AI champions are encouraged to facilitate dialogue and share practical experiences, highlighting both the benefits and potential pitfalls of AI adoption,” she adds.

Taylor recommends embedding training in-browser, so employees learn best practices directly in the tools they’re using. “Cutting and pasting into a web browser or dragging and dropping a presentation seems innocuous until your sensitive data has left your ecosystem,” he says.

Gill notes that training should connect responsible use with performance outcomes.

“Employees need to understand that compliance and productivity work together,” he says. “Approved tools deliver faster results, better data accuracy, and fewer security incidents compared with shadow AI. Role-based, ongoing training can demonstrate how guardrails and governance protect both data and efficiency, ensuring that AI accelerates workflows rather than creating risk.”

Responsible AI use is good business

Ultimately, managing shadow AI isn’t just about reducing risk, it’s about supporting responsible innovation. CIOs who focus on trust, communication, and transparency can turn a potential problem into a competitive advantage.

“People generally don’t try and buck the system when the system is giving them what they’re looking for, especially when there’s more friction for the user in taking the shadow AI approach,” says Taylor.

Morris concurs. “The goal isn’t to scare people but to make them think before they act,” he says. “If they know the approved path is easy and safe, they’ll take it.”

That’s the future CIOs should work toward: a place where people can innovate safely, feel trusted to experiment, and keep data protected because responsible AI use isn’t just compliance, it’s good business.

The 10 hottest IT skills for 2026

Gen AI has reshaped the IT skills market as companies restructure for AI strategies, and prioritize candidates and employees with AI skills. Data from Indeed’s 2025 Tech Talent Report show that the top four roles affected by AI-related restructuring include software engineers and developers, QA engineers, product managers, and project managers. Companies are now focusing their efforts and hiring budgets on professionals with skills in cybersecurity, data analytics and analysis, and building or managing AI teams.

This reprioritization of IT roles has also created a shift in the most in-demand IT skills that jobseekers will want to have on their résumés. Organizations now expect candidates to have basic prompt engineering skills at minimum, even for entry-level IT roles. And beyond that, they’re looking for IT professionals who can help oversee, implement, secure, and manage AI tools and strategies.

Data from Indeed reveal that these are the 10 IT skills whose demand grew the most between 2024 and 2025, based on how many times each appeared as a requirement in job postings year over year.

AI

It’s no surprise that AI is at the top of the list of the most in-demand skills based on growth in tech job postings since 2024. Companies are scrambling to adopt AI as it rapidly finds its way into every industry and career path. In 2024, there were just over 5 million job postings that required AI skills, and in 2025, that number grew by more than 4 million. So candidates, even those working outside of tech, are now expected to have some level of AI skills, whether it’s prompt engineering, natural language processing, or using AI for programming and coding.

Python

Python is a programming language used in several fields, including data analysis, web development, software programming, scientific computing, and building AI and ML models. It’s a versatile language used by a wide range of IT professionals such as software developers, web developers, data scientists, data analysts, ML engineers, cybersecurity analysts, cloud engineers, and more. Its widespread use in the enterprise makes it a steady entry on any in-demand skills list. In 2024, there were just over 15 million job listings requiring Python skills, and that grew to just under 18 million in 2025. Although more organizations are relying on AI for coding, they still need skilled professionals who understand key programming languages to write more complex code, refine prompts, and QA the code AI generates.

Algorithms

As more companies embrace AI and its ability to streamline coding and programming, organizations are also becoming more reliant on algorithms to help guide and dictate those processes. Algorithmic thinking requires a deep understanding of databases and programming, high-level critical thinking, and problem solving. Algorithm skills were listed as a requirement on around 180,000 job postings in 2024, which jumped to over 2 million in 2025. AI has taken over more of the entry-level work, leaving organizations looking for higher-skilled professionals who can help build and guide AI systems, and who understand how to build efficient algorithms.

CI/CD

Continuous integration and continuous delivery or deployment skills have grown in demand in the wake of AI implementation to help streamline the software development lifecycle. Professionals with CI/CD skills can handle tasks such as building tools used for automation and scripting, and have a strong understanding of concepts such as containerization, cloud integration, and automated testing. In 2024, there were just under 7 million job listings that looked for CI/CD skills and that number jumped to just over 9 million in 2025.

Google Cloud

Google Cloud is a popular platform for building, deploying, and managing an organization’s IT solutions, and Google offers several certifications that validate professional skills with, and knowledge of, the platform. Organizations have adopted the cloud in recent years, moving tools, services, and data storage to solutions hosted by Google’s cloud services. Cloud tools are critical for AI development, allowing for more versatile and agile storage solutions to host the large data sets required to train and run AI tools. Google Cloud skills were a requirement for around 3.5 million job listings in 2024, but that rose to just over 5.3 million in 2025.

AWS

Amazon Web Services is the most widely used cloud platform today. Central to cloud strategies across nearly every industry, AWS skills are in high demand as organizations look to make the most of the platform’s wide range of offerings. It’s a common skill for cloud engineers, DevOps engineers, solutions architects, data engineers, cybersecurity analysts, software developers, network administrators, and many more IT roles. In 2024, AWS skills were still popular and were listed as a requirement on just over 12 million job listings, which jumped to over 13.7 million in 2025.

Analysis Skills

AI has taken a lot of entry-level and rote work off the table for IT professionals, which has created more room for higher-level skills such as analytical thinking. Since AI still doesn’t create perfect outputs with every prompt, companies need a human eye and analytical mind to catch AI hallucinations and errors, especially when it comes to numbers and data. Analysis skills have been critical for organizations for a while now; in 2024, just over 19 million job listings required analysis skills, a number that jumped to just over 21 million in 2025.

Cybersecurity

An increased reliance on AI has created more vulnerabilities for organizations. As they take more products and services online and integrate AI, more opportunities are created for security attacks. Cybersecurity skills were a requirement on around 2.4 million job listings in 2024, which grew to just over 4 million in 2025. Whether organizations look to integrate AI into cybersecurity solutions or help prevent new sophisticated attacks that use AI to breach systems, security is a top priority for organizations as they move forward with AI.

Software troubleshooting

Although organizations are increasingly using AI to write basic code and scripts to build software tools, they still need human IT professionals to identify flaws, security issues, and other potential anomalies in the final product. Software troubleshooting skills were listed as a requirement on just over 9 million job listings in 2024, but that number grew to just under 11 million in 2025. It’s an area of IT that requires communication, problem-solving, critical thinking, and technical skills to identify software issues and troubleshoot problems for clients and customers.

Machine Learning

ML is fundamental to AI development and requires strong expertise not only in AI but also in natural language processing. Organizations are seeking professionals with ML skills to support AI initiatives, and the future of AI adoption in the enterprise. In 2024, there were around 3.7 million job listings that looked for ML skills, while that jumped to over 5 million in 2025. IT professionals with ML skills will continue to be in demand as companies embrace AI processes and look for professionals to help support and maintain AI systems.

A CIO’s 5-point checklist to drive positive AI ROI

Earlier this year, MIT made headlines with a report that found 95% of organizations are getting no return from AI — and this despite a groundbreaking $30 billion or more invested in US-based internal gen AI initiatives. So why do so many AI initiatives fail to deliver positive ROI? Because they often lack a clear connection to business value, says Neal Ramasamy, global CIO at Cognizant, an IT consulting firm. “This leads to projects that are technically impressive but don’t solve a real need or create a tangible benefit,” he says.

Technologists often follow the hype, diving headfirst into AI tests without considering business results. “Many start with models and pilots rather than business outcomes,” says Saket Srivastava, CIO of Asana, the project management application. “Teams run demos in isolation, without redesigning the underlying workflow or assigning a profit and loss owner.”

A combination of a lack of upfront product thinking, poor underlying data practices, nonexistent governance, and minimal cultural incentives to adopt AI can produce negative results. To avoid poor outcomes, then, many of the remedies boil down to better change management. “Without process change, AI speeds today’s inefficiencies,” adds Srivastava.

Here, we review five tips to manage change within an organization that CIOs can put into practice today. By following this checklist, enterprises should start to turn the tide on negative AI ROI, learn from anti-patterns, and discover which sort of metrics validate successful company-wide AI ventures.

1. Align leadership upfront by communicating business goals and stewarding the AI initiative

AI initiatives require executive sponsorship and a clear vision for how they improve the business. “Strong leadership is essential to translate AI investments into results,” says Adam Lopez, president and lead vCIO at managed IT support provider CMIT Solutions. “Executive sponsorship and oversight of AI programs, ideally at the CEO or board level, correlates with higher ROI.”

For example, at IT services and consulting company Xebia, a subgroup of executives steers its internal AI efforts. Chaired by global CIO Smit Shanker, the team includes the global CFO, head of AI and automation, head of IT infrastructure and security, and head of business operations.

Once upper leadership is assembled, accountability becomes critical. “Start by assigning business ownership,” advises Srivastava. “Every AI use case needs an accountable leader with a target tied to objectives and key results.” He recommends standing up a cross-functional PMO to define lighthouse use cases, set success targets, enforce guardrails, and regularly communicate progress.

Still, even with leadership in place, many employees will need hands-on guidance to apply AI in their daily work. “For most individuals, even if you give them the tools in the morning, they don’t know where to start,” says Orla Daly, CIO of Skillsoft, a learning management system. She recommends identifying champions across the organization who can surface meaningful use cases and share practical tips, such as how to get more out of tools like Copilot. Those with a curiosity and a willingness to learn will make the most headway, she says.

Finally, executives must invest in infrastructure, talent, and training. “Leaders must champion a data-driven culture and promote a clear vision for how AI will solve business problems,” says Cognizant’s Ramasamy. This requires close collaboration between business leaders, data scientists, and IT to execute and measure pilot projects before scaling.

2. Evolve by shifting the talent framework and investing in upskilling

Organizations must be open to shift their talent framework and redesign roles. “CIOs should adapt their talent and management strategies to ensure successful AI adoption and ROI for the organization,” says Ramasamy. “This could involve creating new roles and career paths for AI-focused professionals, such as data scientists and prompt engineers, while upskilling existing employees.”

CIOs should also view talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be assured that employees will embrace AI tools and drive success.” He adds that internal hackathons and training sessions often yield noticeable boosts in skills and confidence.

Upskilling, for instance, should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic prompt literacy and safety training, while power users require deeper workflow design and agent-building knowledge. “We took the approach of surveying the workforce, targeting enablement, and remeasuring to confirm that maturity moved in the right direction,” he says.

But assessing today’s talent framework goes beyond human skillsets. It also means reassessing the work to be done, and who, or what, performs which tasks. “It’s essential to review business processes for opportunities to refactor them, given the new capabilities that AI brings,” says Scott Wheeler, cloud practice lead at cloud consulting firm Asperitas Consulting.

For Skillsoft’s Daly, today’s AI age necessitates a modern talent management framework that artfully balances the four Bs: build, buy, borrow, and bots. In other words, leaders should view their organization as a collection of skills rather than fixed roles, and apply the right mix of in-house staff, software, partners, or automation as needed. “It’s requiring us to break things down into jobs or tasks to be done, and looking at your work in a more fragmented way,” says Daly.

For instance, her team used GitHub Copilot to quickly code a learning portal for a certain customer. The project highlighted how pairing human developers with AI assistants can dramatically accelerate delivery, raising new questions about what skills other developers need to be equally productive and efficient.

But as AI agents take over more routine work, leaders must dispel fears that AI will replace jobs outright. “Communicating the why behind AI initiatives can alleviate fears and demonstrate how these tools can augment human roles,” says Ramasamy. Srivastava agrees. “The throughline is trust,” he says. “Show people how AI removes toil and increases impact; keep humans in the decision loop and adoption will follow.”

3. Adapt organizational processes to fully capture AI benefits 

Shifting the talent framework is only the beginning. Organizations must also reengineer core processes. “Fully unlocking AI’s value often requires reengineering how the organization works,” says CMIT’s Lopez, who urges embedding AI into day-to-day operations and supporting it with continual experimentation rather than treating it as a static add-on.

To this end, one necessary adaptation is toward treating internal AI-driven workflows like products and codifying patterns across the organization, says Srivastava. “Establish product‑management rigor for intake, prioritization, and roadmapping of AI use cases, with clear owners, problem statements, and value hypotheses,” he says.

At Xebia, a governance board oversees this rigor through a three-stage tollgate process of identifying and assessing value, securing business acceptance, and then handing off to IT for monitoring and support. “A core group is responsible for organizational and functional simplification with each use case,” says Shanker. “That encourages cross-functional processes and helps break down silos.”

Similarly for Ramasamy, the biggest hurdle is organizational resistance. “Many companies underestimate the change management required for successful adoption,” he says. “The most critical shift is moving from siloed decision-making to a data-centric approach. Business processes should integrate AI outputs seamlessly, automating tasks and empowering employees with data-driven insights.”

Identifying the right areas to automate also depends on visibility. “This is where most companies fall down because they don’t have good, documented processes,” says Skillsoft’s Daly. She recommends enlisting subject-matter experts across business lines to examine workflows for optimization. “It’s important to nominate individuals within the business to ask how to drive AI into your flow of work,” she says.

Once you identify units of work common across functions that AI can streamline, the next step is to make them visible and standardize their application. Skillsoft is doing this through an agent registry that documents agentic capabilities, guardrails, and data management processes. “We’re formalizing an enterprise AI framework in which ethics and governance are part of how we manage the portfolio of use cases,” she adds.

Organizations should then anticipate roadblocks and create support structures to help users. “One strategy to achieve this is to have AI SWAT teams whose purpose is to facilitate adoption and remove obstacles,” says Asperitas’ Wheeler.

4. Measure progress to validate your return   

To evaluate ROI, CIOs must establish a pre-AI baseline and set benchmarks upfront. Leaders recommend assigning ownership around metrics such as time to value, cost savings, time savings, work handled by human agents, and new revenue opportunities generated.

“Baseline measurements should be established before initiating AI projects,” says Wheeler, who advises integrating predictive indicators from individual business units into leadership’s regular performance reviews. A common fault, he says, is only measuring technical KPIs like model accuracy, latency, or precision, and failing to link these to business outcomes, such as savings, revenue, or risk reduction.

Therefore, the next step is to define clear, measurable goals that demonstrate tangible value. “Build measurement into projects from day one,” says CMIT’s Lopez. “CIOs should define a set of relevant KPIs for each AI initiative. For example, 20% faster processing time or a 15% boost in customer satisfaction.” Start with small pilots that yield quick, quantifiable results, he adds.

One clear measurement is time savings. For instance, Eamonn O’Neill, CTO at Lemongrass, a software-enabled services provider, shares how he’s witnessed clients documenting SAP development manually, which can be an extremely time-intensive process. “Leveraging generative AI to create this documentation provides a clear reduction in human effort, which can be measured and translated to a dollar ROI quite simply,” he says.
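Translating measured hours into dollars is simple arithmetic. The sketch below shows one way to express it; the figures are placeholders, not Lemongrass data.

```python
def documentation_roi(hours_saved_per_month: float,
                      loaded_hourly_rate: float,
                      monthly_ai_cost: float) -> dict:
    """Convert measured time savings into a rough dollar ROI."""
    gross_savings = hours_saved_per_month * loaded_hourly_rate
    net_savings = gross_savings - monthly_ai_cost
    roi_pct = (net_savings / monthly_ai_cost) * 100 if monthly_ai_cost else float("inf")
    return {"gross": gross_savings, "net": net_savings, "roi_pct": roi_pct}

# Illustrative numbers only: 120 hours of manual SAP documentation avoided,
# a $95/hour loaded rate, and $2,000/month in tooling costs.
print(documentation_roi(120, 95, 2_000))  # {'gross': 11400.0, 'net': 9400.0, 'roi_pct': 470.0}
```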

Reduction of human labor per task is another key signal. “If the goal is to reduce the number of support desk calls handled by human agents, leaders should establish a clear metric and track it in real time,” says Ram Palaniappan, CTO at full-stack tech services provider TEKsystems. He adds that new revenue opportunities may also surface through AI adoption.

Some CIOs are monitoring multiple granular KPIs across individual use cases and adjusting strategies based on results. Asana’s Srivastava, for instance, tracks engineering efficiency by monitoring cycle time, throughput, quality, cost per transaction, and risk events. He also measures the percentage of agent-assisted runs, active users, human-in-the-loop acceptance, and exception escalations. Reviewing this data, he says, helps tune prompts and guardrails in real time.
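Deriving adoption metrics like these from run logs is straightforward. The sketch below uses hypothetical field names, not Asana’s actual telemetry.

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    """One logged run; field names are illustrative, not a real telemetry schema."""
    agent_assisted: bool   # did an AI agent participate in this run?
    human_accepted: bool   # did the human in the loop accept the output?
    escalated: bool        # was the run escalated as an exception?

def adoption_metrics(runs: list[AgentRun]) -> dict[str, float]:
    assisted = [r for r in runs if r.agent_assisted]
    return {
        "agent_assisted_pct": 100 * len(assisted) / len(runs) if runs else 0.0,
        "hitl_acceptance_pct": 100 * sum(r.human_accepted for r in assisted) / len(assisted) if assisted else 0.0,
        "exception_escalations": float(sum(r.escalated for r in runs)),
    }

# Example: three runs, two agent-assisted, one accepted by a human reviewer.
runs = [AgentRun(True, True, False), AgentRun(True, False, True), AgentRun(False, False, False)]
print(adoption_metrics(runs))
```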

The resounding point is to set metrics early and avoid the anti-pattern of never tracking the signals or value gained. “Measurement is often bolted on late, so leaders can’t prove value or decide what to scale,” says Srivastava. “The remedy is to begin with a specific mission metric, baseline it, and embed AI directly in the flow of work so people can focus on higher-value judgment.”

5. Govern your AI culture to avoid breaches and instability

Gen AI tools are now commonplace, yet many employees still lack training to use them safely. For instance, nearly one in five US-based employees has entered login credentials into AI tools, according to a 2025 study from SmallPDF. “Good leadership involves establishing governance and guardrails,” says Lopez. That includes setting policies to prevent sensitive, proprietary “secret sauce” data from being fed into tools like ChatGPT.

Heavy AI use also widens the enterprise attack surface. Leadership must now seriously consider things like security vulnerabilities in AI-driven browsers, shadow AI use, and LLM hallucinations. As agentic AI gets more involved in business-critical processes, proper authorization and access controls are essential to prevent exposure of sensitive data or malicious entry into IT systems.

From a software development standpoint, the potential for leaking passwords, keys, and tokens through AI coding agents is very real. Engineers have jumped at MCP servers to empower AI coding agents with access to external data, tools, and APIs, yet research from Wallarm found a 270% rise in MCP-related vulnerabilities from Q2 to Q3 2025, alongside surging API vulnerabilities.
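One lightweight mitigation is to scan agent-generated code for credential-shaped strings before it is committed. The sketch below is illustrative only; purpose-built scanners such as gitleaks or trufflehog are far more thorough.

```python
import re
import sys

# Naive, illustrative patterns only; dedicated scanners cover far more cases.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def scan(text: str) -> list[str]:
    """Return any credential-shaped strings found in the given text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

if __name__ == "__main__":
    findings = scan(sys.stdin.read())
    if findings:
        print(f"Blocked: {len(findings)} credential-like string(s) found", file=sys.stderr)
        sys.exit(1)
```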

Neglecting agent identity, permissions, and audit trails is a common trap that CIOs often stumble into with enterprise AI, says Srivastava. “Introduce agent identity and access management so agents inherit the same permissions and auditability as humans, including logging and approvals,” he says.
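In practice, that can start with something as simple as wrapping every agent action in a permission check that writes an audit record. The sketch below uses hypothetical agent and permission names and is not tied to any particular IAM product.

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Agents get their own identities and explicit grants, just like human users.
PERMISSIONS = {
    "support-triage-agent": {"read:tickets", "write:ticket_comments"},
}

def authorized_call(agent_id: str, permission: str, action, *args, **kwargs):
    """Run `action` only if `agent_id` holds `permission`, and audit the attempt either way."""
    allowed = permission in PERMISSIONS.get(agent_id, set())
    audit_log.info("%s agent=%s perm=%s allowed=%s",
                   datetime.datetime.now(datetime.timezone.utc).isoformat(),
                   agent_id, permission, allowed)
    if not allowed:
        raise PermissionError(f"{agent_id} lacks {permission}")
    return action(*args, **kwargs)

# Example: the triage agent may comment on tickets but holds no other grants.
authorized_call("support-triage-agent", "write:ticket_comments", print, "added summary comment")
```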

Despite the risks, oversight remains weak. An AuditBoard report found that while 82% of organizations are deploying AI, only 25% have fully implemented governance programs. With data breaches now averaging nearly $4.5 million each, according to IBM, and IDC reporting organizations that build trustworthy AI are 60% more likely to double the ROI of AI projects, the business case for AI governance is crystal clear.

“Pair ambition with strong guardrails: clear data lifecycle and access controls, evaluation and red‑teaming, and human‑in‑the‑loop checkpoints where stakes are high,” says Srivastava. “Bake security, privacy, and data governance into the SDLC so ship and secure move together — no black boxes for data lineage or model behavior.”

It’s not magic

According to BCG, only 22% of companies have advanced their AI beyond the POC stage, and just 4% are creating substantial value. With these sobering statistics in mind, CIOs shouldn’t set unrealistic expectations for getting a return.

Finding ROI from AI will require significant upfront effort and fundamental changes to organizational processes. As Mastercard’s CTO for operations George Maddaloni noted in a recent interview with Runtime, gen AI app adoption is largely a matter of change management.

The pitfalls with AI are nearly endless: it’s common for organizations to chase hype rather than value, launch without a clear data strategy, scale too quickly, and implement security as an afterthought. Many AI programs also lack the executive sponsorship or governance to get where they need to be. It’s equally easy to buy into vendor hype about productivity gains and overspend, or to underestimate the difficulty of integrating AI platforms with legacy IT infrastructure.


Looking ahead, to maximize AI’s business impact, leaders recommend investing in the data infrastructure and platform capabilities needed to scale, and homing in on one or two high-impact use cases that remove human toil and clearly drive revenue or efficiency.

Grounding AI fervor in core tenets and understanding the business strategy you’re aiming for is necessary to inch toward ROI. Without sound leadership and clear objectives, AI remains only a fascinating technology whose payoff stays just out of reach.

Securing AI-Generated Code in Enterprise Applications: The New Frontier for AppSec Teams 


AI-generated code is reshaping software development and introducing new security risks. Organizations must strengthen governance, expand testing and train developers to ensure AI-assisted coding remains secure and compliant.

The post Securing AI-Generated Code in Enterprise Applications: The New Frontier for AppSec Teams  appeared first on Security Boulevard.

The incredible shrinking shelf life of IT skills

FinOps skills are in high demand today.

With organizations fearful of AI initiatives ballooning their cloud costs, the ability to manage cloud environments in a financially efficient way is earning IT pros with FinOps skills a premium of late.

But Ankur Anand, CIO of Harvey Nash, an IT recruitment and outsourcing services provider, wonders whether those skills will be as hot in another year or two, as artificial intelligence and automation become more reliably capable of handling FinOps tasks.

The idea that demand for such skills could rise and fall so quickly is not unique to FinOps, Anand says; it’s applicable to many IT skills today.

“The shelf life of IT skills back in the ’70s or ’80s was a decade or more. Today it can be less than two years,” Anand says.

Anand is not an outlier in making such assertions. The World Economic Forum (WEF) and other thought leaders say the half-life of many workplace skills has shrunk from decades to closer to seven years. A 2023 IBM study found that executives estimate that 40% of their workforce will need to reskill as a result of implementing AI and automation over the next three years. And a 2025 WEF report says workers can expect that 39% of their existing skill sets will be transformed or become outdated between 2025 and 2030.

IT workers have seen the half-life of IT skills compressed even more dramatically, with researchers saying some skills today go from hot to not in less than two years — sometimes mere months.

It’s putting a lot of pressure on IT teams. As Anand says, “Technology is developing faster than tech workers can upskill.”

Ever-quickening churn in the IT skills market is upending more than individuals’ career plans, too. It is impacting the entire IT function and the organization as a whole. That in turn is forcing CIOs, HR leaders, and other executives to devise strategies to create an environment where workers are capable of reinvention at a rapid clip.

“IT has a transformation almost every 18 months, and the skills needed in IT are impacted by that. It doesn’t mean skills become obsolete, but it impacts how fluid IT employees need to be,” says Heather Leier-Murray, a research director in the CIO practice at Info-Tech Research Group.

Transforming which IT skills are relevant

In its IT Talent Trends 2025 report, Info-Tech asserted that “from a technology standpoint, functional skills are becoming outdated every 2.5 years.” It noted that “mature organizations are more likely to see the need to change most if not all their skills. These organizations are also 2.5 times more likely to see AI and ML skills as critical. It will be these IT organizations that have best prepared themselves to deliver on the needs and objectives of the future.”

Furthermore, Info-Tech found that 95% of IT professionals surveyed for the report believe at least some skills will need to change by 2030, with 28% saying most skills need to change and 17% saying all skills need to change.

The pace of technology innovation, which itself has sped up over the decades, is driving the rapid turnover of needed IT skills, says James Stanger, chief technology evangelist at IT training and certification organization CompTIA.

“For example, some folks I know who work in the healthcare industry have noticed that as they create cloud-specific solutions, they’re seeing vendor tools change on an average of one month. Yes, one month,” he says.

AI and automation also have a big impact on what IT skills are needed and which become outdated, IT leaders say. AI and automation are handling a growing number of repetitive tasks that even just a year or two ago had required skilled workers to do. Looking forward, AI and automation will take on even more skilled work, further transforming which IT skills are relevant and which are no longer needed.

“Manual service desk operations, infrastructure management, and deep ERP configuration used to be core competencies and safe skills to bank on, looking out three to six years. Since automation and AI are advancing so quickly, those same skills might only be relevant for the next one to three years before they’re completely transformed by technology,” says Kellie Romack, chief digital information officer of tech company ServiceNow.

Fluid, agile, adaptive workers needed

To be clear, neither Romack nor other IT leaders are saying that IT jobs are becoming obsolete; there is and will remain a need for developers, engineers, architects, security pros, and the like. Rather, they say the functional skills those workers need most in their day-to-day roles are changing faster than ever before.

CIOs and IT advisers also say the shortening shelf life of skills is not experienced universally, as some organizations still have a lot of legacy tech in place.

Data from the 2025 Tech Salary Report from Dice, a job-searching platform for tech professionals, hints at these dual realities. The report found that skills related to AI, data, and cloud engineering saw the fastest growth in salaries. But some entries on its list of fastest growing tech salaries by skill date back decades. The skills range from natural language processing and document databases, which take the No. 1 and No. 2 spots, to COBOL at No. 7 and Ruby at No. 10.

IT leaders say they can’t predict which of today’s hot functional skills might have the staying power of Ruby (created in 1993) or COBOL (created in 1959). Nor can they say which skills will fade away months from now as technology advances.

Instead, they stress the need for CIOs and their teams to learn how to thrive in what the WEF called “skill instability.”

The days when an IT worker could ensure career longevity by specializing in and sticking with one skill — the Python programming language, for example — are over, CompTIA’s Stanger says.

“Certain skills will come up very quickly and then go away very quickly, so now that person has to be seen as someone who can build up skills quickly,” he adds.

Info-Tech Research Group’s Leier-Murray says CIOs must free up time for their staffers to upskill and provide more coaching to their team members to ensure they keep pace with the work demands of a modern IT shop.

She and others advise CIOs to hire workers with a growth mindset, or to cultivate one in existing staffers.

The IT department at the University of Phoenix is taking such steps, says Ty Jones, the department’s principal agile people leader, a role that CIO Jamie Smith recently created to help prepare staffers for whatever the future of work requires.

“The way that everybody is working is continuously being redefined,” Jones says.

She says IT and HR leaders in September rolled out a list of competencies they believe IT and data workers must have to succeed in a field where skills quickly come and go. Those competencies are creative problem-solving, leadership, ethical use of AI, adaptability, curiosity, grit, communication, technical fluency, future trends, ownership, and innovation.

IT leadership at the university is helping workers develop these competencies through coaching, Jones says, and is allotting them time during their work schedules to master new skills.

“Our engineers and technical teams need to be ready to adopt any emerging skills, and they’re going to need to continue to regenerate,” Jones says. “So we’re emphasizing the ability to adjust and be fluid. We need individuals with curiosity and the ability to learn.”

Inside the product mindset that runs 7-Eleven

In 2016, 7-Eleven began a digital transformation aimed at redefining convenience. The starting point was loyalty. “Step one was to build a product discipline, bring the technology in house, and reduce reliance on third parties,” says Scott Albert, VP and head of store and enterprise products.

Two years later, the Texas-based retailer reapplied the product playbook, now powering store systems across more than 13,000 US and Canadian locations. “We moved from projects — start date, end date — to product: continuous improvement and iteration,” Albert says. “From outputs to outcomes, co-owned with design and engineering.”

Albert knows the terrain. A company veteran who cut his teeth in operations, he led product for loyalty and now oversees digital product for store systems, fuel, restaurant concepts, and merchandising, evidence of how far the model has scaled.

Setting the foundation

The idea was straightforward but the shift wasn’t. “It was tough early on because it meant change,” Albert says. “The business was used to saying, ‘I need X.’ Often that wasn’t the real problem. Our job was to get underneath, understand the problem, design a solution for now and the future, and then iterate.”

Solving big problems takes several ingredients, including customer research, business process knowledge, data, and technology, so it’s natural that product teams are cross-functional. But that structure can also create competing priorities if not managed correctly. While the setting is convenience retail, the lesson applies to any CIO shifting from project-based delivery to product-driven transformation. “Success depends not on org charts, but on cross-functional trust, buy-in, and commitment,” he says.

That structure set the foundation, but the real breakthroughs came from applying product thinking to daily work.

Product thinking in action

“For me and my team, the customer is the store associate,” Albert says. That focus shaped priorities to remove low-value tasks, surface just-in-time insights, and let systems work for people, not the other way around.

The team learned this firsthand on midnight store walks. In one New York City visit, they noticed a new associate glued to her phone. “We thought she was distracted,” Albert says. “Turns out she’d recorded her trainer so she could remember.” That single observation sparked a redesign of training to move job aids and how-to videos from back-room PCs to mobile devices on the floor, embedded in the flow of work.

The same product instinct of watching users, identifying friction, and iterating has carried into 7-Eleven’s AI initiatives. AI-assisted ordering, for example, reduced what was once up to 30 hours a week of manual work to under an hour a day, freeing up associates to focus on customers. At scale, those savings add up to more than 13 million hours reclaimed annually, with test-and-learn pilots tying the changes to about $340 million in incremental sales.
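A back-of-the-envelope check, using only the per-store figures above and the roughly 13,000-store footprint mentioned earlier, shows the headline number is plausible; the assumptions are spelled out in the comments.

```python
stores = 13_000                 # US and Canadian locations cited earlier in the article
hours_before_per_week = 30      # manual ordering effort per store (the upper bound cited)
hours_after_per_week = 1 * 7    # "under an hour a day," taken at face value

saved_per_store_per_year = (hours_before_per_week - hours_after_per_week) * 52
total_hours_reclaimed = saved_per_store_per_year * stores
print(f"{total_hours_reclaimed:,} hours/year")  # about 15.5 million, consistent with "more than 13 million"
```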

The back office has been transformed as well. After migrating store systems to the cloud with its 7-BOSS platform, 7-Eleven layered in “quick cards” that surface AI-generated insights and let associates act in three clicks or less. A clustering model identifies lookalike stores by sales mix, location type, even seasonality, and pushes tailored assortment recommendations. “With three clicks, you can add an item, forecasting kicks in, and delivery happens in days,” Albert says.
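7-Eleven hasn’t published the details of its model, but clustering stores on normalized features is a standard way to find lookalikes. The sketch below, using scikit-learn’s k-means on invented feature values, is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-store features: share of sales from three categories,
# an urban/suburban flag, and a seasonality index. All values are invented.
store_features = np.array([
    [0.42, 0.31, 0.27, 1, 0.8],
    [0.40, 0.33, 0.27, 1, 0.7],
    [0.10, 0.25, 0.65, 0, 0.3],
    [0.12, 0.28, 0.60, 0, 0.4],
])

X = StandardScaler().fit_transform(store_features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stores sharing a label are "lookalikes": assortment recommendations that work
# in one store in a cluster can be pushed to the others.
print(labels)
```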

Together, these stories trace a clear pattern of observing the customer (in this case the store personnel), solving for their pain points, then amplifying the solution with data and AI. It’s product thinking at work.

Operating like a product company

Behind the scenes, the mechanics mirror digital natives. Teams run in pods with product, engineering, and design as a three-legged stool. Quarterly planning sets direction, but roadmaps flex. “Tell me everything you’ll do next year — that was the old model,” Albert says. “Now we focus on quarters, but sometimes that’s too long. We plan, then adapt.”

Release cadence has accelerated as well, from two or three big bangs a year to monthly releases.

Part of the cultural shift is ongoing funding for work that never ends. “There’s no such thing as done in product,” he says. “We’re on the fifth iteration of our forecasting model. We’ll keep improving.”

Start small, measure hard

Albert’s advice to other tech executives: start small. “Find a problem that matters, build a cross-functional team, measure success, and validate results,” he says. “Then add a second team, a third, and you’re off.”

And above all, measure. “Pick metrics backed by data so no one can debate the results,” he adds.

Nearly 10 years after its first loyalty decision, 7-Eleven’s product mindset now extends far beyond consumer apps. The store itself has become a living product, updated monthly, informed by data, and built around the associate.

For Albert, the real measure of success is to make the system work for the associate, so they can delight customers. “It’s the same product discipline, now applied to every corner of the store, and it’s redefining what convenience looks like at scale,” he says.

Hackaday Links: November 16, 2025

Hackaday Links Column Banner

We make no claims to be an expert on anything, but we do know that rule number one of working with big, expensive, mission-critical equipment is: Don’t break the big, expensive, mission-critical equipment. Unfortunately, though, that’s just what happened to the Deep Space Network’s 70-meter dish antenna at Goldstone, California. NASA announced the outage this week, but the accident that damaged the dish occurred much earlier, in mid-September. DSS-14, as the antenna is known, is a vital part of the Deep Space Network, which uses huge antennas at three sites (Goldstone, Madrid, and Canberra) to stay in touch with satellites and probes from the Moon to the edge of the solar system. The three sites are located roughly 120 degrees apart on the globe, which gives the network full coverage of the sky regardless of the local time.

Losing the “Mars Antenna,” as DSS-14 is informally known, is a blow to the DSN, a network that was already stretched to the limit of its capabilities, and is likely to be further challenged as the race back to the Moon heats up. As for the cause of the accident, NASA explains that the antenna was “over-rotated, causing stress on the cabling and piping in the center of the structure.” It’s not clear which axis was over-rotated, but based on some specs we found that say the azimuth travel range is ±265 degrees “from wrap center,” we suspect it was the vertical axis in the base. It sounds like the azimuth went past that limit, which wrapped the swags of cables and hoses that run the antenna tightly, causing the damage. We’d have thought there would be a physical stop of some sort to prevent over-rotation, but then again, running a structure that big up against a stop would be very much an “irresistible force, immovable object” scenario. Here’s hoping they can get DSS-14 patched up quickly and back in service.

Speaking of having a bad day on the job, we have to take pity on these Russian engineers for the “demo hell” they went through while revealing the country’s first AI-powered humanoid robot. AIdol, as the bot is known, seemed to struggle from the start, doddering from behind some curtains like a nursing home patient with a couple of nervous-looking fellows flanking it. The bot paused briefly before continuing its drunk-walk, pausing again to deliver a somewhat feeble wave to the crowd before entering the terminal stumble and face-plant part of the demo. The bot’s attendants quickly dragged it away, leaving a pile of parts on the stage while more helpers tried — and failed — to deploy a curtain to hide the scene. It was a pretty sad scene to behold, made worse by the choice of walk-out music (Bill Conti’s iconic “Gonna Fly Now,” better known as the theme from Rocky).

We just noticed that pretty much everything we have to write about this week has a “bad day at work” vibe to it, so to continue on with that theme, witness this absolutely disgusting restoration of a GPU that spent way too many years in a smoker’s house. The card, an Asus 9800GT Matrix, is from 2008, so it may have spent the last 17 years getting caked with tar and nicotine, along with a fair amount of dust and perhaps cat hair, from the look of it. Having spent way too much time cleaning TVs similarly caked with grossness most foul, we couldn’t stomach watching the video of the restoration process, but it’s available in the article if you dare.

And the final entry in our “So you think your job sucks?” roundup, behold the poor saps who have to generate training data for AI-powered domestic robots. The story details the travails of Naveen Kumar, who spends his workday on simple chores such as folding towels, with the twist of doing it with a GoPro strapped to his forehead to capture all the action. The videos are then sent to a U.S. client, who uses them to develop a training model so that humanoid robots can eventually copy the surprisingly complex physical movements needed to perform such a mundane task. Training a robot is all well and good, but how about training them how to move around inside a house made for humans? That’s where it gets really creepy, as an AI startup has partnered with a big real estate company to share video footage captured from those “walk-through” videos real estate agents are so fond of. So if your house has recently been on the market, there’s a non-zero chance that it’s being used to train an army of domestic robots.

And finally, we guess this one fits the rough-day-at-work theme, but only if your job is being a European astronaut, who may someday be chowing down on protein powder made from their own urine. The product is known as Solein — sorry, but have they never seen the movie Soylent Green? — and is made via a gas fermentation process using microbes, electricity, and air. The Earth-based process uses ammonia as a nitrogen source, but in orbit or on long-duration deep-space missions, urea harvested from astronaut pee would be used instead. There’s no word on what Solein tastes like, but from the look of it, and considering the source, we’d be a bit reluctant to dig in.
