How to optimize LLMs for enterprise success

Large language models (LLMs) have rapidly become a cornerstone of modern enterprise operations, powering everything from customer support chatbots to advanced analytics platforms. While these models offer unparalleled capabilities, they also pose significant challenges for organizations—mainly their size, resource demands and sometimes unpredictable behaviour. Enterprises often grapple with high operational costs, latency issues and the risk of generating inaccurate or irrelevant outputs (commonly referred to as hallucinations). To truly harness the potential of LLMs, businesses need practical strategies to optimise these models for efficiency, reliability and accuracy. One key technique that has gained traction is model distillation.

Understanding model distillation

Model distillation is a method used to transfer the knowledge and capabilities of a large, complex model (the teacher) into a smaller, more efficient model (the student). The goal is to retain the teacher’s performance while making the student model lighter, faster and less resource-intensive. Distillation works by training the student to mimic the outputs or internal representations of the teacher, essentially “distilling” the essence of the larger model into a compact form.

Why is this important for enterprises? Running massive LLMs can be costly and slow, especially in environments where quick responses and scalability are crucial. Model distillation provides a means to deploy powerful AI solutions without the heavy infrastructure burden, making it a practical choice for businesses seeking to strike a balance between performance and efficiency.

How model distillation works

  • Select the teacher model: Begin with a large, pre-trained language model that performs well on your target tasks.
  • Prepare the student model: Design a smaller, more efficient model architecture that will learn from the teacher.
  • Distillation training: The student is trained using the teacher’s outputs or “soft labels,” learning to replicate its behaviour as closely as possible.
  • Evaluation and fine-tuning: Assess the student’s performance and, if necessary, fine-tune it to ensure it meets accuracy and reliability requirements.

Throughout this process, the student model becomes adept at handling enterprise tasks with far less computational overhead, making it ideal for real-time applications.
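To make the distillation-training step concrete, here is a minimal, illustrative PyTorch-style sketch of the soft-label approach. The teacher and student models, the data loader and the hyperparameters (temperature, alpha) are placeholders chosen for illustration, not a prescription for any particular framework:

```python
# Minimal sketch of knowledge-distillation training (illustrative only).
# Assumes `teacher` and `student` are PyTorch language models returning logits,
# and `loader` yields tokenized batches; all names here are hypothetical.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label KL term against the teacher with hard-label CE."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2, as in the original Hinton et al. formulation.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))
    return alpha * kd + (1.0 - alpha) * ce

def train_student(student, teacher, loader, optimizer, device="cuda"):
    teacher.eval()  # the teacher only provides targets; it is never updated
    student.train()
    for batch in loader:
        input_ids = batch["input_ids"].to(device)
        labels = batch["labels"].to(device)
        with torch.no_grad():
            teacher_logits = teacher(input_ids).logits
        student_logits = student(input_ids).logits
        loss = distillation_loss(student_logits, teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```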

Model distillation in practice

Imagine a financial services company that uses an LLM to generate investment reports. The original model is highly accurate but slow and expensive to run. By applying model distillation, the company trains a smaller student model that produces nearly identical reports with a fraction of the resources. This distilled model can now deliver insights in real-time, enabling analysts to make faster decisions while cutting operational costs.

In another scenario, a healthcare provider deploys an LLM-based assistant to help doctors access patient information and medical guidelines. The full-scale model offers excellent recommendations but struggles with latency on edge devices. After distillation, the student model fits comfortably on hospital servers, providing instant responses and maintaining data privacy.

Industrial use cases: Real-time scenarios across sectors

  • Financial services: Distilled models power fraud detection systems, delivering rapid alerts without draining computational resources.
  • Healthcare: Hospitals use distilled LLMs for triaging patient queries and supporting clinical decisions at the point of care.
  • Customer service: Call centres deploy compact chatbots trained via distillation to handle large volumes of queries efficiently.
  • Retail: E-commerce platforms run product recommendation engines using distilled models to personalise shopping experiences in real time.

Framework for model distillation: Optimizing LLMs for enterprises

To systematically optimise LLMs for enterprise use, a robust framework for model distillation is essential. Here’s a stepwise approach designed for IT professionals:

  • Assessment: Identify the target tasks and performance benchmarks required for your business operations.
  • Teacher model selection: Choose a high-performing LLM as your teacher, ensuring it excels at your chosen tasks.
  • Student model design: Architect a smaller model that can be trained efficiently while retaining core capabilities.
  • Distillation training: Use the teacher’s outputs to guide the student, focusing on both output accuracy and internal representations.
  • Validation: Rigorously test the student model against real-world data to spot hallucinations and inaccuracies.
  • Iterative fine-tuning: Continuously improve the student model by refining its training data and adjusting its architecture as needed.
  • Deployment: Integrate the distilled model into your enterprise systems, monitoring performance and updating as required.

How the framework reduces hallucinations and improves accuracy

A key challenge with LLMs is their tendency to “hallucinate”—generating plausible-sounding but incorrect information. The distillation framework addresses this by incorporating validation steps that test the student model against curated datasets and real-world scenarios. By exposing the student to diverse data during training and fine-tuning, enterprises can reduce the risk of hallucinations and ensure outputs remain reliable. Furthermore, ongoing monitoring and iterative updates help maintain model accuracy as business needs evolve.
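As a rough illustration of what such a validation step might look like, the sketch below scores the student model's answers against a curated reference set and flags low-similarity cases for expert review. The student_answer callable, the dataset format and the similarity threshold are assumptions made purely for the example:

```python
# Illustrative validation harness: compare the distilled (student) model's
# answers against a curated reference set and flag likely hallucinations.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; a real harness might use an LLM judge instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def validate(student_answer, curated_set, threshold=0.7):
    """curated_set: list of {'prompt': ..., 'reference': ...} dicts."""
    flagged = []
    for case in curated_set:
        answer = student_answer(case["prompt"])
        if similarity(answer, case["reference"]) < threshold:
            flagged.append({"prompt": case["prompt"],
                            "answer": answer,
                            "reference": case["reference"]})
    rate = len(flagged) / max(len(curated_set), 1)
    return rate, flagged  # route flagged cases to domain experts for review
```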

Practical considerations and implementation tips

  • Customise training data: Use enterprise-specific datasets during distillation to align the model with your organizational context.
  • Monitor model outputs: Regularly review the student’s responses to catch emerging issues early.
  • Plan for scale: Design the distilled model architecture to support future growth and integration with other systems.
  • Collaborate across teams: Involve domain experts during validation to ensure the model meets real-world requirements.

Benefits for large enterprises

For large organizations, model distillation offers several compelling advantages:

  • Cost savings: Reduced computational demands lead to lower infrastructure and energy costs.
  • Improved reliability: Streamlined models respond faster and are easier to maintain, ensuring consistent service.
  • Scalability: Lightweight models can be deployed across multiple platforms and locations, supporting enterprise expansion.
  • Enhanced accuracy: The framework’s focus on validation and fine-tuning helps minimise errors and hallucinations.

Conclusion

Model distillation stands out as a key technique for making large language models fit for enterprise operations. By transferring knowledge from complex models to efficient students, businesses can enjoy the best of both worlds—powerful AI capabilities without the heavy resource burden. As enterprises continue to adopt AI at scale, model distillation will play a pivotal role in ensuring solutions are cost-effective, reliable and tailored to real-world needs. IT professionals seeking to maximise the value of LLMs should consider integrating distillation frameworks into their optimization strategies, paving the way for smarter, more agile enterprise AI.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

This article is published as part of the Foundry Expert Contributor Network.

From static workflows to intelligent automation: Architecting the self-driving enterprise

I want you to think about the most fragile employee in your organization. They don’t take coffee breaks, they work 24/7 and they cost a fortune to recruit. But if a button on a website moves a few pixels to the right, this employee has a complete mental breakdown and stops working entirely.

I am talking, of course, about your RPA (robotic process automation) bots.

For the last few years, I have observed IT leaders, CIOs and business leaders pour millions into what we call automation. We’ve hired armies of consultants to draw architecture diagrams and map out every possible scenario. We’ve built rigid digital train tracks, convinced that if we just laid enough rail, efficiency would follow.

But we didn’t build resilience. We built fragility.

As an AI solution architect, I see the cracks in this foundation every day. The strategy for 2026 isn’t just about adopting AI; it is about attacking the fragility of traditional automation. The era of deterministic, rule-based systems is ending. We are witnessing the death of determinism and the rise of probabilistic systems — what I call the shift from static workflows to intelligent automation.

The fragility tax of old automation

There is a painful truth we need to acknowledge: Your current bot portfolio is likely a liability.

In my experience and architectural practice, I frequently encounter what I call the fragility tax. This is the hidden cost of maintaining deterministic bots in a dynamic world. The industry rule of thumb — and one that I see validated in budget sheets constantly — is that for every $1 you spend on BPA licenses, you end up spending $3 on maintenance.

Why? Because traditional BPA is blind. It doesn’t understand the screen it is looking at; it only understands coordinates (x, y). It doesn’t understand the email it is reading; it only scrapes for keywords. When the user interface updates or the vendor changes an invoice format, the bot crashes.

I recall a disaster with an enterprise client who had an automated customer engagement process. It was a flagship project. It worked perfectly until the third-party system provider updated their solution. The submit button changed from green to blue. The bot, which was hardcoded to look for green pixels at specific coordinates, failed silently.

But fragility isn’t just about pixel colors. It is about the fragility of trust in external platforms.

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn’t immune. In September 2024, OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token.

Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of their neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don’t control, you inherit their vulnerabilities. If you had a standard bot programmed to retweet @OpenAINewsroom, you would have automatically amplified a scam to your entire customer base.

The old way of scripting cannot handle this volatility. We spent years trying to predict the future and hard-code it into scripts. But the world is too chaotic for scripts. We need architecture that can heal itself.

The architectural pivot: From rules to goals

To capture the value of intelligent automation (IA), you must frame it as an architectural paradigm shift, not just a software upgrade. We are moving from task automation (mimicking hands) to decision automation (mimicking brains).

When I architect these systems, I look not only for rules but also for goals.

In the old paradigm, we gave the computer a script: Click button A, then type text B, then wait 5 seconds. In the new paradigm, we use cognitive orchestrators. We give the AI a goal and let it work out the steps itself.

The difference is profound. If the submit button turns blue, a goal-based system using a large language model (LLM) and vision capabilities sees the button. It understands that despite the color change, it is still the submission mechanism. It adjusts its own path to achieving the goal.

Think of it like the difference between a train and an off-road vehicle. A train is fast and efficient, but it requires expensive infrastructure (tracks) and cannot steer around a rock on the line. Intelligent automation is the off-road vehicle. It uses sensors to perceive the environment. If it sees a rock, it doesn’t derail; it decides to go around it.

This isn’t magic; it’s a specific architectural pattern. The tech stack required to support this is fundamentally different from what most CIOs currently have installed. It is no longer just a workflow engine. The new stack requires three distinct components working in concert:

  1. The workflow engine: The hands that execute actions.
  2. The reasoning layer (LLM): The brain that figures out the steps dynamically and handles the logic.
  3. The vector database: The memory that stores context, past experiences and embedded data to reduce hallucinations.

By combining these, we move from brittle scripts to resilient agents.
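A conceptual sketch of how these three components might fit together is shown below. The class and method names (llm.plan, memory.search, engine.execute) are hypothetical placeholders, not the API of any particular product:

```python
# Conceptual sketch of the three-layer stack described above: a workflow
# engine (hands), an LLM reasoning layer (brain) and a vector store (memory).
from dataclasses import dataclass

@dataclass
class Action:
    name: str       # e.g. "click_submit" or "escalate_to_human"
    arguments: dict

class IntelligentAgent:
    def __init__(self, llm, vector_store, workflow_engine):
        self.llm = llm                  # reasoning layer
        self.memory = vector_store      # context and past experience
        self.engine = workflow_engine   # executes concrete actions

    def run(self, goal: str, observation: str) -> None:
        # 1. Memory: ground the decision in relevant past context.
        context = self.memory.search(observation, top_k=5)
        # 2. Brain: ask the LLM to plan the next step toward the goal.
        action: Action = self.llm.plan(goal=goal, observation=observation,
                                       context=context)
        # 3. Hands: execute, then store the outcome for future grounding.
        result = self.engine.execute(action)
        self.memory.add(f"goal={goal} action={action.name} result={result}")
```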

Breaking the unstructured data barrier

The most significant limitation of the old way was its inability to handle unstructured data. We know that roughly 80% of enterprise data is unstructured, locked away in PDFs, email threads, Slack and MS Teams chats, and call logs. Traditional business process automation cannot touch this. It requires structured inputs: rows and columns.

This is where the multi-modal understanding of intelligent automation changes the architecture.

I urge you to adopt a new mantra: Data entry is dead. Data understanding is the new standard.

I am currently designing architectures where the system doesn’t just move a PDF from folder A to folder B. It reads the PDF. It understands the sentiment of the email attached to it. It extracts the intent from the call log referenced in the footer.

Consider a complex claims-processing scenario. In the past, a human had to manually review a handwritten accident report, cross-reference it with a policy PDF and check a photo of the damage. A deterministic bot is useless here because the inputs are never the same twice.

Intelligent automation changes the equation. It can ingest the handwritten note (using OCR), analyze the photo (using computer vision) and read the policy (using an LLM). It synthesizes these disparate, messy inputs into a structured claim object. It turns chaos into order.
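Below is a toy sketch of that synthesis step. The ocr, vision and llm_extract helpers stand in for whichever OCR, computer-vision and LLM services an enterprise actually uses; the field names are invented for illustration:

```python
# Illustrative sketch of turning messy claim inputs into a structured object.
from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    incident_summary: str
    damage_assessment: str
    policy_clauses: list[str]
    confidence: float

def build_claim(report_image, damage_photo, policy_pdf,
                ocr, vision, llm_extract) -> Claim:
    report_text = ocr(report_image)            # handwritten accident report
    damage = vision(damage_photo)              # e.g. "rear bumper, moderate"
    clauses = llm_extract(policy_pdf,
                          question="Which clauses apply to this incident?")
    summary = llm_extract(report_text,
                          question="Summarize the incident in two sentences.")
    claimant = llm_extract(report_text, question="Who is the claimant?")
    return Claim(claimant=claimant,
                 incident_summary=summary,
                 damage_assessment=damage,
                 policy_clauses=clauses,
                 confidence=0.0)  # set later by a downstream scoring step
```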

This is the difference between digitization (making it electronic) and digitalization (making it intelligent).

Human-in-the-loop as a governance pattern

Whenever we present this self-driving enterprise concept to clients, the immediate reaction is “You want an LLM to talk to our customers?” This is a valid fear. But the answer isn’t to ban AI; it is to architect confidence-based routing.

We don’t hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting.

This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don’t always stay trusted.

Revisiting the security incident I mentioned earlier: If you had a fully autonomous sentient loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute.

A probabilistic, governed agent says: Signal comes from a trusted source, but the content deviates 99% from their semantic norm (crypto scam vs. tech news). The confidence score is low. Alert human.

That is the architectural shift we need.

  • Scenario A: The AI is 99% confident it understands the invoice, the vendor matches the master record and the semantics align with past behavior. The system auto-executes.
  • Scenario B: The AI is only 70% confident because the address is slightly different, the image is blurry or the request seems out of character (like the hacked tweet example). The system routes this specific case to a human for approval.

This turns automation into a partnership. The AI handles the mundane, high-volume work and your humans handle the edge cases. It solves the black box problem that keeps compliance officers awake at night.
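In code, the routing rule can be as simple as the sketch below. The thresholds and field names are illustrative assumptions; a real system would calibrate them against its own data:

```python
# Minimal sketch of the confidence-based routing pattern described above.
def route(task, ai_decision):
    """ai_decision: dict with 'action', 'confidence' and 'anomaly_score'."""
    confident = ai_decision["confidence"] >= 0.95
    in_character = ai_decision["anomaly_score"] <= 0.05  # semantic-norm check
    if confident and in_character:
        return execute(ai_decision["action"])        # Scenario A: auto-execute
    return escalate_to_human(task, ai_decision)      # Scenario B: human approval

def execute(action):
    ...  # hand the approved action to the workflow engine

def escalate_to_human(task, ai_decision):
    ...  # queue the case, with the AI's reasoning attached, for review
```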

Kill the zombie bots

If you want to prepare your organization for this shift, you don’t need to buy more software tomorrow. You need to start with an audit.

Look at your current automation portfolio. Identify the zombie bots, which are the scripts that are technically alive but require constant intervention to keep moving. These bots fail whenever vendors update their software. These are the bots that are costing you more in fragility tax than they save in labor.

Stop trying to patch them. These are the prime candidates for intelligent automation.

The future belongs to the probabilistic. It belongs to architectures that can reason through ambiguity, handle unstructured chaos and self-correct when the world changes. As leaders, we need to stop building trains and start building off-road vehicles.

The technology is ready. The question is, are you ready to let go of the steering wheel?

Disclaimer: This and any related publications are provided in the author’s personal capacity and do not represent the views, positions or opinions of the author’s employer or any affiliated organization.

This article is published as part of the Foundry Expert Contributor Network.


10 top priorities for CIOs in 2026

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how data is used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner in executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working closer with team members as well as managing anxiety through more effective and relevant training.

9. Strengthening capabilities that drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.

Corver moves toward an integrated, standardized digital model

Multi-brand distribution has spent years undergoing a profound transformation marked by digitalization, process automation and the need to respond to customers more efficiently. This is the context in which Corver operates: a Spanish company with more than 30 years of experience in the exclusive import and distribution of products, accessories and spare parts for motorcycles and riders from the market’s leading brands.

Like many companies in the sector, the group, which operates through several businesses in Spain and Portugal, faces a twofold challenge: improving internal efficiency and sustaining an increasingly dynamic business built on e-commerce, intelligent catalog management and data-driven decision-making. To that end, the organization has undertaken a deep, progressive digital transformation project that is redefining the way it works.

According to Marc Codina, IT project manager and CTO of the Corver group, the company “is in a phase of consolidation and scaling.” During 2025 it began unifying the ERP at group level, launched a corporate PIM as the single source of product data, and is working on converging its warehouse management system (WMS). “The focus now is on stabilizing, extracting efficiencies and extending standards to the rest of the group’s companies,” he notes.

The digital transformation process formally began in 2023, with a roadmap rolled out in phases between 2024 and 2025. Codina recalls that the program was built around four main objectives: “To have a single master record for customers, prices and products; to homogenize processes across the organization, from order-to-cash to procure-to-pay; to ensure the scalability of e-commerce; and to enable executive reporting in near real time.” This approach seeks to eliminate silos and create a more coherent, measurable operating model.

One of the flagship projects within this transformation is the standardization of the ERP at group level, an initiative that has unified procedures and improved operational control across different areas while taking full advantage of the capabilities of the SAGE X3 solution. For this rollout, Corver has worked with the integrator Aitana, whose involvement has been key in supporting the cultural change that comes with the project. “Their team has supported us with great professionalism and experience, helping us overcome the natural resistance to change that a technology transformation of this scope entails,” Codina explains.

More than 95% of orders are now integrated automatically into the ERP without manual intervention, which has optimized the purchasing process, reduced errors and duplication, cut incidents caused by incorrect goods receipts, and increased visibility into order traceability and information. Although the company does not give exact figures for the investment in this project, according to Marc Codina it “is in the six figures, with an estimated payback of 18 to 24 months thanks to efficiency gains and higher sales conversion.”

Marc Codina, IT project manager and CTO of the Corver group.

Corver has had to face challenges such as data governance, normalization of legacy catalogs, change management and user training

Solid IT foundations

All of that transformation means that, today, the company’s technology base rests on a standardized ERP that centralizes financial and operational processes, a corporate PIM that acts as the single repository of product information in several languages, and a B2C and B2B eCommerce platform integrated directly with both systems. This is complemented by a BI system that supports semantic models and corporate dashboards, and by ITSM solutions for support, traceability and change governance. In addition, according to the CTO, security has become a critical concern, and the organization has reinforced its capabilities with technologies such as SSO, MFA, hardening and active monitoring.

The road has not been easy, however. Corver has had to face a series of challenges alongside the technology rollout, among them data governance, normalization of legacy catalogs, change management and user training. Added to these are business-specific constraints, such as the need to schedule deployments around periods of peak commercial activity, especially during Black Month (the extension of Black Friday offers) and Christmas, and integration with legacy systems, where the WMS has been one of the most demanding areas. Codina also stresses the importance of strict discipline in continuous integration and quality assurance to avoid rework and maintain operational stability.

A horizon defined by transformation

Looking ahead, the Spanish group has set new goals that consolidate and extend the transformation already under way. These include unifying the WMS, developing a genuine omnichannel experience with unified stock and consistent pricing rules, automating the product lifecycle end to end, improving integrations with more advanced tools, and reinforcing cybersecurity and identity management. In parallel, Corver continues to adopt advanced technologies wherever they have a direct impact, from BI and predictive analytics to AI and machine learning for attribute enrichment and assisted translation, as well as RPA in back-office operations and machine translation to speed up the publication of multilingual content.

In short, Codina describes the group’s vision as “pragmatic and sustained,” based on continuing to measure the value of each release and extending tools and standards to every company in the group. All of it with one clear goal: to move toward a global standardization that combines operational efficiency with the flexibility the business demands.


IT portfolio management: Optimizing IT assets for business value

In finance, portfolio management involves the strategic selection of a collection of investments that align with an investor’s financial goals and risk tolerance. 

This approach can also apply to IT’s portfolio of systems, with one addition: IT must also assess each asset in that portfolio for operational performance.

Today’s IT is a mix of legacy, cloud-based, and emerging or leading-edge systems, such as AI. Each category contains mission-critical assets, but not every system performs equally well when it comes to delivering business, financial, and risk avoidance value to the enterprise. How can CIOs optimize their IT portfolio performance?

Here are five evaluative criteria for maximizing the value of your IT portfolio.

Mission-critical assets

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are.

For example, it might be that your ERP solution is a 24/7 “must have” system because it interfaces with a global supply chain that operates around the clock and drives most company business. On the other hand, an HR application or a marketing analytics system could probably be down for a day with work-arounds by staff.

More granularly, the same type of analysis needs to be performed on IT servers, networks and storage. Which resources do you absolutely have to have, and which can you do without, if only temporarily?

As IT identifies these mission-critical assets, it should also review the list with end-users and management to assure mutual agreement.

Asset utilization

Zylo, which manages SaaS inventory, licenses, and renewals, estimates that “53% of SaaS licenses go unused or underused on average, so finding dormant software should be a priority.” This “shelfware” problem isn’t only with SaaS; it can be found in underutilized legacy and modern systems, in obsolete servers and disk drives, and in network technologies that aren’t being used but are still being paid for.

Shelfware in all forms exists because IT is too busy with projects to stop for inventory and obsolescence checks. Consequently, old stuff gets set on the shelf and auto-renews.

The shelfware issue should be solved if IT portfolios are to be maximized for performance and profitability. If IT can’t spare the time for a shelfware evaluation, it can bring in a consultant to perform an assessment of asset use and to flag never-used or seldom-used assets for repurposing or elimination.

Asset risk

The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource.

Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future?

For IT assets that are found to be at risk, strategies should be enacted to either get them out of “risk” mode, or to replace them.

Asset IP value

There is a CIO I know in the hospitality industry who boasts that his hotel reservation program, and the mainframe it runs on, have not gone down in 30 years. He attributes much of this success to custom code and a specialized operating system that the company uses, and he and his management view it as a strategic advantage over the competition.

He is not the only CIO who feels this way. There are many companies that operate with their “own IT special sauce” that makes their businesses better. This special sauce could be a legacy system or an AI algorithm. Assets like these that become IT intellectual property (IP) present a case for preservation in the IT portfolio.

Asset TCO and ROI

Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI).

TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.

ROI is used when new technology is acquired. Metrics are set that define at what point the initial investment into the technology will be recouped. Once the breakeven point has been reached, ROI continues to be measured because the company wants to see new profitability and/or savings materialize from the investment. Unfortunately, not all technology investments go as planned. Sometimes the initial business case that called for the technology changes or unforeseen complications arise that turn the investment into a loss leader.
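As a simple illustration of the breakeven arithmetic, the sketch below uses invented figures; actual numbers will of course vary by investment:

```python
# Hypothetical ROI breakeven calculation for a new technology investment.
# All figures are invented for illustration only.
initial_investment = 500_000   # acquisition plus implementation
monthly_benefit = 35_000       # measured savings plus new revenue
monthly_run_cost = 10_000      # licenses, support, infrastructure

net_monthly_value = monthly_benefit - monthly_run_cost
breakeven_months = initial_investment / net_monthly_value
print(f"Breakeven after ~{breakeven_months:.0f} months")   # ~20 months

# ROI over a three-year horizon
months = 36
roi = (net_monthly_value * months - initial_investment) / initial_investment
print(f"3-year ROI: {roi:.0%}")                             # ~80%
```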

In both cases, whether the issue is TCO or ROI, the IT portfolio must be maintained so that loss-making or wasted assets are removed.

Summing it up

IT portfolio management is an important part of what CIOs should be doing on an ongoing basis, but all too often, it is approached in a reactionary mode — for example, with a system being replaced only when users ask for it to be replaced, or a server needing to be removed from the data center because it fails.

The CEO, the CFO, and other key stakeholders whom the CIO deals with during technology budgeting time don’t help, either. While they will be interested in how long it will take for a new technology acquisition to “pay for itself,” no one ever asks the CIO about the big picture of IT portfolio management: how the overall assets in the IT portfolio are performing, and which assets will require replacement for the portfolio to sustain or improve company value.

To improve their own IT management, CIOs should seize the portfolio management opportunity. They can do this by establishing a portfolio for their company’s IT assets and reviewing these assets periodically with those in the enterprise who have direct say over IT budgets.

IT portfolio management will resonate with the CFO and CEO because both continually work with financial and risk portfolios for the business. Broader visibility of the IT portfolio will also make it easier for CIOs to present new technology recommendations and to obtain approvals for replacing or upgrading existing assets when these actions are called for.


Iberostar bets on AI to improve employee recruitment and management in a high-turnover sector

The shortage of available workers, high turnover and the limited digitalization of jobs are, according to Luis Zamora, chief people officer (CHRO) of Grupo Iberostar, one of Spain’s big multinationals in the tourism sector and especially strong in the hotel business, the three major workforce challenges this vertical faces today. The executive said as much at a press event organized by Workday, the company with which the tourism group signed a contract in 2025 to implement its human resources management technology, with the goal of unifying its hiring, talent development, and payroll and benefits administration processes.

“Right now,” Zamora said, “we are in the middle of implementing Workday, which we expect to be fully in use next September. This will let us offer on-demand training to our employees, make it easier for them to swap shifts without impacting the business and in line with the occupancy of the hotel where they work, and thus improve their experience and that of their managers, make regulatory compliance easier, and so on.” The project, he added, involves a “major cultural change,” since “it is about the people function moving from administrative and payroll management to managing human capital.”

The company, he said, is counting on the capabilities of the US firm’s software, which leans on data analytics and the more recent use of artificial intelligence agents, a technology the Iberostar executive views favorably even for recruitment. “We have already run AI pilots that let us conduct screening interviews with people; this trend of talking to machines, which younger people have already normalized, will enter selection processes,” Zamora said, convinced that “AI will help take administrative tasks off managers’ plates so that, instead of spending so much time producing reports, they can focus on what matters: people and their teams.”

“AI will enter selection processes […] but the machine will never have the final say”

With AI, he stressed, “the improvement in the employee experience will be significant.” He added that the naturalness that voice-based generative AI will bring to conversations with machines will be a decisive factor in that; nevertheless, he emphasized, “we must act prudently and bear in mind that in a recruitment process the machine will never have the final say.” “In a sector where the customer experience depends directly on people, AI should not replace human judgment; it should enhance it,” he stressed.

The rollout of Workday’s workforce management technology is part of a larger technology strategy that also involves other management tools with which it will be integrated, such as a financial planning solution from SAP and Microsoft Fabric for advanced analysis of corporate data.

AI adoption in Iberostar’s operations

Zamora also ran through other group projects in which artificial intelligence plays a significant operational role. One is the Winnow system, “based on good old-fashioned machine learning,” which has already been deployed in dozens of its hotels and has helped those properties save millions of meals a year. “Food waste is a concern for us,” the executive noted. The project is part of the company’s Wave of Change sustainability movement, which was launched in 2022 with the goal of saving 1,600 tonnes of food waste a year, roughly 5.3 million meals.

The other project Zamora mentioned was BRAIAN, an artificial intelligence designed to optimize energy consumption in hotels without affecting guests. The solution, developed with the company Sener and also part of the Wave of Change movement, aims to cut energy consumption by 35% and Scope 1 and 2 emissions by 85% by 2030.

From a broader digital transformation perspective, Iberostar, under its Hotel Digital project, uses artificial intelligence to improve the customer experience.

The tech leadership realizing more than the sum of parts

Waiting on replacement parts can be more than just an inconvenience. It can be a matter of sharp loss of income and opportunity. This is especially true for those who depend on industrial tools and equipment for agriculture and construction. So to keep things running as efficiently as possible, Parts ASAP CIO John Fraser makes sure end customer satisfaction is the highest motivation to get the tech implementation and distribution right.

“What it comes down to, in order to achieve that, is the team,” he says. “I came into this organization because of the culture, and the listen first, act later mentality. It’s something I believe in and I’m going to continue that culture.”

Bringing in talent and new products has been instrumental in creating a stable e-commerce model, so Fraser and his team can help digitally advertise to customers, establish the right partnerships to drive traffic, and provide the right amount of data.

“Once you’re a customer of ours, we have to make sure we’re a needs-based business,” he says. “We have to be the first thing that sticks in their mind because it’s not about a track on a Bobcat that just broke. It’s $1,000 a day someone’s not going to make due to a piece of equipment that’s down.”

Ultimately, this strategy helps and supports customers with a collection of highly-integrated tools to create an immersive experience. But the biggest challenge, says Fraser, is the variety of marketplace channels customers are on.

“Some people prefer our website,” he says. “But some are on Walmart or about 20 other commercial channels we sell on. Each has unique requirements, ways to purchase, and product descriptions. On a single product, we might have 20 variations to meet the character limits of eBay, for instance, or the brand limitations of Amazon. So we’ve built out our own product information management platform. It takes the right talent to use that technology and a feedback loop to refine the process.”
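
To make the mechanics concrete, here is a minimal Python sketch of the kind of per-channel variant generation Fraser describes. It is an illustration only, under assumed channel names and character limits; it is not Parts ASAP's actual platform, and the real marketplace rules differ.

    # Hypothetical sketch of per-channel listing variants; limits are illustrative only.
    CHANNEL_LIMITS = {
        "own_site": 2000,  # assumed generous limit on the retailer's own storefront
        "ebay": 80,        # assumed title-length cap for a marketplace channel
        "amazon": 200,     # assumed cap; real per-category rules differ
    }

    def build_variants(title: str, description: str) -> dict:
        """Return one listing variant per channel, trimmed to that channel's limit."""
        variants = {}
        for channel, limit in CHANNEL_LIMITS.items():
            text = f"{title} - {description}"
            if len(text) > limit:
                # Truncate on a word boundary so listings don't end mid-word.
                text = text[:limit].rsplit(" ", 1)[0]
            variants[channel] = text
        return variants

    print(build_variants("Rubber track for compact loader",
                         "Heavy-duty replacement track for common 450 mm undercarriages"))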

Of course, AI is always in the conversation since people can’t write updated descriptions for 250,000 SKUs.

“AI will fundamentally change what everybody’s job is,” he says. “I know I have to prepare for it and be forward thinking. We have to embrace it. If you don’t, you’re going to get left behind.”

Fraser also details practical AI adoption in terms of pricing, product data enhancement, and customer experience, while stressing experimentation without over-dependence.

On consolidating disparate systems: You certainly run into challenges. People are on the same ERP system so they have some familiarity. But even within that, you have massive amounts of customization. Sometimes that’s very purpose-built for the type of process an organization is running, or that unique sales process, or whatever. But in other cases, it’s very hard. We’ve acquired companies with their own custom built ERP platform, where they spent 20 years curating it down to eliminate every button click. Those don’t go quite as well, but you start with a good culture, and being transparent with employees and customers about what’s happening, and you work through it together. The good news is it starts with putting the customer first and doing it in a consistent way. Tell people change is coming and build a rapport before you bring in massive changes. There are some quick wins and efficiencies, and so people begin to trust. Then, you’re not just dragging them along but bringing them along on the journey.

On AI: Everybody’s talking about it, but there’s a danger to that, just like there was a danger with blockchain and other kinds of immersive technologies. You have to make sure you know why you’re going after AI. You can’t just use it because it’s a buzzword. You have to bake it into your strategy and existing use cases, and then leverage it. We’re doing it in a way that allows us to augment our existing strategy rather than completely and fundamentally change it. So for example, we’re going to use AI to help influence what our product pricing should be. We have great competitive data, and a great idea of what our margins need to be and where the market is for pricing. Some companies are in the news because they’ve gone all in on AI, and AI is doing some things that are maybe not so appropriate in terms of automation. But if you can go in and have it be a contributing factor to a human still deciding on pricing, that’s where we are rather than completely handing everything over to AI.

On pooling data: We have a 360-degree view of all of our customers. We know when they’re buying online and in person. If they’re buying construction equipment and material handling equipment, we’ll see that. But when somebody’s buying a custom fork for a forklift, that’s very different than someone needing a new water pump for a John Deere tractor. And having a manufacturing platform that allows us to predict a two and a half day lead time on that custom fork is a different system to making sure that water pump is at your door the next day. Trying to do all that in one platform just hasn’t been successful in my experience in the past. So we’ve chosen to take a bit of a hybrid approach where you combine the data but still have best in breed operational platforms for different segments of the business.

On scaling IT systems: The key is we’re not afraid to have more than one operational platform. Today, in our ecosystem of 23 different companies, we’re manufacturing parts in our material handling business, and that’s a very different operational platform than, say, purchasing overseas parts, bringing them in, and finding a way to sell them to people in need, where you need to be able to distribute them fast. It’s an entirely different model. So we’re not establishing one core platform in that case, but the right amount of platforms. It’s not 23, but it’s also not one. So as we think about being able to scale, it’s also saying that if you try to be all things to all people, you’re going to be a jack of all trades and an expert in none. So we want to make sure when we have disparate segments that have some operational efficiency in the back end — same finance team, same IT teams — we’ll have more than one operational platform. Then through different technologies, including AI, ensure we have one view of the customer, even if they’re purchasing out of two or three different systems.
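
A rough illustration of that "one view of the customer" idea: the Python sketch below merges order records from two assumed source systems under a shared customer ID. The system names and fields are hypothetical, not Parts ASAP's actual data model.

    # Hypothetical sketch: unify order history from separate operational platforms.
    from collections import defaultdict

    manufacturing_orders = [   # assumed export from a make-to-order platform
        {"customer_id": "C100", "item": "custom forklift fork", "lead_days": 2.5},
    ]
    distribution_orders = [    # assumed export from a stock-and-ship platform
        {"customer_id": "C100", "item": "tractor water pump", "lead_days": 1.0},
    ]

    def customer_360(*sources):
        """Group orders from every source system under one customer record."""
        view = defaultdict(list)
        for source in sources:
            for order in source:
                view[order["customer_id"]].append(order)
        return dict(view)

    print(customer_360(manufacturing_orders, distribution_orders))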

On tech deployment: Experiment early and then make certain not to be too dependent on it immediately. We have 250,000 SKUs, and more than two million parts that we can special order for our customers, and you can’t possibly augment that data with a world-class description with humans. So we selectively choose how to make the best product listing for something on Amazon or eBay. But we’re using AI to build enhanced product descriptions for us, and instead of having, say, 10 people curating and creating custom descriptions for these products, we’re leveraging AI and using agents in a way that allows people to build the content. Now humans are simply approving, rejecting, or editing that content, so we’re leveraging them for the knowledge they need to have, and whether this is going to be a good product listing or not. We know there are thousands of AI companies, and for us to be able to pick a winner or loser is a gamble. Our approach is to make it a bit of a commoditized service. But we’re also pulling in that data and putting it back into our core operational platform, and there it rests. So if we’re with the wrong partner, or they get acquired, or go out of business, we can switch quickly without having to rewrite our entire set of systems because we take it in, use it a bit as a commoditized service, get the data, set it at rest, and then we can exchange that AI engine. We’ve already changed it five times and we’re okay to change it another five until we find the best possible partner so we can stay bleeding edge without having all the expense of building it too deeply into our core platforms.
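
The pattern Fraser outlines, generating content through whichever AI vendor is current, landing the result in the company's own platform, and keeping humans in the approval loop, might look roughly like the Python sketch below. The class and function names are assumptions for illustration, not Parts ASAP's actual stack.

    # Hypothetical sketch of a swappable, "commoditized" description-generation service.
    from abc import ABC, abstractmethod

    class DescriptionEngine(ABC):
        """Thin adapter so the AI vendor can be exchanged without touching core systems."""
        @abstractmethod
        def draft(self, sku: str, attributes: dict) -> str: ...

    class CurrentVendorEngine(DescriptionEngine):  # placeholder for whichever vendor is in use
        def draft(self, sku, attributes):
            return f"{attributes.get('name', sku)}: durable replacement part."  # stub output

    def enrich_catalog(skus, engine, store, review_queue):
        """Draft with AI, persist in the core platform, route to humans for approval."""
        for sku, attrs in skus.items():
            store[sku] = {"description": engine.draft(sku, attrs), "status": "pending"}
            review_queue.append(sku)  # humans approve, reject, or edit the draft

    store, queue = {}, []
    enrich_catalog({"SKU-1": {"name": "Bobcat rubber track"}}, CurrentVendorEngine(), store, queue)
    print(store, queue)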

Google’s Universal Commerce Protocol aims to simplify life for shopping bots… and CIOs

Google has published the first draft of Universal Commerce Protocol (UCP), an open standard to help AI agents order and pay for goods and services online.

It co-developed the new protocol with industry leaders including Shopify, Etsy, Wayfair, Target and Walmart. It also has support from payment system providers including Adyen, American Express, Mastercard, Stripe, and Visa, and online retailers including Best Buy, Flipkart, Macy’s, The Home Depot, and Zalando.

Google’s move has been eagerly awaited by retailers, according to retail technology consultant Miya Knights. “Retailers are keen to start experimenting with agentic commerce, selling directly through AI platforms like ChatGPT, Gemini, and Perplexity. They will embrace and experiment with it. They want to know how to show up and convert in consumer searches.”

Security shopping list

However, it will present challenges for CIOs, in particular in maintaining security, she said. UCP as implemented by Google means retailers will be exposing REST (Representational State Transfer) endpoints to create, update, or complete checkout sessions. “That’s an additional attack surface beyond your web/app checkout. API gateways, WAF/bot mitigation, and rate limits become part of checkout security, not just a ‘nice-to-have’. This means that CIOs will have to implement new reference architectures and runtime controls; new privacy, consent, and contracts protocols; and new fraud stack component integration.”

Info-Tech Research Group principal research director Julie Geller also sees new security challenges ahead. “This is a major shift in posture. It pushes retail IT teams toward deliberate agent gateways, controlled interfaces where agent identity, permissions, and transaction scope are clearly defined. The security challenge isn’t the volume of bot traffic, but non-human actors executing high-value actions like checkout and payments. That requires a different way of thinking about security, shifting the focus away from simple bot detection toward authorization, policy enforcement, and visibility,” she said.
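
Neither Knights nor Geller sketched an implementation, but the general shape of a checkout endpoint fronted by an agent gateway might look like the Python sketch below. It is purely conceptual: the scopes, spend caps, and key names are assumptions and are not taken from the UCP draft specification.

    # Conceptual sketch only; not the UCP specification. Scopes and caps are assumed.
    import uuid

    REGISTERED_AGENTS = {   # agent identity -> permitted scope and spend cap
        "agent-key-123": {"scopes": {"checkout:create"}, "max_order_value": 500.00},
    }

    def create_checkout_session(agent_key: str, cart_total: float) -> dict:
        """Authorize the non-human caller before any high-value action is allowed."""
        agent = REGISTERED_AGENTS.get(agent_key)
        if agent is None:
            return {"status": 401, "error": "unknown agent identity"}
        if "checkout:create" not in agent["scopes"]:
            return {"status": 403, "error": "scope not permitted"}
        if cart_total > agent["max_order_value"]:
            return {"status": 403, "error": "transaction exceeds this agent's allowed value"}
        return {"status": 201, "session_id": str(uuid.uuid4())}

    print(create_checkout_session("agent-key-123", 129.99))   # created
    print(create_checkout_session("agent-key-123", 9999.00))  # blocked by spend cap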

The introduction of UCP will undoubtedly mean smoother integration of AI into retail systems but, besides security challenges, there will be other issues for CIOs to grapple with.

Geller said that one of the issues she foresees with UCP is that “it works too well”. By this she means that the integration is so smooth that there are governance issues. “When agents can act quickly and upstream of traditional control points, small configuration issues can surface as revenue, pricing, or customer experience problems almost immediately. This creates a shift in responsibility for IT departments. The question stops being whether integration is possible and becomes how variance is contained and accountability is maintained when execution happens outside the retailer’s own digital properties. Most retail IT architectures were not designed for that level of delegated autonomy.”

Google’s AI rival OpenAI launched a feature last October that lets users discover and use third-party applications directly within the chat interface, at the same time publishing an early draft of the Agentic Commerce Protocol, a specification co-developed with Stripe to help AI agents make online transactions.

Knights expects the introduction of UCP to accelerate interest in and adoption of agentic commerce among retailers. “Google said that it had already worked with market leaders Etsy, Wayfair, Target, and Walmart to develop the UCP standard. This will force competitors to accelerate their agentic commerce strategies, and will help Google steal a march on competitors, given that it is the market leader,” she said.

For online retailers’ IT departments, it’s going to mean extra work, though, in implementing the new protocols and in ensuring their e-commerce sites are visible to consumers and bots alike.

SAP plays a ‘5-month grace period’ card ahead of the end of S/4HANA Compatibility Pack support: “only for companies that have already started migrating”

SAP has announced a temporary reprieve for companies that have not finished migrating away from S/4HANA Compatibility Packs in their data centers. Usage rights for the packs had been set to expire on December 31, 2025, but SAP says it will provide an additional five-month transition period so that companies still using them can move to native capabilities.

Compatibility Packs were introduced in 2015 so that companies moving to S/4HANA on premises could keep functionality from their legacy SAP ECC (ERP Central Component) that was either missing from the initial S/4HANA release or would take time to migrate. According to SAP, the missing functionality was all delivered in the 2023 release of S/4HANA. The company had said in a blog post that Compatibility Pack usage rights and support would end in late 2025 unless a company was making the move to the cloud under a Rise with SAP or SAP Cloud ERP contract.

However, a significant number of customers turned out to need more time to complete their migrations. SAP therefore said it would offer an additional extension until the end of May 2026. It also plans to provide “tailored programs” to accelerate the move to the cloud products that replace Compatibility Pack functions.

An SAP spokesperson said: “We offer transformation incentives and cloud extension policies, along with hands-on support from enterprise architects, AI-powered tools, and best-practice guidance for public cloud migrations. We also provide dedicated services to identify which Compatibility Pack functions a company is using and to replace or migrate them to the recommended cloud solutions, so business operations are not disrupted.”

Where the grace period does not apply

The five-month extension, however, is only available to companies that have already begun their migrations. “We recently discovered that there are customers who have not even started. Nothing has changed about their post-deadline options,” the SAP spokesperson noted.

Industry analysts agree the extension makes sense, though they read its significance differently.

Scott Bickley, advisory fellow at Info-Tech Research Group, said: “SAP has a multitude of customers in the middle of migrating from ECC/R3 to S/4HANA, and those customers are still relying on functionality provided by various Compatibility Packs. Unilaterally cutting off support and usage rights in that situation could cause catastrophic disruption to internal and customer-facing business processes alike.”

Gartner VP Analyst Mike Tucciarone called the move a pragmatic decision that reflects the real-world hurdles companies face. “Of the thousands of SAP client inquiries Gartner analysts handled over 2025, Compatibility Pack issues came up in less than 1%. But for that 1%, it is a very serious issue,” he said, adding that the extension “acknowledges that these migrations are a big lift at a time when many organizations are focused on driving efficiencies and cost optimization.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, countered that the move should not be read as a policy change. “It is a tightly framed grace period that applies only to customers who have already started their remediation work,” he said. “It looks flexible on the surface, but in practice it is highly selective. This is a transition period, not a reprieve.”

Temporary scaffolding

Gogia argued that SAP set a deliberately short grace period to manage customer pushback without undermining its long-term roadmap. “The short duration of the extension makes that clear. Five months is not about changing course; it gives customers already in motion room to land safely,” he said, adding that it “underlines, once again, that Compatibility Packs were only ever temporary scaffolding.”

Gogia also said many CIOs are belatedly realizing their organizations remain exposed even though they believed their S/4HANA migrations were successfully complete. “It is not unusual for companies that assumed they were in the clear after migration to discover, during internal readiness checks, SAP licensing audits, or upgrade planning, that Compatibility Pack elements are still active,” he explained.

“The risk here is not just technical; it can extend into financial, operational, and reputational risk,” he continued. “Once support ends, there is nowhere left to hide. SAP has also left the door open to technically disabling Compatibility Pack functionality in future S/4HANA releases, which would push this beyond compliance risk into outright business disruption,” he warned.

Bickley also pointed to compliance issues. “Under the existing 2025 expiration date, companies still using the Compatibility Packs would technically be out of compliance with their SAP software usage rights. This extension gives breathing room to the companies working with SAP to move off Compatibility Pack functionality,” he explained.

For Gogia, the theme running through the move is urgency. “The extension did not lower expectations; it clarified them,” he said. “Companies now have a very short window in which to show either genuine migration progress or continued exposure. The proactive ones are turning this into a structured program with clear timelines, owners, and budgets. Those that do not may face questions not only from SAP but from their own stakeholders asking why the risk was not addressed while there was still time.”

SAP tosses some Compatibility Pack users a (short) lifeline

SAP is throwing a lifeline to customers who are running late on their transition away from S/4HANA Compatibility Packs in their data centers. Although usage rights were set to expire on December 31, 2025 for most users, SAP has announced a “final” five-month transition period for customers still using them to move to native capabilities.

Compatibility Packs were introduced in 2015 to allow customers shifting to S/4HANA on premises to retain functionality from their legacy SAP ECC (ERP Central Component) that was either not in the initial release of S/4HANA or that would have taken time to migrate. According to SAP, the missing functionality was delivered in the 2023 release of S/4HANA, and it said in a 2022 blog post, updated in 2025, that given this, Compatibility Pack usage rights, and support for them, would expire at the end of 2025 unless the customer had signed a Rise with SAP or SAP Cloud ERP deal and was making the move to the cloud.


But a number of customers needed extra time to complete their migrations, prompting the company to offer an additional extension until the end of May 2026, an SAP spokesperson said in an email. The company will also offer what it calls “tailored programs” to these customers to expedite the move to the cloud products that replace Compatibility Pack functions.

“We’re offering programs like transformation incentives and cloud extension policies, along with hands-on support from enterprise architects, AI-powered tools, and best-practice guidance for public cloud migrations,” the spokesperson said. “We also provide dedicated services to identify which compatibility pack functions a customer is using and help replace or migrate them to the recommended cloud solutions, ensuring a smooth transition without disrupting business operations.”

No reprieve for laggards

However, the extra five months are only being offered to customers who have already begun their transition. “We recently discovered that there are customers who have not even started. … Nothing has changed about their post-deadline options,” the spokesperson noted.

Analysts agree that the extension makes sense, though they see the move in different ways.

“SAP has a multitude of customers in the process of migrating from ECC/R3 to S/4 HANA, and those customers are still leveraging the functionality provided by various Compatibility Packs,” said Scott Bickley, advisory fellow at Info-Tech Research Group. “It would be quite the rug-pull for SAP to arbitrarily cut off support and use rights for these customers, resulting in potentially catastrophic disruptions to their internal and customer-facing business processes.”

And Gartner VP Analyst Mike Tucciarone views it as a purely practical move by SAP that “reflects the real-world hurdles organizations are facing today.” Gartner data shows that these compatibility pack issues came up in less than 1% of the thousands of SAP client calls analysts had in 2025, he said, “but for that 1%, it’s a serious issue. This extension acknowledges that these migrations are a big lift, especially when organizations today are focused on driving efficiencies and cost optimization.”

However, it shouldn’t be viewed as a policy change, said Sanchit Vir Gogia, chief analyst at Greyhound Research. “It’s a tightly framed grace period for customers who have already started their remediation journeys. That’s it,” he said. “SAP has chosen a path that appears flexible but is actually highly selective. This is a transition period, not a reprieve.”

Temporary scaffolding

SAP is using a calculated delay to manage customer pushback without compromising its long-term roadmap, he said. “The short duration of the extension makes that crystal clear. Five months is not about changing course. It’s about giving those in motion a chance to land safely, while reinforcing the message that Compatibility Packs were always temporary scaffolding.”

Gogia is also seeing CIOs waking up to the fact their organizations are still at risk, despite believing they had successfully completed their transitions. “Many assumed they were in the clear post-migration, only to discover Compatibility Pack elements still quietly active, sometimes flagged during internal readiness checks, sometimes triggered by SAP licensing audits, and occasionally revealed only during upgrade planning,” he said. “The risk here isn’t just technical. It’s financial, operational, and reputational. Once support ends, there’s nowhere to hide. SAP has even left the door open to technically disabling CP functionality in future S/4 releases, which would push this from compliance risk into outright business disruption.”

Bickley, too, sees compliance issues. “Under the existing 2025 expiration date, companies still using the Compatibility Packs would technically be non-compliant with their software use rights with SAP,” he pointed out. “This extension provides relief for this subset of customers as they work with SAP to migrate away from these solutions.”

One theme is consistent, said Gogia: urgency. “The extension didn’t lower expectations, it clarified them. Enterprises now have a very short path to demonstrate either progress or exposure. The smart ones are turning this into a structured program with timelines, ownership, and budget. The ones that don’t risk being caught flat-footed, not just by SAP, but by their own stakeholders asking why the risk wasn’t addressed when they still had time.”


Epicor sets timeline to sunset on-prem ERP as cloud becomes the only path forward

Reflecting the continued push by ERP vendors to the cloud, Epicor has announced its schedule to sunset several of its legacy on-premises tools.

The company will roll out final releases for Epicor Kinetic, Epicor Prophet 21, and Epicor BisTrack, and will offer tiered support levels in a phased schedule beginning later this year.

Epicor says this will allow enterprises to take advantage of tools exclusive to Epicor Cloud without having to maintain infrastructure. But the move will present challenges for some organizations, particularly those in highly regulated and data-sensitive industries, analysts point out. 

“These organizations shouldn’t just see this change as a hosting decision shift; it signals a long-term operating model change,” noted Manish Jain, a principal research director at Info-Tech Research Group. It’s not about customers choosing the cloud, he said: “It’s about vendors taking alternatives off the table.”

Not an overnight shift, but a fundamental one

With this move, customers will have quicker access to new features and AI-powered capabilities, such as the first ERP AI agent with outcomes-based pricing, as well as “a modern, resilient platform” that reduces IT burden and operational risk, Epicor said.

Customers using on-premises versions of Kinetic, Prophet 21, and BisTrack will continue to receive support, the company noted, but final releases will roll out between 2026 and 2028, based on platform. Enterprises will then transition into ‘active support’ until 2029 at the latest, and ‘sustaining support’ will begin as early as 2027.

More than 20,000 businesses run on Epicor Cloud. Generally, Epicor Kinetic is used by mid-market and upper mid-market manufacturers, such as discrete manufacturers with complex production, supply chain, and shop-floor requirements, explained Robert Kramer, VP and principal analyst at Moor Insights & Strategy.

Wholesale and industrial distributors who require strong inventory management, pricing, and order fulfillment steer toward Prophet 21, while BisTrack is popular among building materials, lumber, and construction supply distribution sectors, he explained.

“Epicor is not turning off on-premises systems overnight,” Kramer emphasized, but all new capabilities, platform improvements, and long-term roadmap investments will be cloud-only.

The Epicor sunset timeline is as follows:

Kinetic

  • Final on-premises release tentatively scheduled for January 2028
  • Active support, which provides full access to Epicor phone support, security updates, new issue investigation, and more, will be offered through December 31, 2029
  • Sustaining support, which offers limited phone support, access to the latest release (but not to new modules), and an online knowledge base, begins January 1, 2030

Prophet 21

  • Final on-premises release tentatively scheduled for May 2028
  • Active support through June 30, 2029
  • Sustaining support beginning July 1, 2029

BisTrack

  • Final on-premises BisTrack Web Browser & API release tentatively scheduled for July 2028
  • Active support for on-premises BisTrack Web Browser & API through June 30, 2029
  • Sustaining support for on-premises BisTrack Web Browser & API release beginning July 1, 2029

BisTrack Desktop

  • Final on-premises release tentatively scheduled for December 2026
  • Active support through December 31, 2028
  • Sustaining support beginning January 1, 2029

BisTrack UK 3.9 (2017)

  • Active support through December 31, 2026
  • Sustaining support beginning January 1, 2027

New possibilities, different risks

This move will benefit customers looking to modernize and take advantage of the ERP systems of tomorrow, which will be agentic AI-enabled and event-driven, noted Moor’s Kramer. Benefits will include simpler infrastructure, more predictable upgrades, and access to new capabilities without the need to manage servers, databases, or patches, or to cycle through time- and resource-intensive upgrades.

“Staying on-prem becomes a supportable maintenance decision, not a growth one,” said Kramer.

Organizations will gain the freedom to innovate and “dynamically match costs” with revenue through unit economics, noted Info-Tech’s Jain. “This move will be projected as one that favors organizations prioritizing speed, scalability, and reduced infrastructure management, especially those with limited IT capacity to maintain ERP environments at production-grade reliability.”

For businesses with continuous operations, tight schedules, or requiring limited downtime, operational risk moves from internal IT to the vendor’s architecture, SLAs, and incident response, he explained.

Going forward, enterprises must plan for vendor-led upgrade cycles, tighter dependency on release roadmaps, and reduced control over infrastructure, he said. Cloud ERPs (whether Epicor, Microsoft, or SAP) don’t eliminate risk; they reshape it. Companies trade on-prem localized failures for platform-wide dependencies that can halt entire value chains if resilience and governance aren’t engineered deliberately.

“For organizations that rely on these solutions, the strategic shift is from deployment flexibility to dependency management, and many CIOs aren’t fully resourcing that transition,” said Jain.

Highly-regulated sectors won’t be excluded, he noted. Rather, they will be forced to adopt and combine cloud ERP cores with stricter data controls, residency requirements, and compensating governance mechanisms. Or, if they require strict data sovereignty, they may need to shift to sovereign clouds.

“Going forward, regulated industries aren’t cloud-blocked; they’re architecture blocked,” Jain emphasized. “As on-prem options disappear from the ERP space, compliance becomes an engineering challenge.”

The cloud is the future, and all enterprises must adapt

Across the board, ERP vendors, and most SaaS providers, have been converging on cloud-first models.

This helps them “accelerate innovation, standardize platforms, embed AI capabilities, and, most importantly, sustain recurring revenue,” Jain pointed out. ERP companies have considered on-premises architectures as roadblocks in achieving these objectives.

Concentrating development in the cloud has become the primary way vendors deliver continuous updates, embed AI integrated analytics, and provide security at scale without “forcing disruptive upgrade projects every few years,” said Kramer.

“Maintaining parallel on-prem and cloud platforms slows innovation and increases cost, which is why vendors are trying to draw a clearer line now,” he said.

The move will allow Epicor to focus engineering, security, and innovation on a single deployment model instead of “fragmenting effort across cloud and on-prem versions,” he said.

Customers do give up some control and accept dependencies on a centralized service. Cloud platforms are resilient, Kramer emphasized, but outages are no longer local events that customers can mitigate with internal failover or workarounds.

For regulated or sensitive industries that can’t fully pivot to public cloud, “this does not mean an immediate cliff, but it does narrow long-term options,” he pointed out. Hybrid, private cloud, and sovereign deployments will become the middle ground, but they come with their own challenges, requiring more deliberate planning, stronger governance, and clearer accountability.

“Over time, even highly regulated organizations will be pushed to modernize how they consume ERP,” Kramer noted. “Not because on-prem stops working, but because it gradually stops evolving in ways that support new business and regulatory demands.”

Salesforce’s Agentforce recalibration raises costs and complexity for CIOs

Salesforce is recalibrating its enterprise AI strategy — and CIOs could be footing the bill.

The company has added deterministic controls to Agentforce through a new scripting layer called Agent Script, shifting responsibility for AI behavior back onto customers. Analysts warn the move will force CIOs to absorb new costs, revisit delivery timelines, and defend AI decisions that were once marketed as autonomous.

“Agentforce was pitched as a self-directed agent that could resolve customer issues end-to-end without needing to be micromanaged. CIOs budgeted, planned, and communicated internally based on that vision. What Salesforce is now saying very clearly is that autonomy without guardrails is unscalable. You need deterministic controls not just to govern AI behavior, but to defend it. That’s a major shift, and it throws existing roadmaps into question,” said Sanchit Vir Gogia, CEO of Greyhound Research.

The need to recalibrate

Salesforce embedded Agent Script into Agentforce in October as part of an effort to make AI agents viable in production, not just pilots.


Phil Mui, SVP of Salesforce’s AI research division, wrote in October that its “most sophisticated customers are struggling” to keep autonomous agents on-topic in critical workflows, where their unpredictable behavior can drive up operational risk and downstream costs.

The problem, according to Gogia, was Agentforce’s heavy reliance on the Atlas Reasoning Engine that itself relied on multiple LLMs to plan, sequence, and select actions in real time based on user input.

While that approach showed promise in theory, it faltered in enterprise production as agent behavior varied from session to session, with identical customer scenarios triggering different execution paths based on how the model interpreted intent in the moment, Gogia said.

When agents drifted or stalled, developers had few options beyond continuously rewriting prompts — a pattern Salesforce engineers later described as a “doom-prompting” cycle that failed to address the underlying issue, Gogia added.

Animesh Banerjee, managing director of Salesforce workforce development partner Bong Bong Academy, said that his firm faced issues while implementing Agentforce in multiple Indian enterprises. “The biggest friction points were high operating costs and unpredictable responses even after structured prompting, especially in customer support use cases,” he said.

Jayanta Acharjee, senior Salesforce consultant at global software firm Sitetracker, said that enterprises faced challenges while implementing Agentforce: variance in LLM-led answers to user queries was hard to accept for use cases such as support in finance and healthcare.

“Even low-frequency inaccuracies are unacceptable when responses go directly to customers. The failure mode is ‘confidently wrong,’ which creates reputational and legal exposure,” said Acharjee, who previously worked at professional services firm Huron as a senior Salesforce developer.

According to Salesforce’s Mui, Agent Script paves the way for enterprises to impose deterministic (rule-based) structure on agent execution, breaking tasks into governed steps with defined logic and state, so enterprises can better control outcomes, limit unnecessary compute, and make agent behavior auditable.
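
Agent Script itself is not shown in the article, so the Python sketch below only illustrates the general idea Mui describes: breaking an agent task into governed, auditable steps with explicit state, instead of letting a model improvise the whole flow. The step names and rules are hypothetical.

    # Illustrative only; plain Python, not Agent Script. Steps and rules are assumed.
    def verify_identity(state):
        state["verified"] = state.get("customer_id") is not None
        return state

    def check_refund_policy(state):
        # A deterministic rule replaces an LLM judgment call for this governed step.
        state["eligible"] = state["verified"] and state.get("amount", 0) <= 100
        return state

    def resolve(state):
        state["outcome"] = "refund issued" if state["eligible"] else "escalate to human"
        return state

    PIPELINE = [verify_identity, check_refund_policy, resolve]  # fixed, auditable order

    def run_agent(state):
        audit_log = []
        for step in PIPELINE:
            state = step(state)
            audit_log.append((step.__name__, dict(state)))  # every step leaves a trace
        return state, audit_log

    final, log = run_agent({"customer_id": "C42", "amount": 80})
    print(final["outcome"], len(log))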

However, analysts see this tradeoff from being fully autonomous to what Salesforce describes as “hybrid reasoning” as placing greater operational and governance responsibility on CIOs, requiring them to engineer and maintain AI controls that were once expected to be handled by the platform.

Challenges for CIOs

Avasant research director Chandrika Dutt sees the recalibration as a burden on CIOs and their teams, both from a cost and skills perspective.

The change, according to Dutt, will force enterprises to now invest in workflow mapping, data modeling, and prompt efficiency management to control token and consumption costs — all of which are capabilities that are often outside the core skill sets of many enterprises, particularly those that adopted Agentforce expecting abstraction and simplicity.

“As a result, many enterprises may find themselves increasingly reliant on service providers to operationalize, govern, and optimize agent-driven workflows. What was initially perceived as a productivity shortcut can quickly evolve into a services-heavy engagement,” Dutt said.

Greyhound Research’s Gogia pointed out that the layering of deterministic controls also affects the expected return on investment timeline for CIOs by stretching it further.  

“If you planned to roll out a GenAI service desk in Q2 and let it learn on the job, you now need to add scripting, testing, versioning, and QA cycles. That slows things down,” Gogia said, adding that this creates a “political problem” for CIOs who evangelized AI internally and now must explain why the vendor changed direction and why the cost-benefit equation has shifted.

Broader industry reality

Analysts also say that Salesforce’s move underscores a broader industry reality: At scale, autonomy is proving harder to control, and more expensive to defend, than vendors initially suggested.

Gogia pointed out that his firm had tracked similar pivots across Microsoft, OpenAI, LinkedIn, ServiceNow, and even emerging AI toolchains: OpenAI, for example, released AgentKit specifically to give developers a way to choreograph agent behavior through explicit steps.

In fact, given the inherent weakness of LLMs as “terrible operators”, Dutt expects other software vendors to follow a similar path to Salesforce, especially in regulated industries such as healthcare, banking, financial services, and insurance, and in the public sector.

“In these environments, deterministic automation is not optional; it is essential to ensure predictable behavior, auditability, regulatory compliance, and tightly controlled execution,” Dutt said.

Don’t treat AI agents as digital employees

CIOs facing the shift can only respond by altering the way they view their AI agents, treating them as embedded capabilities within business workflows instead of digital employees, analysts said.

That will mean more upfront investment.

“You must staff appropriately. That includes technical skills for scripting and debugging, but also design thinking for conversation flows, domain experts for policy design, and compliance leads for governance. AI is not a black box you license. It is a system you operate. If you don’t have the people to operate it you don’t have a strategy, you have a pilot,” Gogia said.

CIOs will also have to align board expectations with the recalibration, he said, advising them to be clear about AI’s limits as well as its value, and to use Salesforce’s pivot to reset expectations, explain the investment required to reach ROI, and position AI as a long-term, scalable capability, not a quick win.

Other alternatives would be to view the recalibration as an incremental step instead of a migration: CIOs should apply deterministic controls selectively, starting with workflows where predictability is non-negotiable, rather than rebuilding agents from scratch, said Akshay Sonawane, a former software engineering manager at Salesforce and currently a machine learning engineer at Apple.

Sonawane said that CIOs should also view learning a new domain-specific language like Agent Script as a short-term issue because the alternative is prompt engineering, which is more difficult to debug, test, and maintain.

A Salesforce spokesperson echoed Sonawane’s views on migration and said that enterprises facing issues with Agentforce implementation can reach out to its Professional Services team and other certified partners.

The caveat, though, as analysts pointed out, is that it is not free.


Salesforce: Latest news and insights

Salesforce (NYSE:CRM) is a vendor of cloud-based software and applications for sales, customer service, marketing automation, ecommerce, analytics, and application development. Based in San Francisco, Calif., its services include Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Salesforce Platform. Its subsidiaries include Tableau Software, Slack Technologies, and MuleSoft, among others.

The company is undergoing a pivot to agentic AI, increasingly focused on blending generative AI with a range of other capabilities to offer customers the ability to develop autonomous decision-making agents for their service and sales workflows. Salesforce has a market cap of $293 billion, making it the world’s 36th most valuable company.

Salesforce news and analysis

Salesforce’s Agentforce recalibration raises costs and complexity for CIOs

January 7, 2026: Salesforce is recalibrating its enterprise AI strategy — and CIOs could be footing the bill. Analysts warn the move will force CIOs to absorb new costs, revisit delivery timelines, and defend AI decisions that were once marketed as autonomous.

Salesforce is tightening control of its data ecosystem and CIOs may have to pay the price

December 17, 2025: Software partners are now feeling the impact of changes Salesforce announced in February to how it charges for API access and are weighing whether to absorb the higher costs, pass them on to customers and risk backlash, or pursue alternative ways to access the data and risk straining their relationship with Salesforce.

Salesforce’s Agentforce 360 gets an enterprise data backbone with Informatica’s metadata and lineage engine

December 9, 2025: While studies suggest that a high number of AI projects fail, many experts argue that it’s not the model’s fault, it’s the data behind it. Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica.

Salesforce unveils observability tools to manage and optimize AI agents

November 20, 2025: Salesforce unveiled new Agentforce 360 observability tools to give teams visibility into why AI agents behave the way they do, and which reasoning paths they follow to reach decisions.

Salesforce unveils simulation environment for training AI agents

November 14, 2025: Salesforce AI Research today unveiled a new simulation environment for training voice and text agents for the enterprise. Dubbed eVerse, the environment leverages synthetic data generation, stress testing, and reinforcement learning to optimize agents.

Salesforce to acquire Doti to boost AI-based enterprise search via Slack

November 14, 2025: Salesforce will acquire Israeli startup Doti, aiming to enhance AI-based enterprise search capabilities offered via Slack. The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity.

Salesforce’s glaring Dreamforce omission: Vital security lessons from Salesloft Drift

October 22, 2025: Salesforce’s Dreamforce conference offered a range of sessions on best practices for securing their Salesforce environments and AI agents, but what it didn’t address were weaknesses exposed by the recent spate of Salesforce-related breaches.

Salesforce updates its agentic AI pitch with Agentforce 360

October 13, 2025: Salesforce has announced a new release of Agentforce that, it said, “gives teams the fastest path from AI prototypes to production-scale agents” — although with many of the new release’s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.

Lessons from the Salesforce breach

October 10, 2025: The chilling reality of a Salesforce.com data breach is a jarring wake-up call, not just for its customers, but for the entire cloud computing industry. 

Salesforce brings agentic AI to IT service management

October 9, 2025: Salesforce is bringing agentic AI to IT service management (ITSM). The CRM giant is taking aim at competitors like ServiceNow with Agentforce IT Service, a new IT support suite that leverages autonomous agents to resolve incidents and service requests.

Salesforce Trusted AI Foundation seeks to power the agentic enterprise

October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems. The CRM giant took a step toward that vision today, announcing innovations across the Salesforce Platform, Data Cloud, MuleSoft, and Tableau.

Salesforce AI Research unveils new tools for AI agents

August 27, 2025: Salesforce AI Research announced three advancements designed to help customers transition to agentic AI: a simulated enterprise environment framework for testing and training agents, a benchmarking tool to measure the effectiveness of agents, and a data cloud capability for autonomously consolidating and unifying duplicated data.

Attackers steal data from Salesforce instances via compromised AI live chat tool

August 26, 2025: A threat actor managed to obtain Salesforce OAuth tokens from a third-party integration called Salesloft Drift and used the tokens to download large volumes of data from impacted Salesforce instances. One of the attacker’s goals was to find and extract additional credentials stored in Salesforce records that could expand their access.

Salesforce acquires Regrello to boost automation in Agentforce

August 19, 2025: Salesforce is buying Regrello to enhance Agentforce, its suite of tools for building autonomous AI agents for sales, service, and marketing. San Francisco-based startup Regrello specializes in turning data into agentic workflows, primarily for automating supply-chain business processes.

Salesforce adds new billing options to Agentforce

August 19, 2025: In a move that aims to improve accessibility for agentic AI, Salesforce announced new payment options for Agentforce, its autonomous AI agent suite. The new options, built on the flexible pricing the company introduced in May, allow customers to use Flex Credits to pay for the actions agents take.

Salesforce to acquire Waii to enhance SQL analytics in Agentforce

August 11, 2025: Salesforce has signed a definitive agreement to acquire San Francisco-based startup Waii for an undisclosed sum to enhance SQL analytics within Agentforce, its suite of tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.

Could Agentforce 3’s MCP integration push Salesforce ahead in the CRM AI race?

June 25, 2025: “[Salesforce’s] implementation of MCP is one of the most ambitious interoperability moves we have seen from a CRM vendor or any vendor. It positions Agentforce as a central nervous system for multi-agent orchestration, not just within Salesforce but across the enterprise,” said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group. But it introduces new considerations around security.

Salesforce Agentforce 3 promises new ways to monitor and manage AI agents

June 24, 2025: This is the fourth version of Salesforce Agentforce since its debut in September last year, with the newest, Agentforce 3, succeeding the previous ‘2dx’ release. A new feature of the latest version is Agentforce Studio, which is also available as a separate application within Salesforce.

Salesforce supercharges Agentforce with embedded AI, multimodal support, and industry-specific agents

June 18, 2025: Salesforce is updating Agentforce with new AI features and expanding it across every facet of its ecosystem with the hope that enterprises will see the no-code platform as ready for tackling real-world digital execution, shaking its image of being a module for pilot projects.

CIOs brace for rising costs as Salesforce adds 6% to core clouds, bundles AI into premium plans

June 18, 2025: Salesforce is rolling out sweeping changes to its pricing and product packaging, including a 6% increase for Enterprise and Unlimited Editions of Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, effective August 1.

Salesforce study warns against rushing LLMs into CRM workflows without guardrails

June 17, 2025: A new benchmark study from Salesforce AI Research has revealed significant gaps in how large language models handle real-world customer relationship management tasks.

Salesforce Industry Cloud riddled with configuration risks

June 16, 2025: AppOmni researchers found 20 insecure configurations and behaviors in Salesforce Industry Cloud’s low-code app building components that could lead to data exposure.

Salesforce changes Slack API terms to block bulk data access for LLMs

June 11, 2025: Salesforce’s Slack platform has changed its API terms of service to stop organizations from using Large Language Models to ingest the platform’s data as part of its efforts to implement better enterprise data discovery and search.

Salesforce to buy Informatica in $8 billion deal

May 27, 2025: Salesforce has agreed to buy Informatica in an $8 billion deal as a way to quickly access far more data for its AI efforts. Analysts generally agreed that the deal was a win-win for both companies’ customers, but for very different reasons.

Salesforce wants your AI agents to achieve ‘enterprise general intelligence’

May 1, 2025: Salesforce AI Research unveiled a slate of new benchmarks, guardrails, and models to help customers develop agentic AI optimized for business applications.

Salesforce CEO Marc Benioff: AI agents will be like Iron Man’s Jarvis

April 17, 2025: AI agents are more than a productivity boost; they’re fundamentally reshaping customer interactions and business operations. And while there’s still work to do on trust and accuracy, the world is beginning a new tech era — one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.

Agentblazer: Salesforce announces agentic AI certification, learning path

March 6, 2025: Hot on the heels of the release of Agentforce 2dx for developing, testing, and deploying AI agents, Salesforce introduced Agentblazer Status to its Trailhead online learning platform.

Salesforce takes on hyperscalers with Agentforce 2dx updates

March 6, 2025: Salesforce’s updates to its agentic AI offering — Agentforce — could give the CRM software provider an edge over its enterprise application rivals and hyperscalers including AWS, Google, IBM, ServiceNow, and Microsoft.

Salesforce’s Agentforce 2dx update aims to simplify AI agent development, deployment

March 5, 2025: Salesforce released the third version of its agentic AI offering — Agentforce 2dx — to simplify the development, testing, and deployment of AI agents that can automate business processes across departments, such as sales, service, marketing, finance, HR, and operations.

Salesforce’s AgentExchange targets AI agent adoption, monetization

March 4, 2025: Salesforce is launching a new marketplace named AgentExchange for its agents and agent-related actions, topics, and templates to increase adoption of AI agents and allow its partners to monetize them.

Salesforce and Google expand partnership to bring Agentforce, Gemini together

February 25, 2025: The expansion of the strategic partnership will enable customers to build Agentforce AI agents using Google Gemini and to deploy Salesforce on Google Cloud.

AI to shake up Salesforce workforce with possible shift to sales over IT

February 5, 2025: With the help of AI, Salesforce can probably do without some staff. At the same time, the company needs salespeople trained in new AI products, CEO Marc Benioff has stated.

Salesforce’s Agentforce 2.0 update aims to make AI agents smarter

December 18, 2024: The second release of Salesforce’s agentic AI platform offers an updated reasoning engine, new agent skills, and the ability to build agents using natural language.

Meta creates ‘Business AI’ group led by ex-Salesforce AI CEO Clara Shih

November 20, 2024: The ex-CEO of Salesforce AI, Clara Shih, has turned up at Meta just a few days after quitting Salesforce. In her new role at Meta she will set up a new Business AI group to package Meta’s Llama AI models for enterprises.

CEO of Salesforce AI Clara Shih has left

November 15, 2024: The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Adam Evans, previously senior vice president of product for Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI.

Marc Benioff rails against Microsoft’s copilot

October 24, 2024: Salesforce’s boss doesn’t have a good word to say about Microsoft’s AI assistants, saying the technology is basically no better than Clippy 25 years ago.

Salesforce’s Financial Services Cloud targets ops automation for insurance brokerages

October 16, 2024: Financial Services Cloud for Insurance Brokerages will bring new features to help with commissions management and employee benefit servicing, among other things, when it is released in February 2025.

Explained: How Salesforce Agentforce’s Atlas reasoning engine works to power AI agents

September 30, 2024: AI agents created via Agentforce differ from previous Salesforce-based agents in their use of Atlas, a reasoning engine designed to help these bots think like human beings.

5 key takeaways from Dreamforce 2024

September 20, 2024: As Salesforce’s 2024 Dreamforce conference rolls up the carpet for another year, here’s a look at a few high points as Salesforce pitched a new era for its customers, centered around Agentforce, which brings agentic AI to enterprise sales and service operations.

Alation and Salesforce partner on data governance for Data Cloud

September 19, 2024: Data intelligence platform vendor Alation has partnered with Salesforce to deliver trusted, governed data across the enterprise. It will do this, it said, with bidirectional integration between its platform and Salesforce’s to seamlessly deliver data governance and end-to-end lineage within Salesforce Data Cloud. This enables companies to directly access key metadata (tags, governance policies, and data quality indicators) from over 100 data sources in Data Cloud, it said.

New Data Cloud features to boost Salesforce’s AI agents

September 17, 2024: Salesforce added new features to its Data Cloud to help enterprises analyze data from across their divisions and also boost the company’s new autonomous AI agents released under the name Agentforce, the company announced at the ongoing annual Dreamforce conference.

Dreamforce 2024: Latest news and insights

September 17, 2024: Dreamforce 2024 boasts more than 1,200 keynotes, sessions and workshops. While this year’s Dreamforce will encompass a wide spectrum of topics, expect Salesforce to showcase Agentforce next week at Dreamforce.

Salesforce unveils Agentforce to help create autonomous AI bots

September 12, 2024: The CRM giant’s new low-code suite enables enterprises to build AI agents that can reason for themselves when completing sales, service, marketing, and commerce tasks.

Salesforce to acquire data protection specialist Own Company for $1.9 billion

September 6, 2024: The CRM company said Own’s data protection and data management solutions will help it enhance availability, security, and compliance of customer data across its platform.

Salesforce previews new XGen-Sales model, releases xLAM family of LLMs

September 6, 2024: The XGen-Sales model, which is based on the company’s open source APIGen and its family of large action models (LAM), will aid developers and enterprises in automating actions taken by AI agents, analysts say.

Salesforce mulls consumption pricing for AI agents

August 30, 2024: Investors expect AI agent productivity gains to reduce demand for Salesforce license seats. CEO Marc Benioff says a per-conversation pricing model is a likely solution.

Coforge and Salesforce launch new offering to accelerate net zero goals

August 27, 2024: Coforge ENZO is designed to streamline emissions data management by identifying, consolidating, and transforming raw data from various emission sources across business operations.

Salesforce unveils autonomous agents for sales teams

August 22, 2024: Salesforce today announced two autonomous agents geared to help sales teams scale their operations and hone their negotiation skills. Slated for general availability in October, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent will be available through Sales Cloud, with pricing yet to be announced.

Salesforce to acquire PoS startup PredictSpring to augment Commerce Cloud

August 2, 2024: Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring. The acquisition will augment Salesforce’s existing Customer 360 capabilities.

Einstein Studio 1: What it is and what to expect

July 31, 2024: Salesforce has released a set of low-code tools for creating, customizing, and embedding AI models in your company’s Salesforce workflows. Here’s a first look at what can be achieved using it.

Why are Salesforce and Workday building an AI employee service agent together?

July 26, 2024: Salesforce and Workday are partnering to build a new AI-based employee service agent based on a common data foundation. The agent will be accessible via their respective software interfaces.

Salesforce debuts gen AI benchmark for CRM

June 18, 2024: The software company’s new gen AI benchmark for CRM aims to help businesses make more informed decisions when choosing large language models (LLMs) for use with business applications.

Salesforce updates Sales and Service Cloud with new capabilities

June 6, 2024: The CRM software vendor has added new capabilities to its Sales Cloud and Service Cloud with updates to its Einstein AI and Data Cloud offerings, including additional generative AI support.

IDC Research: Salesforce 1QFY25: Building a Data Foundation to Connect with Customers

June 5, 2024: Salesforce reported solid growth including $9.13 billion in revenue or 11% year-over-year growth. The company has a good start to its 2025 fiscal year, but the market continues to shift in significant ways, and Salesforce is not immune to those changes.

IDC Research: Salesforce Connections 2024: Making Every Customer Journey More Personalized and Profitable Through the Einstein 1 Platform

June 5, 2024: The Salesforce Connections 2024 event showcased the company’s efforts to revolutionize customer journeys through its innovative artificial intelligence (AI)-driven platform, Einstein 1. Salesforce’s strategic evolution at Connections 2024 marks a significant step forward in charting the future of personalized and efficient AI-driven customer journeys.

Salesforce launches Einstein Copilot for general availability

April 25, 2024: Salesforce has announced the general availability of its conversational AI assistant along with a library of pre-programmed ‘Actions’ to help sellers benefit from conversational AI in Sales Cloud.

Salesforce debuts Zero Copy Partner Network to streamline data integration

April 25, 2024: Salesforce has unveiled a new global ecosystem of technology and solution providers geared to helping its customers leverage third-party data via secure, bidirectional zero-copy integrations with Salesforce Data Cloud.

Salesforce-Informatica acquisition talks fall through: Report

April 22, 2024: Salesforce’s negotiations to acquire enterprise data management software provider Informatica have fallen through as both couldn’t agree on the terms of the deal. The disagreement about the terms of the deal is more likely to be around the price of each share of Informatica.

Decoding Salesforce’s plausible $11 billion bid to acquire Informatica

April 17, 2024: Salesforce is seeking to acquire enterprise data management vendor Informatica, in a move that could mean consolidation for the integration platform-as-a-service (iPaaS) market and a new revenue stream for Salesforce.

Salesforce adds Contact Center updates to Service Cloud

March 26, 2024: Salesforce has announced new Contact Center updates to its Service Cloud, including features such as conversation mining and generative AI-driven survey summarization.

Salesforce bids to become AI’s copilot building platform of choice

March 7, 2024: Salesforce has entered the race to offer the preeminent platform for building generative AI copilots with Einstein 1 Studio, a new set of low-code/no-code AI tools for accelerating the development of gen AI applications. Analysts say the platform has all the tools to become the platform for building out and deploying gen AI assistants.

Salesforce rebrands its low-code platform to Einstein 1 Studio

March 6, 2024: Salesforce has rebranded its low-code platform to Einstein 1 Studio and bundled it with the company’s Data Cloud offering. The platform has added a new feature, Prompt Builder, which allows developers to create reusable LLM prompts without the need for writing code.

Salesforce’s Einstein 1 platform to get new prompt-engineering features

February 9, 2024: Salesforce is working on adding two new prompt engineering features to its Einstein 1 platform to speed up the development of generative AI applications in the enterprise. The features include a testing center and the provision of prompt engineering suggestions.

