Microsoft shareholders invoke Orwell and Copilot as Nadella cites ‘generational moment’

5 December 2025 at 13:52
From left: Microsoft CFO Amy Hood, CEO Satya Nadella, Vice Chair Brad Smith, and Investor Relations head Jonathan Nielsen at Friday’s virtual shareholder meeting. (Screenshot via webcast)

Microsoft’s annual shareholder meeting Friday played out as if on a split screen: executives describing a future where AI cures diseases and secures networks, and shareholder proposals warning of algorithmic bias, political censorship, and complicity in geopolitical conflict.

One shareholder, William Flaig, founder and CEO of Ridgeline Research, quoted two authorities on the topic — George Orwell’s 1984 and Microsoft’s Copilot AI chatbot — in requesting a report on the risks of AI censorship of religious and political speech.

Flaig invoked Orwell’s dystopian vision of surveillance and thought control, citing the Ministry of Truth that “rewrites history and floods society with propaganda.” He then turned to Copilot, which responded to his query about an AI-driven future by noting that “the risk lies not in AI itself, but in how it’s deployed.”

In a Q&A session during the virtual meeting, Microsoft CEO Satya Nadella said the company is “putting the person and the human at the center” of its AI development, with technology that users “can delegate to, they can steer, they can control.”

Nadella said Microsoft has moved beyond abstract principles to “everyday engineering practice,” with safeguards for fairness, transparency, security, and privacy.

Brad Smith, Microsoft’s vice chair and president, said broader societal decisions, like the age at which kids should use AI in schools, won’t be made by tech companies. He cited ongoing debates about smartphones in schools nearly 20 years after the iPhone.

“I think quite rightly, people have learned from that experience,” Smith said, drawing a parallel to the rise of AI. “Let’s have these conversations now.”

Microsoft’s board recommended that shareholders vote against all six outside proposals, which covered issues including AI censorship, data privacy, human rights, and climate. Final vote tallies had not been released as of publication time, but Microsoft said that, based on early voting, shareholders turned down all six.

While the shareholder proposals focused on AI risks, much of the executive commentary focused on the long-term business opportunity. 

Nadella described building a “planet-scale cloud and AI factory” and said Microsoft is taking a “full stack approach,” from infrastructure to AI agents to applications, to capitalize on what he called “a generational moment in technology.”

Microsoft CFO Amy Hood highlighted record results for fiscal year 2025 — more than $281 billion in revenue and $128 billion in operating income — and pointed to roughly $400 billion in committed contracts as validation of the company’s AI investments.

Hood also addressed pre-submitted shareholder questions about the company’s AI spending, pushing back on concerns about a potential bubble. 

“This is demand-driven spending,” she said, noting that margins are stronger at this stage of the AI transition than at a comparable point in Microsoft’s cloud buildout. “Every time we think we’re getting close to meeting demand, demand increases again.”

Samsung’s massive Odyssey Neo G9 just got a huge price cut

5 December 2025 at 10:10

If a normal ultrawide feels cramped, this is the upgrade you’ve been waiting for. The Samsung 57-inch Odyssey Neo G9 (G95NC) is on sale for $1,449.99, down from $2,299.99, which means you are saving about $850 on one of the most over-the-top gaming displays you can buy. You get a dual-4K canvas, a 240 Hz […]

The post Samsung’s massive Odyssey Neo G9 just got a huge price cut appeared first on Digital Trends.

Warnings About Retrobright Damaging Plastics After 10-Year Test

5 December 2025 at 07:00

Within the retro computing community there exists a lot of controversy about so-called ‘retrobrighting’, which involves methods that seek to reverse the yellowing that many plastics suffer over time. While some are all in on this practice that restores yellowed plastics to their previous white luster, others actively warn against it after bad experiences, as [Tech Tangents] does in a recent video.

Uneven yellowing on North American SNES console. (Credit: Vintage Computing)

After a decade of trying out various retrobrighting methods, he found, for example, that a Sega Dreamcast shell he treated with hydrogen peroxide ten years ago actually yellowed faster than the untreated plastic right beside it. He also tried ozone, another route to oxidizing the brominated flame retardants that are said to underlie the yellowing, with highly dubious results.

While streaking after retrobrighting with hydrogen peroxide can be attributed to uneven application of the compound, there are many reports of the treatment damaging plastics and making them brittle. And given the uneven yellowing of, for example, Super Nintendo consoles, the yellowing is not just photo-oxidation from UV exposure; it also seems related to heat exposure and the exact amount of flame retardants mixed into the plastic, as well as potentially general degradation of the plastic’s polymers.

Pending more research on the topic, retrobrighting should perhaps not be ruled out entirely. But considering the damage we may be doing to potentially historical artifacts, it would behoove us to take a step or two back and ask whether a given piece really needs to be brightened today, rather than later, once the implications are better understood.

Agents-as-a-service are poised to rewire the software industry and corporate structures

5 December 2025 at 05:00

This was the year of AI agents. Chatbots that simply answered questions are evolving into autonomous agents that can carry out tasks on a user’s behalf, and enterprises continue to invest in agentic platforms as the transformation unfolds. Software vendors are investing just as fast.

According to a National Research Group survey of more than 3,000 senior leaders, more than half of executives say their organization is already using AI agents. Of the companies that spend at least half their AI budget on AI agents, 88% say they’re already seeing ROI on at least one use case, with top areas being customer service and experience, marketing, cybersecurity, and software development.

On the software provider side, Gartner predicts 40% of enterprise software applications in 2026 will include agentic AI, up from less than 5% today. And agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion, up from 2% in 2025. In fact, business users might not have to interact directly with the business applications at all since AI agent ecosystems will carry out user instructions across multiple applications and business functions. At that point, a third of user experiences will shift from native applications to agentic front ends, Gartner predicts.

It’s already starting. Most enterprise applications will have embedded assistants, a precursor to agentic AI, by the end of this year, adds Gartner.

IDC has similar predictions. By 2028, 45% of IT product and service interactions will use agents as the primary interface, the firm says. That’ll change not just how companies work, but how CIOs work as well.

Agents as employees

At financial services provider OneDigital, chief product officer Vinay Gidwaney is already working with AI agents, almost as if they were people.

“We decided to call them AI coworkers, and we set up an AI staffing team co-owned between my technology team and our chief people officer and her HR team,” he says. “That team is responsible for hiring AI coworkers and bringing them into the organization.” You heard that right: “hiring.”

The first step is to sit down with the business leader and write a job description, which is fed to the AI agent; the agent then starts out as an intern.

“We have a lot of interns we’re testing at the company,” says Gidwaney. “If they pass, they get promoted to apprentices and we give them our best practices, guardrails, a personality, and human supervisors responsible for training them, auditing what they do, and writing improvement plans.”

The next promotion is to a full-time coworker, and it becomes available to be used by anyone at the company.
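
As a rough illustration of that pipeline, here is a toy sketch of the intern-to-apprentice-to-coworker progression. Everything in it (names, fields, promotion rules) is invented for illustration; it is not OneDigital’s actual system.

```python
# Toy model of the AI-coworker promotion pipeline described above.
# All names and rules are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    INTERN = "intern"          # evaluated against a job description
    APPRENTICE = "apprentice"  # gets guardrails, a personality, supervisors
    COWORKER = "coworker"      # published for anyone in the company to use

@dataclass
class AICoworker:
    name: str
    job_description: str
    stage: Stage = Stage.INTERN
    guardrails: list[str] = field(default_factory=list)
    supervisors: list[str] = field(default_factory=list)

    def promote(self, passed_review: bool) -> None:
        """Advance one stage; a failed review means an improvement plan instead."""
        if not passed_review:
            return
        if self.stage is Stage.INTERN:
            self.stage = Stage.APPRENTICE
        elif self.stage is Stage.APPRENTICE and self.guardrails and self.supervisors:
            self.stage = Stage.COWORKER

ben = AICoworker("Ben", "Employee benefits expert")
ben.promote(passed_review=True)                      # intern -> apprentice
ben.guardrails.append("cite plan documents for every recommendation")
ben.supervisors.append("benefits-practice-lead")
ben.promote(passed_review=True)                      # apprentice -> coworker
print(ben.stage)                                     # Stage.COWORKER
```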

“Anyone at our company can go on the corporate intranet, read the skill sets, and get ice breakers if they don’t know how to start,” he says. “You can pick a coworker off the shelf and start chatting with them.”

For example, there’s Ben, a benefits expert who’s trained on everything having to do with employee benefits.

“We have our employee benefits consultants sitting with clients every day,” Gidwaney says. “Ben will take all the information and help the consultants strategize how to lower costs, and how to negotiate with carriers. He’s the consultants’ thought partner.”

There are similar AI coworkers working on retirement planning, and on property and casualty as well. These were built in-house because they’re core to the company’s business. But there are also external AI agents who can provide additional functionality in specialized yet less core areas, like legal or marketing content creation. In software development, OneDigital uses third-party AI agents as coding assistants.

When choosing whether to sign up for these agents, Gidwaney says he doesn’t think of it the way he thinks about licensing software, but more like hiring a human consultant or contractor. For example, will the agent be a good cultural fit?

But in some cases the stakes are higher than with human hires: a bad human hire who turns out to be toxic will only interact with a small number of other employees, while an AI agent might interact with thousands of them.

“You have to apply the same level of scrutiny as how you hire real humans,” he says.

A vendor who looks like a technology company might also, in effect, be a staffing firm. “They look and feel like humans, and you have to treat them like that,” he adds.

Another way AI agents are similar to human consultants is that when they leave the company, they take their expertise with them, including what they gained along the way. Data can be downloaded, Gidwaney says, but not necessarily the fine-tuning or other improvements the agent received. Realistically, there might not be any practical way to extract that from a third-party agent, and that could lead to AI vendor lock-in.

Edward Tull, VP of technology and operations at JBGoodwin Realtors, says he, too, sees AI agents as something akin to people. “I see it more as a teammate,” he says. “As we implement more across departments, I can see these teammates talking to each other. It becomes almost like a person.”

Today, JBGoodwin uses two main platforms for its AI agents: Zapier lets the company build its own, while HubSpot offers its own pre-built agents as a service. “There are lead enrichment agents and workflow agents,” says Tull.

And the company is open to using more. “In accounting, if someone builds an agent to work with this particular type of accounting software, we might hire that agent,” he says. “Or a marketing coordinator that we could hire that’s built and ready to go and connected to systems we already use.”

With agents, his job is becoming less about technology and more about management, he adds. “It’s less day-to-day building and more governance, and trying to position the company to be competitive in the world of AI,” he says.

He’s not the only one thinking of AI agents as more akin to human workers than to software.

“With agents, because the technology is evolving so fast, it’s almost like you’re hiring employees,” says Sheldon Monteiro, chief product officer at Publicis Sapient. “You have to determine whom to hire, how to train them, make sure all the business units are getting value out of them, and figure out when to fire them. It’s a continuous process, and this is very different from the past, where I make a commitment to a platform and stick with it because the solution works for the business.”

This changes how the technology solutions are managed, he adds. What companies will need now is a CHRO, but for agentic employees.

Managing outcomes, not persons

Vituity is one of the largest national, privately-held medical groups, with 600 hospitals, 13,800 employees, and nearly 14 million patients. The company is building its own AI agents, but is also using off-the-shelf ones, as AaaS. And AI agents aren’t people, says CIO Amith Nair. “The agent has no feelings,” he says. “AGI isn’t here yet.”

Instead, it all comes down to outcomes, he says. “If you define an outcome for a task, that’s the outcome you’re holding that agent to.” And that part isn’t different from holding employees accountable to an outcome. “But you don’t need to manage the agent,” he adds. “They’re not people.”

Instead, the agent is orchestrated, and agents can be plugged and played. “It needs to understand our business model and our business context, so you ground the agent to get the job done,” he says.

For mission-critical functions, especially ones related to sensitive healthcare data, Vituity is building its own agents inside a HIPAA-certified LLM environment using the Workato agent development platform and the Microsoft agentic platform.

For other functions, especially ones having to do with public data, Vituity uses off-the-shelf agents, such as ones from Salesforce and Snowflake. The company is also using Claude with GitHub Copilot for coding. Nair can already see that agentic systems will change the way enterprise software works.

“Most of the enterprise applications should get up to speed with MCP, the integration layer for standardization,” he says. “If they don’t get to it, it’s going to become a challenge for them to keep selling their product.”

A company needs to be able to access its own data via an MCP connector, he says. “AI needs data, and if they don’t give you an MCP, you just start moving it all to a data warehouse,” he adds.
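
To make the idea concrete, here is a minimal sketch of an MCP server built with the open-source MCP Python SDK (pip install mcp). The server name, sample data, and tool are invented for illustration; the point is simply that data exposed this way becomes callable by any MCP-capable agent.

```python
# Hypothetical MCP server exposing CRM-style data to agents.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-data")  # arbitrary server name

# Stand-in rows; a real connector would query the live system of record.
OPPORTUNITIES = [
    {"account": "Acme", "stage": "negotiation", "value": 120_000},
    {"account": "Globex", "stage": "prospecting", "value": 45_000},
]

@mcp.tool()
def open_opportunities(min_value: int = 0) -> list[dict]:
    """Return open opportunities worth at least min_value dollars."""
    return [o for o in OPPORTUNITIES if o["value"] >= min_value]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an agent can attach directly
```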

Sharp learning curve

In addition to providing a way to store and organize your data, enterprise software vendors also offer logic and functionality, and AI will soon be able to handle that as well.

“All you need is a good workflow engine where you can develop new business processes on the fly, so it can orchestrate with other agents,” Nair says. “I don’t think we’re too far away, but we’re not there yet. Until then, SaaS vendors are still relevant. The question is, can they charge that much money anymore?”

The costs of SaaS will eventually have to come down to the cost of inference, storage, and other infrastructure; vendors can’t survive the way they’re charging now, he says. So SaaS vendors are building agents to augment or replace their current interfaces. But that approach has its limits: instead of using Salesforce’s agent, for example, a company can use its own agents to interact with the Salesforce environment.

“It’s already happening,” Nair adds. “My SOC agent is pulling in all the log files from Salesforce. They’re not providing me anything other than the security layer they need to protect the data that exists there.”

AI agents are set to change the dynamic between enterprises and software vendors in other ways, too. One major difference between software and agents is software is well-defined, operates in a particular way, and changes slowly, says Jinsook Han, chief of strategy, corporate development, and global agentic AI at Genpact.

“But we expect when the agent comes in, it’s going to get smarter every day,” she says. “The world will change dramatically because agents are continuously changing. And the expectations from the enterprises are also being reshaped.”

Another difference is agents can more easily work with data and systems where they are. Take for example a sales agent meeting with customers, says Anand Rao, AI professor at Carnegie Mellon University. Each salesperson has a calendar where all their meetings are scheduled, and they have emails, messages, and meeting recordings. An agent can simply access those emails when needed.

“Why put them all into Salesforce?” Rao asks. “If the idea is to do and monitor the sale, it doesn’t have to go into Salesforce, and the agents can go grab it.”

When Rao was a consultant having a conversation with a client, he’d log it into Salesforce with a note, for instance, saying the client needs a white paper from the partner in charge of quantum.

With an agent taking notes during the meeting, it can immediately identify the action items and follow up to get the white paper.

“Right now we’re blindly automating the existing workflow,” Rao says. “But why do we need to do that? There’ll be a fundamental shift of how we see value chains and systems. We’ll get rid of all the intermediate steps. That’s the biggest worry for the SAPs, Salesforces, and Workdays of the world.”

Another aspect of the agentic economy is instead of a human employee talking to a vendor’s AI agent, a company agent can handle the conversation on the employee’s behalf. And if a company wants to switch vendors, the experience will be seamless for employees, since they never had to deal directly with the vendor anyway.

“I think that’s something that’ll happen,” says Ricardo Baeza-Yates, co-chair of the US technology policy committee at the Association for Computing Machinery. “And it makes the market more competitive, and makes integrating things much easier.”

In the short term, however, it might make more sense for companies to use the vendors’ agents instead of creating their own.

“I recommend people don’t overbuild because everything is moving,” says Bret Greenstein, CAIO at West Monroe Partners, a management consulting firm. “If you build a highly complicated system, you’re going to be building yourself some tech debt. If an agent exists in your application and it’s localized to the data in that application, use it.”

But over time, an agent that’s independent of the application can be more effective, he says, and there’s a lot of lock-in that goes into applications. “It’s going to be easier every day to build the agent you want without having to buy a giant license,” he says. “The effort to get effective agents is dropping rapidly, and the justification for getting expensive agents from your enterprise software vendors is getting less.”

The future of software

According to IDC, pure seat-based pricing will be obsolete by 2028, forcing 70% of vendors to figure out new business models.

With technology evolving as quickly as it is, JBGoodwin Realtors has already started to change its approach to buying tech, says Tull. It used to prefer long-term contracts, for example, but that’s no longer the case. “You save more if you go longer, but I’ll ask for an option to re-sign with a cap,” he says.

That doesn’t mean SaaS will die overnight. Companies have made significant investments in their current technology infrastructure, says Patrycja Sobera, SVP of digital workplace solutions at Unisys.

“They’re not scrapping their strategies around cloud and SaaS,” she says. “They’re not saying, ‘Let’s abandon this and go straight to agentic.’ I’m not seeing that at all.”

Ultimately, people are slow to change, and institutions are even slower. Many organizations are still running legacy systems. For example, the FAA has just come out with a bold plan to update its systems by getting rid of floppy disks and upgrading from Windows 95. It expects this to take four years.

But the center of gravity will move toward agents and, as it does, so will funding, innovation, green-field deployments, and the economics of the software industry.

“There are so many organizations and leaders who need to cross the chasm,” says Sobera. “You’re going to have organizations at different levels of maturity, and some will be stuck in SaaS mentality, but feeling more in control while some of our progressive clients will embrace the move. We’re also seeing those clients outperform their peers in revenue, innovation, and satisfaction.”

The best laptops for gaming and schoolwork in 2025

5 December 2025 at 05:01

Balancing schoolwork with gaming usually means finding a laptop that can do a little bit of everything. The best gaming laptops aren’t just built for high frame rates. They also need to handle long days of writing papers, running productivity apps and streaming lectures without slowing down. A good machine should feel reliable during class and powerful enough to jump into your favorite games once homework is out of the way.

There’s a wide range of options depending on how much performance you need. Some students prefer a slim, lightweight model that’s easy to carry to school, while others want a new gaming laptop with enough GPU power to handle AAA titles. If you’re watching your budget, there are plenty of solid choices that qualify as a budget gaming laptop without cutting too many corners.

It’s also worth looking at features that help with everyday use. A bright display makes long study sessions easier on the eyes, and a comfortable keyboard is essential if you type a lot. USB-C ports, decent battery life and a responsive trackpad can make a big difference during the school day. We’ve rounded up the best laptops that strike the right mix of performance, portability and value for both gaming and schoolwork.

Best laptop for gaming and schoolwork FAQs

Are gaming laptops good for school?

As we’ve mentioned, gaming laptops are especially helpful if you're doing any demanding work. Their big promise is powerful graphics performance, which isn't just limited to PC gaming. Video editing and 3D rendering programs can also tap into their GPUs to handle laborious tasks. While you can find decent GPUs on some productivity machines, like Dell's XPS 15, you can sometimes find better deals on gaming laptops. My general advice for any new workhorse: Pay attention to the specs; get at least 16GB of RAM and the largest solid state drive you can find (ideally 1TB or more). Those components are both typically hard to upgrade down the line, so it’s worth investing what you can up front to get the most out of your PC gaming experience long term. Also, don’t forget the basics like a webcam, which will likely be necessary for the schoolwork portion of your activities.

The one big downside to choosing a gaming notebook is portability. For the most part, we'd recommend 15-inch models to get the best balance of size and price. Those typically weigh in around 4.5 pounds, which is significantly more than a three-pound ultraportable. Today's gaming notebooks are still far lighter than older models, though, so at least you won't be lugging around a 10-pound brick. If you’re looking for something lighter, there are plenty of 14-inch options these days. And if you're not into LED lights and other gamer-centric bling, keep an eye out for more understated models that still feature essentials like a webcam (or make sure you know how to turn those lights off).

Do gaming laptops last longer than standard laptops?

Not necessarily — it really depends on how you define "last longer." In terms of raw performance, gaming laptops tend to pack more powerful components than standard laptops, which means they can stay relevant for longer when it comes to handling demanding software or modern games. That makes them a solid choice if you need a system that won’t feel outdated in a couple of years, especially for students or creators who also game in their downtime.

But there’s a trade-off. All that power generates heat, and gaming laptops often run hotter and put more strain on internal components than typical ultraportables. If they’re not properly cooled or regularly maintained (think dust buildup and thermal paste), that wear and tear can shorten their lifespan. They’re also usually bulkier and have shorter battery life, which can impact long-term usability depending on your daily needs.

Gaming laptops can last longer performance-wise, but only if you take good care of them. If your needs are light — browsing, writing papers and streaming — a standard laptop may actually last longer simply because it’s under less stress day-to-day.

What is the role of GPU in a computer for gaming and school?

The GPU plays a big role in how your laptop handles visuals — and it’s especially important if you’re using your computer for both gaming and school.

For gaming, the GPU is essential. It’s responsible for rendering graphics, textures, lighting and all the visual effects that make your favorite titles look smooth and realistic. A more powerful GPU means better frame rates, higher resolutions and the ability to play modern games without lag or stuttering.

For schoolwork, the GPU matters too — but its importance depends on what you're doing. If your school tasks mostly involve writing papers, browsing the web or using productivity tools like Google Docs or Microsoft Office, you don’t need a high-end GPU. But if you’re working with graphic design, video editing, 3D modeling or anything else that’s visually demanding, a good GPU can speed things up significantly and improve your workflow.

Georgie Peru contributed to this report.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/best-laptops-for-gaming-and-school-132207352.html?src=rss

HPE CEO Neri details Juniper acquisition payoff, accelerating the convergence of networking and AI

5 December 2025 at 02:54



Ten years after HPE split from HP to begin its independent journey, CEO Antonio Neri took the stage at the company’s flagship annual European event, held December 3 and 4 in Barcelona. There he laid out HPE’s roadmap, built around three technology pillars: networking, cloud, and artificial intelligence (AI).

Addressing an audience of some 6,000 at HPE Discover Barcelona 2025, Neri said, “I am very proud of what we have built together over the past decade,” adding, “I am even more excited about the changes ahead.”

The three-pillar strategy targets the core IT challenges enterprises face today. According to Neri, companies are still wrestling with legacy infrastructure, data sovereignty, steadily rising costs, and the computing demand driven up by the spread of AI.

Networking technology, greatly strengthened by the acquisition of Juniper Networks last July, was front and center at the Barcelona event.

Rami Rahim, Juniper’s former CEO and now head of HPE’s networking business, presented the first technical results of the integration: new AI-based operations capabilities built into the two companies’ network management platforms, and the first jointly developed hardware.

“There has never been a time when the network mattered more,” Rahim said. The goal is no longer simple connectivity but autonomous management, he explained: networks must move toward configuring, optimizing, and repairing themselves, and networks designed with AI and built for AI can cope with growing device counts, increasingly complex environments, and ever more sophisticated security threats.

“The common goal Rami and I share is to create a new leader in networking,” Neri said. Just five months after the acquisition closed, he noted, HPE is already shipping connectivity products that combine technology from former rival Juniper with the Aruba solutions acquired in 2015. “Soon you won’t even be able to tell which company is doing what,” he added. “The fact that we already support basic dual designs shows how quickly the two organizations are fusing into one, and how HPE’s innovation capability is being put to work.”

HPE’s Juniper acquisition: a long and complicated road

The $14 billion (roughly 20 trillion won) acquisition of Juniper was anything but a simple transaction. Announced in January 2024, the deal did not close until July 2025, and it proved particularly contentious in the US, where the Department of Justice (DOJ) sued to block it on the grounds that it would weaken competition in the network equipment market, especially wireless LAN (WLAN).

Asked by Computerworld, a Foundry publication, about the hurdles in the approval process and the criticism that lingers in the US, Neri noted that “outside the US, approvals were completed within the customary six months.” By the summer of 2024 only three countries were outstanding, and two of those approved within the following three months. In the US, he added, “there were the variables of an election and a change of administration, after which the process resumed.”

Reviewing the case, Neri said, “The DOJ judged that in the campus and branch market, particularly wireless, the field would shrink from three competitors to two.” The actual market is far larger, he argued: “In the US market alone, seven or eight vendors compete, including Cisco, Juniper, HPE, Cambium Networks, Ubiquiti, and Arista.” Strengths differ by industry, he noted, and the competitive picture differs again in the large-enterprise and public-sector markets. “Just look at the market-share figures you report and you can see the market is large and highly fragmented.”

In the end, Neri said, HPE and the DOJ went through “a constructive process that benefited both sides.” “This acquisition proved the market is one that fosters competition,” he said, stressing that even in the final review stage of a major US merger, not a single objection came from customers or competitors.

A focus on AI and cloud

In Barcelona, Neri highlighted the technical progress HPE has made in cloud and AI in recent months. He called AI “the quintessential hybrid workload,” explaining that the two technologies are inseparably linked.

He pointed to GreenLake, the hybrid cloud platform that began as a consumption-based model and now counts 46,000 customers worldwide, and said HPE plans to add AI capabilities such as GreenLake Intelligence, an autonomous agent-based framework announced in June that focuses on automating and simplifying IT operations in hybrid cloud environments. “The future of simplified IT operations has already arrived,” Neri said.

Neri also stressed that HPE’s air-gapped private cloud strategy carries particular weight in heavily regulated regions such as the EU, and in strategic sectors such as the military, where sensitive data is at stake.

He drew attention to another solution unveiled in Barcelona: the first integration of AMD’s Helios rack-scale AI architecture with Ethernet networking. Combining Juniper connectivity hardware and software with Broadcom Tomahawk 6 networking chips, the solution can “support training traffic for trillion-parameter models, high inference throughput, and ultra-large models,” he said. HPE’s services team delivers the configuration.

Neri also underlined HPE’s strong position in supercomputing, built largely on the 2019 acquisition of supercomputer specialist Cray. “HPE has built six of the world’s largest supercomputers and is the global leader in this field,” he said. While demand for AI is greater than ever, he added, not every company needs a supercomputer to handle it; what “every company needs is a secure AI stack.”

To meet that need, HPE offers HPE Private Cloud AI, an integrated infrastructure solution developed with Nvidia that accelerates the development and deployment of generative AI applications in private cloud environments. The solution “meets legal data requirements,” Neri said, while tackling the core obstacles to AI innovation: “time, cost, and risk.” He added in Barcelona that HPE has recently introduced high-performance networking solutions with Nvidia and AMD to speed up AI build-outs.

Organic growth plus M&A-driven expansion

In the outlook it gave with its fiscal third-quarter results last September, HPE projected revenue for fiscal 2025 (ending October 31) to grow 14% to 16% in constant currency. Fiscal 2024 revenue was $30.1 billion (roughly 44 trillion won), up 3.4% from 2023.

Under Neri’s leadership HPE has made 35 acquisitions in all, a point he raised himself at the Barcelona press conference, listing several major deals beyond the aforementioned Juniper Networks and Cray.

In 2020 the company acquired SD-WAN vendor Silver Peak; in 2021, data protection and disaster recovery firm Zerto. In 2023 it added Axis Security and OpsRamp in security and IT operations, and in 2024 it bought hybrid cloud management company Morpheus Data.

“We look for the right assets that complement our portfolio and let us scale in our target markets,” Neri said. “They have to make sense in terms of revenue and profit, and they also have to deliver value for shareholders.”
dl-ciokorea@foundryco.com


These could be the creepiest robots you’ve ever set eyes on

4 December 2025 at 23:15

Those of a nervous disposition might want to skip this article. It’s about some of the creepiest robots ever to have walked this Earth. Apart from this one, perhaps.  Part of a new art exhibition at Art Basel Miami, the robot dogs feature unnervingly lifelike copies of the heads of some of the biggest names in […]

The post These could be the creepiest robots you’ve ever set eyes on appeared first on Digital Trends.

AWS CEO Matt Garman thought Amazon needed a million developers — until AI changed his mind

4 December 2025 at 18:56
AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)

LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.

Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.

With the rise of AI, he no longer thinks that’s the case.

Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.

“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”

He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.

Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.

A few more highlights from Garman’s comments:

Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything. 

Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]

How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.

In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.

Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.” 

The better formula, he said, is to think from first principles about solving a customer problem, not simply to copy existing products.

Your Windows on ARM laptop could someday play real PC games

4 December 2025 at 15:52

Windows on ARM has always lacked serious graphics power, but that may finally be changing. A new Chinese-made discrete GPU has now been shown running on ARM-based Windows, hinting that proper PC gaming on ARM may no longer be a distant dream.

The post Your Windows on ARM laptop could someday play real PC games appeared first on Digital Trends.

Blue Yeti USB mic drops to $84.97 in early streaming gear deal

4 December 2025 at 13:10

If your audio still sounds like it’s coming from a laptop mic, this is your sign to step it up. The Logitech for Creators Blue Yeti USB Microphone is down to $84.97 on Amazon, a 39% cut from its usual $139.99 list price. For streamers, podcasters, and anyone who lives on Zoom or Discord, this […]

The post Blue Yeti USB mic drops to $84.97 in early streaming gear deal appeared first on Digital Trends.

RTX 5060 Ti price drop finally makes sense for budget gaming PCs

4 December 2025 at 12:30

When the RTX 5060 Ti first showed up, the performance was fine but the price wasn’t. You were paying close to upper midrange money for what was basically a very capable 1080p and entry-level 1440p GPU. At its original $469.99 list price, it was hard to recommend over slightly more expensive cards that delivered bigger […]

The post RTX 5060 Ti price drop finally makes sense for budget gaming PCs appeared first on Digital Trends.

US federal software reform bill aims to strengthen software management controls

4 December 2025 at 11:57

Software management struggles that have pained enterprises for decades cause the same anguish to government agencies, and a bill making its way through the US House of Representatives to strengthen controls around government software management holds lessons for enterprises too.

The Strengthening Agency Management and Oversight of Software Assets (SAMOSA) bill, H.R. 5457, received unanimous approval from a key US House of Representatives committee, the Committee on Oversight and Government Reform, on Tuesday.

SAMOSA is mostly focused on trying to fix “software asset management deficiencies” as well as requiring more “automation of software license management processes and incorporation of discovery tools,” issues that enterprises also have to deal with.

In addition, it requires anyone involved in software acquisition and development to be trained in the agency’s policies and, more usefully, in negotiation of contract terms, especially those that put restrictions on software deployment and use.

This training could also be quite useful for enterprise IT operations. It would teach “negotiating options” and specifically the “differences between acquiring commercial software products and services and acquiring or building custom software and determining the costs of different types of licenses and options for adjusting licenses to meet increasing or decreasing demand.”

The mandated training would also include tactics for measuring “actual software usage via analytics that can identify inefficiencies to assist in rationalizing software spending” along with methods to “support interoperable capabilities between software.”
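
As a hedged illustration of what that kind of usage analytics might look like, the sketch below takes purchased seat counts and per-seat last-use dates (all figures invented) and flags seats idle for more than 90 days as candidates for reclamation.

```python
# Invented license-utilization check: flag seats unused for 90+ days.
from datetime import date, timedelta

seats_purchased = {"diagram-tool": 500, "data-viz-suite": 200}
last_used = {  # one last-activity date per seat; fabricated sample data
    "diagram-tool": [date(2025, 11, 20)] * 120 + [date(2025, 3, 1)] * 380,
    "data-viz-suite": [date(2025, 12, 1)] * 190 + [date(2024, 9, 9)] * 10,
}
STALE_AFTER = timedelta(days=90)
today = date(2025, 12, 5)

for product, seats in seats_purchased.items():
    stale = sum(1 for d in last_used[product] if today - d > STALE_AFTER)
    print(f"{product}: {seats} seats, {stale} idle for 90+ days "
          f"({stale / seats:.0%} reclaimable)")
```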

Outlawing shadow IT

The bill also attempts to rein in shadow IT by “restricting the ability of a bureau, program, component, or operational entity within the agency to acquire, use, develop, or otherwise leverage any software entitlement without the approval of the Chief Information Officer of the agency.” But there are no details about how such a rule would be enforced.

It would require agencies “to provide an estimate of the costs to move toward more enterprise, open-source, or other licenses that do not restrict the use of software by the agency, and the projected cost savings, efficiency measures, and improvements to agency performance throughout the total software lifecycle.” But the hiccup is that benefits will only materialize if technology vendors change their ways, especially in terms of transparency.

However, analysts and consultants are skeptical that such changes are likely to happen.

CIOs could be punished

Yvette Schmitter, a former PricewaterhouseCoopers principal who is now CEO of IT consulting firm Fusion Collective, was especially pessimistic about what would happen if enterprises tried to follow the bill’s rules.

“If the bill were to become law, it would set enterprise CIOs up for failure,” she said. “The bill doubles down on the permission theater model, requiring CIO approval for every software acquisition while providing zero framework for the thousands of generative AI tools employees are already using without permission.”

She noted that although the bill mandates comprehensive assessments of “software paid for, in use, or deployed,” it neglects critical facets of today’s AI software landscape. “It never defines how you assess an AI agent that writes its own code, a foundation model trained on proprietary data, or an API that charges per token instead of per seat,” she said. “Instead of oversight, the bill would unlock chaos, potentially creating a compliance framework where CIOs could be punished for buying too many seats for a software tool, but face zero accountability for safely, properly, and ethically deploying AI systems.”

Schmitter added: “The bill is currently written for the 2015 IT landscape and assumes that our current AI systems come with instruction manuals and compliance frameworks, which they obviously do not.”

She also pointed out that the government seems to be working at cross-purposes. “The H.R. 5457 bill is absurd,” she said. “Congress is essentially mandating 18-month software license inventories while the White House is simultaneously launching the Genesis Mission executive order for AI that will spin up foundation models across federal agencies in the next nine months. Both of these moves are treating software as a cost center and AI as a strategic weapon, without recognizing that AI systems are software.”

Scott Bickley, advisory fellow at Info-Tech Research Group, was also unimpressed with the bill. “It is a sad, sad day when the US Federal government requires a literal Act of Congress to mandate the Software Asset Management (SAM) behaviors that should be in place across every agency already,” Bickley said. “One can go review the [Office of Inspector General] reports for various government agencies, and it is clear to see that the bureaucracy has stifled all attempts, assuming there were attempts, at reining in the beast of software sprawl that exists today.”

Right goal, but toothless

Bickley said that the US government is in dire need of better software management, but that this bill, even if it was eventually signed into law, would be unlikely to deliver any meaningful reforms. 

“This also presumes the federal government actually negotiates good deals for its software. It unequivocally does not. Never has there been a larger customer that gets worse pricing and commercial terms than the [US] federal government,” Bickley said. “At best, in the short term, this bill will further enrich consultants, as the people running IT for these agencies do not have the expertise, tooling, or knowledge of software/subscription licensing and IP to make headway on their own.”

On the bright side, Bickley said the goal of the bill is the right one, but the fact that the legislation didn’t deliver or even call for more funding makes it toothless. “The bill is noble in its intent. But the fact that it requires a host of mandatory reporting, [Government Accountability Office] oversight, and actions related to inventory and overall [software bill of materials] rationalization with no new budget authorization is a pipe dream at best,” he said. 

Sanchit Vir Gogia, the chief analyst at Greyhound Research, was more optimistic, saying that the bill would change the law in a way that should have happened long ago.

“[It] corrects a long-standing oversight in federal technology management. Agencies are currently spending close to $33 billion every year on software. Yet most lack a basic understanding of what software they own, what is being used, or where overlap exists. This confusion has been confirmed by the Government Accountability Office, which reported that nine of the largest agencies cannot identify their most-used or highest-cost software,” Gogia said. “Audit reports from NASA and the Environmental Protection Agency found millions of dollars wasted on licenses that were never activated or tracked. This legislation is designed to stop such inefficiencies by requiring agencies to catalogue their software, review all contracts, and build plans to eliminate unused or duplicate tools.”

Lacks operational realism

Gogia also argued that “the added pressure of transparency may also lead software providers to rethink their pricing and make it easier for agencies to adjust contracts in response to actual usage.” If that happens, it would likely trickle into greater transparency for enterprise IT operations.

Zahra Timsah, co-founder and CEO of i-GENTIC AI, applauded the intent of the bill while raising logistical concerns about whether much would change even if it ultimately became law.

“The language finally forces agencies to quantify waste and technical fragmentation instead of talking about it in generalities. The section restricting bureaus from buying software without CIO approval is also a smart, direct hit on shadow IT. What’s missing is operational realism,” Timsah said. “The bill gives agencies a huge mandate with no funding, no capacity planning, and no clear methodology. You can’t ask for full-stack interoperability scoring and lifecycle TCO analysis without giving CIOs the tools or budget to produce it. My concern is that agencies default to oversized consulting reports that check the box without actually changing anything.”

Timsah said that the bill “is going to be very difficult to implement and to measure. How do you measure [whether] it is being followed?” She added that agencies will parrot the bill’s wording and then try to hire people to manage the process. “It’s just going to be for optics’ sake.”

LG’s 34-Inch 240Hz Ultrawide Gaming Monitor drops to $359.99 on Amazon

4 December 2025 at 11:44

Ultrawide monitors are one of the easiest upgrades you can make if you want games to feel more immersive and your desktop to feel less cramped. The LG 34G630A-B UltraGear 34-inch curved gaming monitor hits that sweet spot, and it is currently on sale for $359.99, down from $499.99 on Amazon, a 28% discount on […]

The post LG’s 34-Inch 240Hz Ultrawide Gaming Monitor drops to $359.99 on Amazon appeared first on Digital Trends.

The Database Powering America’s Hospitals May Not Be What You Expect

3 December 2025 at 22:00

Ever heard of MUMPS? Both programming language and database, it was developed in the 1960s for the Massachusetts General Hospital. The goal was to streamline the increasingly enormous timesink that information and records management had become, a problem that was certain to grow unless something was done. Far from being some historical footnote, MUMPS (Massachusetts General Hospital Utility Multi-Programming System) grew to be used by a wide variety of healthcare facilities and still runs today. If you’ve never heard of it, you’re in luck because [Asianometry] has a documentary video that’ll tell you everything.

MUMPS had rough beginnings but ultimately found widespread support and use that continues to this day. As a programming language, MUMPS (also known simply as “M”) has the unusual feature of very tight integration with the database end of things. That makes sense in light of the fact that it was created to streamline the gathering, processing, and updating of medical data in a busy, multi-user healthcare environment that churned along twenty-four hours per day.
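
For readers who have never seen it, the heart of that integration is the MUMPS “global”: a hierarchical, sparse, persistent array addressed by subscripts, where a statement like SET ^PATIENT(123,"NAME")="DOE,JOHN" writes straight to durable storage with no separate database call. A loose Python analogy, with invented names and the standard library’s shelve module standing in for the built-in storage engine, might look like this:

```python
# Rough analogy for MUMPS globals: persistent sparse arrays keyed by
# subscript tuples. Class and file names are invented for illustration.
import shelve

class GlobalStore:
    def __init__(self, path="globals.db"):
        self._db = shelve.open(path)  # disk-backed key/value store

    def set(self, *args):
        *subscripts, value = args
        self._db["|".join(map(str, subscripts))] = value  # persisted via the dbm backend

    def get(self, *subscripts):
        return self._db.get("|".join(map(str, subscripts)))

store = GlobalStore()
store.set("PATIENT", 123, "NAME", "DOE,JOHN")  # cf. SET ^PATIENT(123,"NAME")="DOE,JOHN"
store.set("PATIENT", 123, "DOB", "1960-01-31")
print(store.get("PATIENT", 123, "NAME"))       # cf. WRITE ^PATIENT(123,"NAME")
```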

It may show its age (the term “archaic,” among others, gets used when it’s brought up) but it is extremely good at what it does and has a proven track record in the health care industry. This, combined with the fact that efforts to move to newer electronic record systems always seem to find the job harder than expected, has helped keep it relevant. Have you ever used MUMPS? Let us know in the comments!

And hey, if vintage programming languages just aren’t unusual enough for you, we have some truly strange ones for you to check out.

Building resilience for AI workloads in the cloud

3 December 2025 at 17:56

In 2025, more than 75% of organizations have reported using AI in at least one business function, according to McKinsey’s latest Global Survey on AI.

AI has moved from pilots to production and now powers decisions, customer experiences, and compliance processes, raising the stakes for resilience. Outages, data corruption, or misconfigured agents can interrupt critical workflows, erode customer trust, and trigger regulatory scrutiny. Cloud platforms have become the backbone for AI workloads, offering elasticity and scale, yet many resilience programs were designed for older compute patterns.

But as AI adoption accelerates, cloud environments have evolved from simple compute and storage layers to sprawling ecosystems of data pipelines, model registries, orchestration tools, and agentic processes. The complexity demands resilience strategies that go beyond traditional recovery, ensuring rapid restoration of operations.

Why AI changes the resilience equation

AI amplifies the challenge of resilience. Data and infrastructure sprawl across hybrid and multi-cloud estates creates intricate dependency chains. Models evolve continuously, and autonomous agents can trigger unintended changes that ripple through systems. Traditional backup cannot guarantee a safe recovery point for these dynamic interactions.

Resilience begins with clear segmentation of environments, robust identity controls, and immutable copies of critical data. Observability must extend beyond virtual machines to include pipelines, model endpoints, and orchestration layers. Recovery should be validated in isolated environments to prevent hidden contamination from re-entering production. Automation is essential to reduce recovery time and ensure consistency across regions and providers. What organizations need is resilience that combines immutable backups, automated lineage tracking, and clean rollback to ensure that recovery is fast, accurate, and trusted.

A recent example: an AI coding assistant went rogue and wiped out the production database of SaaStr, a startup, during a code freeze. The AI not only deleted critical data but also generated fake users and fabricated reports, making it difficult to identify a clean recovery point. The incident underscores how autonomous AI actions can cause cascading failures and why organizations need advanced resilience strategies.

Cognizant and Rubrik: A partnership for AI resilience

Cognizant and Rubrik deliver Business Resilience-as-a-Service (BRaaS), an offering for organizations scaling AI in the cloud. BRaaS leverages Cognizant’s global delivery capabilities and cloud infrastructure expertise alongside Rubrik’s advanced cyber resilience platform. Together, they give AI workloads resilience controls that cover the full lifecycle.

Rubrik Agent Cloud is designed to monitor and audit agentic actions, enforce real-time guardrails for agentic changes, fine-tune agents for accuracy, and undo agent mistakes. Built on the Rubrik Platform that uniquely combines data, identity, and application contexts, Rubrik Agent Cloud gives customers security, accuracy, and efficiency as they transform their organizations into AI enterprises.

Comprehensive controls over data, orchestration, and recovery can further an organization’s confidence in AI. Cognizant’s Neuro® AI platform features multi-agent orchestration with embedded policy guardrails operating across protected data estates.

Together, these capabilities support safe experimentation while shielding core business operations from risk. Cognizant and Rubrik aim to protect the foundation for the agentic AI era, where trusted data and rapid recovery are essential — helping organizations gain the confidence to innovate with AI, knowing they can quickly and safely undo any destructive agent actions and maintain business resilience.

Practical guidance for enterprise teams

Leaders can strengthen AI resilience with eight practical steps:

  1. Inventory AI services and dependencies across models, pipelines, data sources, vector stores, orchestration tools, and consuming applications.
  2. Tier AI workloads and set recovery time and point objectives that match customer and regulatory expectations. Include model registries, feature stores, and prompt libraries in scope (a tiering sketch follows this list).
  3. Protect trusted data with immutable storage and frequent, policy-driven snapshots. Guard gold datasets and production feature stores as crown jewels.
  4. Validate recovery in isolation using clean rooms that mirror production scale. Confirm that models, data, and configurations work together before go-live.
  5. Automate recovery workflows and integrate with incident response, service management, monitoring, and identity systems for coordinated action.
  6. Harden identity and access with zero trust principles, short-lived credentials, and strong separation of duties for AI platform operations.
  7. Run end-to-end exercises that include technology, security, data, and business owners. Rehearse cutover, rollback, and communications. Close gaps with time-bound plans.
  8. Track a resilience scorecard for AI, including detection speed, isolation time, recovery performance by tier, validation frequency, and control drift.
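
As an entirely invented illustration of the tiering in step 2, this sketch tags each AI asset with a tier, derives recovery time/point objectives (RTO/RPO) from the tier, and flags assets whose last validated clean-room restore missed the target:

```python
# Hypothetical tiering of AI assets with RTO checks; all values invented.
from dataclasses import dataclass

OBJECTIVES = {  # tier -> (RTO minutes, RPO minutes), illustrative only
    "mission_critical": (60, 15),
    "important": (240, 60),
    "standard": (1440, 240),
}

@dataclass
class AIAsset:
    name: str
    kind: str                  # model registry, feature store, prompt library...
    tier: str
    last_restore_minutes: int  # measured in the latest clean-room exercise

    def meets_rto(self) -> bool:
        rto_minutes, _rpo = OBJECTIVES[self.tier]
        return self.last_restore_minutes <= rto_minutes

inventory = [
    AIAsset("prod-feature-store", "feature store", "mission_critical", 45),
    AIAsset("prompt-library", "prompt library", "important", 300),
]
for asset in inventory:
    print(f"{asset.name}: {'OK' if asset.meets_rto() else 'MISSES RTO'}")
```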

By following these steps, organizations move beyond reactive recovery to embed resilience into AI operations. Proactive planning, rigorous validation, and continuous measurement ensure that innovation does not come at the expense of stability or trust. With the right safeguards in place, enterprises can scale AI confidently, knowing they are prepared to withstand disruptions and protect both business value and customer trust.

Leadership driven by insights and outcomes

Resilience is about continuity of outcomes, not only restoration of systems. When AI services remain trustworthy during a disruption, customers stay served, regulators see control, and teams can resume work without guesswork. Predictable recovery also builds confidence to scale AI programs. Leaders can allocate budgets more efficiently when recovery targets and costs are clear. Measurable progress shows up as faster mean time to recover and fewer failed cutovers.

Conclusion: Innovate with confidence

AI adoption will continue to accelerate. Organizations that embed resilience into cloud architecture and operating models will move fast and with fewer surprises. Cognizant and Rubrik provide the platform, delivery scale, and service model to make that shift attainable. The goal is simple: keep data trusted, restore services cleanly, and validate outcomes before going live. With this foundation, AI becomes a growth engine that leaders can scale with confidence.

Take the next step towards resilient AI innovation. Contact Cognizant to assess your current posture, explore tailored Rubrik solutions, and discover how to safely scale your AI initiatives on a foundation of resilience and trust. To schedule your resilience assessment, get in touch at BusinessResilience@cognizant.com.

About Sriramkumar Kumaresan

Sriram Kumaresan leads the Global Cloud, Infrastructure and Security practice at Cognizant, overseeing approximately 35,000 professionals. With over 25 years of experience, he excels in building and scaling businesses from strategy to execution. Sriram is responsible for driving market share (strategy, GTM and growth) and mindshare (offering, partner strategy and market positioning) through strategic approaches, customer centricity and the deep technical expertise in Cognizant’s Cloud, Infrastructure and Security business. Beyond his professional achievements, he is also a mentor and advocate for diversity in tech, aiming to inspire future IT leaders.
