
I’ve Got a Lot of Problems With You People

S. Schuchart

Summary Bullets:

  • The technology industry can do better.
  • Let’s just hope that the uneasy feeling about the AI bubble everyone is experiencing is just a bit of leftover holiday undigested beef, a blot of mustard, a crumb of cheese, or a fragment of an underdone potato (with apologies to Dickens).

Festivus took place on December 23, 2025, but despite being late, there are grievances to air in regard to the technology industry as it relates to enterprises in 2025. So, let’s start. “I’ve got a lot of problems with you people!”

Artificial Intelligence
Let’s start with the biggest and likely the most diaphanous super-elephant in the room, the current AI boom/craze/bubble. Everyone in the technology industry has been affected by AI. Talk about AI is ubiquitous. Every technology product or service launch prominently mentions AI. Of course, the talk is two-fold. First, it’s about how much money everyone is going to make with AI. Second is the talk about how much money everyone is spending on AI, including all the letters of intent, acquisitions, stock trades, and projected spending.

Any dissent is quickly quashed – AI is the future. AI is all. AI will make everyone so much money. How? Well, of course, there is talk about AI coding, automation, agentic AI, and reducing staff headcount with AI. That last bit isn’t often talked about on the technology side – but it sure is in boardrooms and on Wall Street. The problem with these use cases is that nobody can seem to point to success, outside of a few highly orchestrated pilots. The grievance is that the enterprise technology industry isn’t being fair to its customers when it comes to AI. There is SO MUCH investment money involved in AI, and so many promises that technology vendors and service providers have a near-fatal case of FOMO, the fear of missing out.

It’s hard to see a path to profitability for AI companies, considering the amount they need to spend in order to make large language model (LLM) AI work. It’s equally hard to see a path for enterprises to the rich gains or savings they were promised – these use cases are either not working out or were specious in the first place.

Is the AI boom a bubble? I’m a technology guy, not a finance guy. But even as my hands itch for a soldering iron rather than the complexities of finance, it seems pretty clear that there is an alarming amount of money being poured into AI companies. These companies are not profitable, nor does there seem to be a path to profitability, considering the staggering amounts of money invested. It is more than a little reminiscent of the dot-com bubble and the ensuing financial downturn. Oh, and to those who dismiss the dot-com bust with “well, it all worked out”? Clearly, you didn’t live through it or were lucky enough to be insulated from it, because there was real harm done. Businesses died, jobs were lost, careers were ruined, and everybody suffered from the economic recession the dot-com bust created. Let’s just hope that the uneasy feeling about the AI bubble everyone is experiencing is just a bit of leftover holiday undigested beef, a blot of mustard, a crumb of cheese, or a fragment of an underdone potato (with apologies to Dickens).

Customer-First
Every technology vendor wants to tell customers that they are the vendor’s first priority. Well, there is plenty of grievance to air on this point. More and more, the technology industry is valuing its financial gain over what is right for its customers. Subscription services are now the norm, and the party benefiting from them is rarely the customer. Mega mergers and acquisitions result in widespread reorientation of the acquired firm toward just its most profitable customers. New terms and conditions, increased prices, forced bundling sold as added value, long-standing product lines ended, longer support queues, and disruption for the customer. Sure, there are good use cases for subscription services and for acquisitions, but the needs of the customer are getting lost more and more.

This extends to changes to foundational software and services. New releases with radically different user interfaces that don’t offer a better workflow than the old ones, but must now be learned. Unasked-for features, including AI features, that cannot be turned off. Products unnecessarily forced to constantly connect across the internet to the vendor, with no way to shut that off. Critical management tools moved to the cloud without giving enterprises a way to self-host. Forced arbitration agreements that more often than not benefit only the vendor or service provider.

The technology industry can do better. It has in the past, and the technology industry needs to move back toward a system in which power is more equal and both parties have the goal of providing each other with a square deal.

This Industry Analyst
Of course, I’m not going to leave myself out of the grievances. There are more use cases for technology than anyone can keep in their head, and I’m no exception. I need to be more accepting and expansive about possible use cases and less quick to focus on the negative side. I need to help our customers more by refreshing the love of technology that got me here in the first place and letting that enthusiasm lend wings to my writing and humor to my demeanor. I need to ask more questions, probe deeper when speaking with enterprises, vendors, and service providers. I need to let the worry wane so I can see the sun again and regain the wonder I used to have for technology.

I wish you all a safe, happy, healthy, and prosperous 2026.

Slow Your Roll on AI

S. Schuchart

AI has been the rage for at least three years now, first just generative AI (GenAI), and now agentic AI. AI can be pretty useful; at GlobalData we’ve done some very cool things with AI on our site. Strategic things that serve a defined purpose and add value. The use of AI at GlobalData hasn’t been indiscriminate – it has been thought through in terms of how it could help our customers and ourselves. Even this skeptical author can appreciate what’s been done.

But a lot of what is happening out there with AI is indiscriminate and doesn’t attack problems in a prescriptive way. Instead, it is sold as a panacea. A cure for all business and IT ills. The claims are always huge but strangely lacking in detail. This is particularly true for agentic AI, where the industry only in the last month managed to get the Model Context Protocol (MCP) into the Linux Foundation as a standard. The security issues of agentic AI are still largely unaddressed, and certainly not addressed in any standardized fashion. It’s not that agentic AI is a bad idea – it’s not. But the way it’s being sold has a tinge of irrational hysteria.

Sometimes when a vendor introduces a new capability that proudly uses agentic AI, it’s not clear why that capability being ‘agentic’ makes any difference compared to plain AI. New AI features are appearing everywhere, with vendors jamming AI into every nook and cranny, ignoring the privacy issues, and making it next to impossible to avoid or turn off. The worst part is that these AI features are often half-baked ideas implemented too quickly or, even worse, written by AI itself, with all of the security and code-bloat issues that ensue.

The prevailing wind – no, scratch that, the hurricane-force gale – in the IT industry is that AI is everything, AI must be everywhere, and *any* AI is good AI. Any product, service, solution, or announcement must spend at least half of its content on how this is AI and how AI is good.

AI *can* be a wonderful thing. But serious enterprise IT administrators, coders, and engineers know a few things:

1. In a new market like AI, not every company selling AI will continue to sell AI. There will be consolidation, especially in an overhyped trend. Vendors and products will disappear.
2. Version 1.0 hardly ever lives up to its billing. Even Windows wasn’t really minimally viable until 3.1.
3. Aligning IT/business value received vs. costs to implement/continue is a core component of the job.
4. The bigger the hype, the bigger the backlash.
5. The bigger the hype, the bigger the fear of missing out (FOMO) amongst senior management.
6. The problems are in the details, not in the overall concept.

So let’s all slow our roll when it comes to AI. More focus on what matters: what can *demonstrably* provide value vs. what is claimed will provide value. Implementation costs as well as one-year, three-year, and five-year costs. Risk assessment from a data privacy, cybersecurity, and regulation standpoint. In short, a little bit more due diligence and a lot less FOMO. AI is going to happen; that’s not the issue. The issue is for enterprises to implement AI where it will help, rather than viewing it as a panacea for all problems.

Take a Hard Pass on AI Browsers and AI Extensions for Browsers

S. Schuchart

Summary Bullets:

• Don’t use AI browsers or AI browser extensions – the loss of privacy isn’t worth the functionality.

• AI companies mean well, but the privacy implications make these products unsuitable for enterprise or personal use.

“If you are not paying for it, you’re not the customer; you’re the product being sold.” – Andrew Lewis (blue_beetle), MetaFilter comment (2010)

It’s not news that AI is being talked about everywhere. It’s also not news that the websites and applications you use regularly are doing their level best to spy on you or obtain data that can be used internally or be sold to advertisers. Nor is it news that the state of privacy laws across the world is pretty poor, despite the EU giving its best attempt and the US pretending that three lines of legalese in a 15-page disclaimer somehow magically sets the ‘informed’ flag on users.

But the latest trend involves AI companies either creating browser extensions or, in at least one case, creating their own browser. OpenAI is touting its AI-enabled browser, Atlas, designed to remember all activity, search that activity, chat, and do any number of AI-enhanced things. OpenAI rival Perplexity has a browser product called Comet. There are even sidebar browser extensions for Microsoft Copilot and Google Gemini. Some browsers, such as Firefox and Brave, come with an AI sidebar but use your choice of LLM.

The first problem is an AI watching everything – your passwords, all text you type, your URLs… everything. Then that data isn’t stored locally; it’s stored with the AI company. The problems here are no different from the problems with Microsoft Recall, an AI-driven search and backup feature that Microsoft released earlier in 2025, much to the consternation of pretty much everyone. To be fair, all these AI companies have multiple safeguards to protect data, have stated policies on how such data can be used and where, and are being pretty upfront about how and when they use your data. They allow end users to pick and choose when the AI is available, or even to forget that data after a session. Companies adding these AI features to the browser are legitimately trying to make users’ lives easier with AI and to protect user privacy.

They are adding other safeguards as well. OpenAI says that its Atlas AI browser cannot access other applications, download files, or install extensions. Technological steps to prevent AI browsers and extensions from becoming security risks are being taken.

But giving any corporation a detailed record of all activities conducted on the internet, including every click, search, text, or picture and the metadata around it could have disastrous consequences in the long term. Hackers could gain access to the data. Governments could seize the data and use it against a populace or an individual. Companies get bought, end user agreements change, or investors could simply demand that all that personal data is monetized. If companies go out of business, what happens to the data? A fair amount of the world doesn’t have any legal mechanism to force businesses to delete data either.

Then there are the other issues, regarding security on your desktop. Social engineering or AI chat window spoofing is a real issue. That’s just the tip of the iceberg.

Every individual and every enterprise has to decide whether the risks are worth the utility of having AI integrated into the browser. Everyone wants tools that work better; some of the features in AI browsers are impressive, and likely even more features are coming. But that shouldn’t come at the expense of risking all your personal data or the company’s internal data, no matter how nice the tools look or how much you trust a given AI vendor. This is about ensuring personal privacy and the data security of enterprises. Take a pass on AI browsers and AI browser extensions. Nobody would stand for being under video and audio surveillance every second and everywhere. Don’t allow the same to happen to your digital life.

Security Falls on Deaf Ears

S. Schuchart

Jaguar Land Rover, the iconic British car manufacturer, has had virtually no production in its plants since the end of August 2025. A devastating cyberattack shut the company down – details on how the attack happened, who initiated it, and why it so thoroughly shut down Jaguar Land Rover have not been released to date. The postmortem will be an interesting read, more so to find out how much of the impact of this cyberattack was Jaguar Land Rover’s own fault. No, this isn’t indulgent victim-blaming, and right now there is no proof that Jaguar Land Rover was anything but diligent. But the length of the shutdown and the secrecy do arouse suspicion. Under principles of good business continuity and disaster recovery, Jaguar Land Rover should have been at least somewhat back in production by now. But analysis will really have to wait until details emerge.

This does highlight an issue that most organizations struggle with. Cybersecurity, as well as disaster recovery and business continuity, are preventative – they shouldn’t be noticed unless they are needed… or if they didn’t work. It’s hard to get satisfaction creating business continuity/disaster recovery (BC/DR) systems that you may never get to actually use. Security has a much higher profile… but ‘everything is running smoothly’ doesn’t often gain accolades.

Cybersecurity, and especially BC/DR, are often pressured to compromise – for finance, for convenience, and because neither function will ever make money for the organization. Often there is a push to compare cybersecurity and BC/DR to an automotive or homeowner’s insurance policy: they offer peace of mind. There is a better way to think about it. Think of cybersecurity and BC/DR the way law enforcement thinks about bomb squad units. Bomb squad units get all the training and practice they want. Bomb squad units are encouraged to get the latest training, learn the latest advances, and keep their equipment as up to date as possible. Nobody thinks that the bomb squad has it easy when it renders an explosive safe, or when, in the best of times, it is not called on at all. Nobody suggests that the bomb squad do more with less, because the consequences are so extreme – both for the bomb squad and for the law enforcement organization.

Budget holders need to start viewing cybersecurity, BC/DR, and BC/DR testing like the bomb squad. Yes, they provide peace of mind. But what they really provide is protection from extreme consequences. Nobody wants the organization in every major news outlet for having been knocked offline for a month. Nobody wants to have to create the postmortem and present it to the board and, likely, various government officials, insurance executives, investor representatives, and lawyers. Let’s not let this plea to take cybersecurity and BC/DR seriously fall on deaf ears like it has in the past.

Cisco Quantum – Simply Network All the Quantum Computers

S. Schuchart

Cisco’s Quantum Labs research team, part of Outshift by Cisco, has announced that it has completed a prototype of a complete software solution. The latest piece is the Cisco Quantum Compiler prototype, designed for distributed quantum computing across networked processors. In short, it allows a network of quantum computers, of all types, to participate in solving a single problem. Even better, this new compiler supports distributed quantum error correction. Instead of a single quantum computer needing a huge number of qubits itself, the load can be spread out among multiple quantum computers. This coordination is handled across a quantum network, powered by Cisco’s Quantum Network entanglement chip, which was announced in May 2025. This network could also be used to secure communications for traditional servers as well.

For some quick background – one of the factors holding quantum computers back is the lack of quantity and quality when it comes to qubits. Most of the amazing things quantum computers can in theory do require thousands or millions of qubits. Today we have systems with around a thousand qubits. But those qubits need to be quality qubits. Qubits are extremely susceptible to outside interference. Qubits need to be available in quantity as well as quality. To fix the quality problem, there has been a considerable amount of work performed on error correction for qubits. But again, most quantum error correction routines require even more qubits to create logical ‘stable’ qubits. Research has been ongoing across the industry – everyone is looking for a way to create large amounts of stable qubits.

What Cisco is proposing is that instead of making a single quantum processor bigger to get more qubits, multiple quantum processors can be strung together with its quantum networking technology, with the quality of the transmitted qubits ensured by distributed error correction. It’s an intriguing idea – as Cisco more or less points out, we didn’t achieve scale with traditional computing by simply making a single CPU bigger and bigger until it could handle all tasks. Instead, multiple CPUs were integrated in a server, and then those servers were networked together to share the load. That makes good sense, and it’s an interesting approach. Just like traditional CPUs, quantum processors will not suddenly stop growing – but if this works, it will allow quantum capacity to scale out across smaller processors, possibly ushering in useful, practical quantum computing sooner.

Is this the breakthrough needed to bring about the quantum computing revolution? At this point it’s a prototype – not an extensively tested method. Quantum computing requires so much fundamental physics research and is so complicated that it’s extremely hard to say whether what Cisco is suggesting can usher in that new quantum age. But it is extremely interesting, and it will certainly be worth watching this approach as Cisco ramps up its efforts in quantum technologies.

HPE’s Concessions Made to US DoJ to Acquire Juniper Will Have an Uncertain Impact

S. Schuchart

Summary bullets:

• The long-awaited merger is nearly here

• The impact of these concessions will play out over time

The long, drawn-out saga of HPE’s quest to buy Juniper has reached another milestone. HPE and the United States Department of Justice (DoJ) have reached a deal that, pending judicial approval, will allow the transaction to complete. However, there are a couple of concessions on HPE’s part. First, it must divest its HPE Aruba Instant On business within 180 days to satisfy the DoJ’s worries about the combined companies’ Wi-Fi market share. Second, HPE must auction off a perpetual, non-exclusive license to the source code for AI Ops for Mist.

The impact on HPE centers on the Instant On business. HPE’s networking division, in all its many permutations dating all the way back to the original ProCurve networking products, has had a strong presence in the SMB/SME market. Loss of the Instant On business will be a blow to any SMB/SME ambitions HPE may have. However, this comes in exchange for access to the data center networking market, a much more mature AI on which to base its networking and other products, and access to the security and telco markets that the acquisition of Juniper will facilitate.

When it comes to having to license the AI Ops for Mist product, the potential competitive issues for HPE really depend on who wins the auction. A direct networking competitor would be the worst result for HPE. But if AI Ops for Mist is bought by more of a generalized ops-focused vendor, it would be easier for HPE to compete. HPE already has its OpsRamp solution, and folding the Mist AI into it should be technically doable if it becomes a competitive issue. The big place for AI Ops for Mist is of course in the networking division, where HPE can claim that Mist AI is more mature than the AI offered by other networking competitors.

One last thing on the subject – it is food for thought to ponder where the HPE Aruba Instant On solution may land. With security and networking becoming so interconnected to meet the enterprise goals of security, simplicity, and operational efficiency, there very well may be a security company out there that would like to start its networking journey, and Instant On would be a good place to start in the SMB/SME market. Companies like Palo Alto Networks come to mind, as does a networking vendor that wants to crack the SMB/SME market, like Arista. All of that, including how much it will cost to buy Instant On, is a matter of speculation. Over the next few months there will no doubt be more news on HPE’s latest networking acquisition and the fallout from the concessions made to the DoJ.

We Are Becoming Numb to Cybersecurity Breaches

S. Schuchart

Summary Bullets:

• Password managers do tend to make logging in easier – but it’s a change that people must get used to…

• To really embrace cybersecurity, there needs to be a reckoning to correct old thinking and ideas.

Sixteen (16) billion. That’s a number that isn’t really comprehensible. It’s a number you hear on the news, usually in a science segment or in a finance segment about the ultra-wealthy. But this time, 16 billion is the number of exposed login credentials that researchers from Cybernews found in an exposed dataset. This dataset contains stolen login credentials, mostly gained via malware. The credentials come from everywhere – from websites around the world, including popular websites and cloud services.

What is known is that the dataset was visible for a short time before being taken down. We know that some or all of the data in the dataset is not new but comes from earlier breaches and infostealers. We do not know where the data was being held or exposed from. The data wasn’t stolen in any single site breach, but is likely a compilation of earlier stolen credentials. Initial reports seemed to indicate that much of the discovery was net-new, but that has since been disputed. Still, that many credentials in one spot is a worry.

What was interesting about this information was essentially the lack of reaction from the public. Sure, skepticism of the discovery happened quickly – many security experts feel that this was a bit of a case of crying wolf. But the initial reaction by the public was more of a shrug. After all, how many times can a person’s login credentials get stolen? How many times should an individual go through the cumbersome process of updating passwords? Especially when it seems like there are more breaches every day. Keeping one’s credentials up to date after breaches begins to look like a Sisyphean task.

Cybersecurity fatigue is real, and the public is becoming increasingly numb to cybersecurity incidents. Reminders to update passkeys, use password managers, not reuse passwords, and enable multi-factor authentication are a constant drumbeat. With every hysteria-filled announcement of another breach that spills user data and login credentials, more people tune it out entirely – after all, *they* have never been hit.

The ugly truth: good cybersecurity is difficult, even when just talking about logins and passwords. Passwords should be long – 20-30 characters – randomly generated, and contain upper- and lower-case letters, numbers, and symbols. Each site should have its own password. People resist that – it’s extremely difficult to remember a password like that, and it’s much easier to simply have a single password to use everywhere. A password manager is required to generate and store these passwords, as well as enter them when it comes time to log in. That password manager needs to work across platforms – e.g., Apple (phones, tablets, Macs), PC, Android, and Linux.
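For readers who want to see what "randomly generated" means in practice, here is a minimal sketch using Python's standard `secrets` module; the 24-character default and the rejection-loop approach are illustrative choices, not a prescribed standard (a good password manager does this for you):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random password containing upper- and lower-case
    letters, digits, and symbols, per the guidance above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Regenerate until all four character classes are present.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # a different 24-character password every run
```

Note the use of `secrets` rather than `random`: the former draws from the operating system's cryptographically secure source, which is what password generation requires.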

But a password manager is yet another thing – one that requires its own password. To make it worse, the very public breach of LastPass, a popular password manager, makes people distrust password managers, especially those with a cloud component. There is also the learning barrier – using a password manager requires effort and changes how you log in. Password managers do tend to make logging in easier – but it’s a change that people must get used to, and people hate change to daily routines like logging in. Changing habits is hard, and not being able to just instantly enter a memorized password feels frustrating at first.

To really embrace cybersecurity, there needs to be a reckoning to correct old thinking and ideas. Let’s take a look:

• Password managers are not hard or scary – they are designed for ease of use, and there are tons of tutorials.

• Your personal password generation is vulnerable, no matter how clever the scheme you created is. Brute force techniques are far better than you imagine. And no, the word ‘password’ backwards isn’t clever.

• Password re-use is a vulnerability, no matter how easy it makes things.

• The fact that a person has never been hacked or doesn’t know anyone who has been isn’t a reason to keep old practices.

• This isn’t about having perfect security. It’s about protecting yourself and limiting damage if a breach occurs. Just like locking your doors and putting your blinds down at night.

Take the plunge yourself, get a password manager, then show a friend that it isn’t that hard – and, in the end, never forgetting a password is a time-saver too! Proactive action with a password manager and good password hygiene is important, and we cannot let the slew of high-profile breaches numb us into neglecting the quality of our own cybersecurity regimen.
