On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.
eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.
At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.
The Defense Logistics Agency — an organization responsible for supplying everything from spare parts to food and fuel — is turning to artificial intelligence and machine learning to fix a long-standing problem of predicting what the military needs on its shelves.
While demand planning accuracy currently hovers around 60%, DLA officials aim to push that baseline figure to 85% with the help of AI and ML tools. Improved forecasting will ensure the services have access to the right items exactly when they need them.
"We are about 60% accurate on what the services ask us to buy and what we actually have on the shelf. Part of that, then, is we are either overbuying in some capacity or we are underbuying. That doesn't help the readiness of our systems," Maj. Gen. David Sanford, DLA director of logistics operations, said during the AFCEA NOVA Army IT Day event on Jan. 15.
Rather than relying mostly on historical purchase data, the models ingest a wide range of data that DLA has not previously used in forecasting. That includes supply consumption and maintenance data, operational data gleaned from wargames and exercises, as well as data that impacts storage locations, such as weather.
The models are tied to each weapon system and DLA evaluates and adjusts the models on a continuing basis as they learn.
"We are using AI and ML to ingest data that we have just never looked at before. That's now feeding our planning models. We are building individual models, we are letting them learn, and then those will be our forecasting models as we go forward," Sanford said.
Some early results already show measurable improvements. Forecasting accuracy for the Armyโs Bradley Infantry Fighting Vehicle, for example, has improved by about 12% over the last four months, a senior DLA official told Federal News Network.
The agency has made the most progress working with the Army and the Air Force and is addressing "some final data-interoperability issues" with the Navy. Work with the Marine Corps is also underway.
"The Army has done a really nice job of ingesting a lot of their sustainment data into a platform called Army 360. We feed into that platform live data now, and then we are able to receive that live data. We are ingesting data now into our demand planning models not just for the Army. We're on the path for the Navy, and then the Air Force is next. We got a little more work to do with Marines. We're not as accurate as where we need to be, and so this is our path with each service to drive to that accuracy," Sanford said.
Demand forecasting, however, varies widely across the services — the DLA official cautioned against directly comparing forecasting performance.
"When we compare services from a demand planning perspective, it's not an apples-to-apples comparison. Each service has different products, policies and complexities that influence planning variables and outcomes. Broadly speaking, DLA is in partnership with each service to make improvements to readiness and forecasting," the DLA official said.
The agency is also using AI and machine learning to improve how it measures true administrative and production lead times. By analyzing years of historical data, the tools can identify how industry has actually performed — rather than how long deliveries were expected to take — and factor that into DLA stock levels.
"When we put out requests, we need information back to us quickly. And then you got to hold us accountable to get information back to you quickly too. And then on the production lead times, they're not as accurate as what they are. There's something that's advertised, but then there's the reality of what we're getting, and it's not meeting the target that was initially contracted for," Sanford said.
On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
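Humanizer itself is just a prompt that tells the model which patterns to avoid, but the same idea can be illustrated mechanically. The sketch below scans text for a few commonly cited chatbot tells; these patterns are illustrative examples only, not the actual list of 24 from the Wikipedia guide or Chen's plugin:

```python
import re

# A few illustrative "signs of AI writing" (hypothetical examples,
# not the real pattern list used by Humanizer or WikiProject AI Cleanup).
AI_TELLS = [
    (r"\bdelve\b", "overused verb 'delve'"),
    (r"\bit'?s important to note\b", "hedging filler"),
    (r"\bnot only\b.*\bbut also\b", "'not only ... but also' construction"),
]

def flag_ai_patterns(text: str) -> list[str]:
    """Return descriptions of any tell-tale patterns found in the text."""
    found = []
    for pattern, description in AI_TELLS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.append(description)
    return found

hits = flag_ai_patterns("Let's delve into this topic.")
```

A prompt-based plugin like Humanizer inverts this: instead of detecting the patterns after the fact, it instructs the model not to produce them in the first place.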
On Friday, OpenAI announced it will begin testing advertisements inside the ChatGPT app for some US users in a bid to expand its customer base and diversify revenue. The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a "last resort" and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.
The banner ads will appear in the coming weeks for logged-in users of the free version of ChatGPT as well as the new $8 per month ChatGPT Go plan, which OpenAI also announced Friday is now available worldwide. OpenAI first launched ChatGPT Go in India in August 2025 and has since rolled it out to over 170 countries.
Users paying for the more expensive Plus, Pro, Business, and Enterprise tiers will not see advertisements.
On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.
TSMC manufactures chips for companies including Apple, Nvidia, AMD, and Qualcomm, making it a linchpin of the global electronics supply chain. The company produces the vast majority of the world's most advanced semiconductors, and its factories in Taiwan have become a focal point of US-China tensions over technology and trade. When TSMC reports strong demand and ramps up spending, it signals that the companies designing AI chips expect years of continued growth.
"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."
On Thursday, the Wikimedia Foundation announced API access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral AI, expanding its effort to get major tech companies to pay for high-volume API access to Wikipedia content, which these companies use to train AI models like Microsoft Copilot and ChatGPT.
The deals mean that most major AI developers have now signed on to the foundation's Wikimedia Enterprise program, a commercial subsidiary that sells high-speed API access to Wikipedia's 65 million articles at higher speeds and volumes than the free public APIs provide. Wikipedia's content remains freely available under a Creative Commons license, but the Enterprise program charges for faster, higher-volume access to the data. The foundation did not disclose the financial terms of the deals.
The new partners join Google, which signed a deal with Wikimedia Enterprise in 2022, as well as smaller companies like Ecosia, Nomic, Pleias, ProRata, and Reef Media. The revenue helps offset infrastructure costs for the nonprofit, which otherwise relies on small public donations while watching its content become a staple of training data for AI models.
On Tuesday, Bandcamp announced on Reddit that it will no longer permit AI-generated music on its platform. "Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp," the company wrote in a post to the r/bandcamp subreddit. The new policy also prohibits "any use of AI tools to impersonate other artists or styles."
The policy draws a line that some in the music community have debated: Where does tool use end and full automation begin? AI models are not artists in themselves, since they lack personhood and creative intent. But people do use AI tools to make music, and the spectrum runs from using AI for minor assistance (cleaning up audio, suggesting chord progressions) to typing a prompt and letting a model generate an entire track. Bandcamp's policy targets the latter end of that spectrum while leaving room for human artists who incorporate AI tools into a larger creative process.
The announcement emphasized the platform's desire to protect its community of human artists. "The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain," the company wrote. Bandcamp asked users to flag suspected AI-generated content through its reporting tools, and the company said it reserves "the right to remove any music on suspicion of being AI generated."
Deep learning libraries are essentially sets of functions and routines written in a given programming language. A large set of deep learning libraries can make it much simpler for data engineers, data scientists, and developers to perform tasks of any complexity without having to rewrite vast amounts of code. Artificial intelligence (AI) has been rapidly […]
On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."
The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.
During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."
On Tuesday, Microsoft announced a new initiative called "Community-First AI Infrastructure" that commits the company to paying full electricity costs for its data centers and refusing to seek local property tax reductions.
As demand for generative AI services has increased over the past year, Big Tech companies have been racing to spin up massive new data centers to serve chatbots and image generators, facilities that can have profound economic effects on the surrounding areas. Communities across the country have grown concerned that, among other things, data centers are driving up residential electricity rates through heavy power consumption and straining water supplies with their server cooling needs.
The International Energy Agency (IEA) projects that global data center electricity demand will more than double by 2030, reaching around 945 TWh, with the United States responsible for nearly half of total electricity demand growth over that period. This growth is happening while much of the country's electricity transmission infrastructure is more than 40 years old and under strain.
On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found people were being put at risk by false and misleading information. The removals came after the newspaper found that Google's generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.
Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: The AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google only deactivated the summaries for the liver test queries, leaving other potentially harmful answers accessible.
The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also failed to adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.
On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for "health and wellness conversations" that securely connects a user's health and medical records to the chatbot.
But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It's a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.
Despite the known accuracy issues with AI chatbots, OpenAI's new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses like summarizing care instructions, preparing for doctor appointments, and understanding test results.
AI is everywhere, and CIOs can't lead AI strategy alone. With 62% of organizations experimenting with AI, its reach is too broad for oversight to live solely with IT. Nearly half (48%) of CIOs still shoulder responsibility for leading AI strategy, even though 88% of generative AI usage happens outside their teams. The result: AI implementation is skyrocketing, but few projects across the business deliver real impact.
The solution is not more IT oversight, but distributed leadership. Department leaders know their teams best. They observe firsthand which processes slow teams down, where AI can automate, and how workflows truly function. This deep expertise makes them uniquely suited to lead AI strategy across their respective departments and realize AI's full potential. CIOs need to pass the torch and empower them to lead.
The AI Bottleneck: Even the strongest CIOs can't carry the entire AI agenda alone
CIOs are the champions of innovation — expected to deliver real ROI from AI while keeping the enterprise secure, aligned, and ahead of the curve. But when every AI request, experiment, and implementation lands on their desk, even the best leaders face impossible bottlenecks. On top of this, most generative AI usage now resides outside IT, across finance, marketing, HR, and more.
The consequence is a growing "adoption-value gap." AI initiatives exist throughout the business, but only 5% deliver measurable ROI. When CIOs try to own every AI project, innovation stalls. To get real value, responsibility must shift to department leaders — those closest to the work who drive meaningful results.
Distributed Leadership: The New Model for AI Success
The most successful and impactful CIOs don't try to own everything — they orchestrate. Department leaders who understand AI tools and are comfortable using them can step up and take ownership within their teams, relieving the CIO burden. At Freshworks, we're putting this into practice: AI works alongside our people to remove busy work, accelerate productivity, and unlock higher-value work.
Our teams are seeing measurable efficiency gains across the organization:
Customer Support: AI agents now handle 34% of chat tickets, allowing human agents to focus on complex, high-value conversations. Productivity per agent has increased 25%, and new agent ramp time has been reduced from six months to three months.
Engineering & Quality: Developers use AI tools to write code, while quality engineers leverage AI for test cases and automation. Cycle times have dropped by up to 50%, and debugging efficiency has improved from hours to minutes in some cases.
Web & Digital Teams: Building new web pages now takes hours instead of weeks, freeing teams to focus on higher-impact initiatives.
IT Teams: AI automates ticketing, categorizes issues, and resolves requests faster, improving employee experience across the business.
HR & Recruiting: AI-powered Slack integrations help review resumes quickly and accurately, streamlining recruiting and onboarding.
Shifting ownership to department leaders unlocks each team's potential. CIOs move from "owners" to enablers, setting frameworks and guardrails. This approach isn't about cost-cutting — it frees talent to drive innovation, growth, and problem-solving, benefiting business outcomes and employee engagement.
Building AI-Native Leaders Across the Business
Non-technical leaders may find taking the reins daunting. CIOs can support them by introducing simple, intuitive AI tools, offering literacy programs, and creating "AI champion" groups to share best practices. Teams can explore use cases tied to KPIs — financial forecasting, talent analytics, or operational efficiency — while clear policies encourage responsible experimentation.
From Ownership to Orchestration: The CIO as the Conductor
Think of the CIO as a conductor, not a player. They set the vision, ensure harmony, and provide structure, while department leaders apply their expertise strategically. The result: an AI-fluent organization where experimentation happens faster, and value grows organically.
AI success comes from collaboration across the business. CIOs who empower leaders while providing clear governance unlock AI's true potential — making it work for people, not against them.
For CIOs seeking concrete examples of driving measurable ITSM value with AI, learn more about Freshservice here.
Stewart Cheifet, the television producer and host who documented the personal computer revolution for nearly two decades on PBS, died on December 28, 2025, at age 87 in Philadelphia. Cheifet created and hosted Computer Chronicles, which ran on the public television network from 1983 to 2002 and helped demystify a new tech medium for millions of American viewers.
Computer Chronicles covered everything from the earliest IBM PCs and Apple Macintosh models to the rise of the World Wide Web and the dot-com boom. Cheifet conducted interviews with computing industry figures, including Bill Gates, Steve Jobs, and Jeff Bezos, while demonstrating hardware and software for a general audience.
From 1983 to 1990, he co-hosted the show with Gary Kildall, the Digital Research founder who created the popular CP/M operating system that predated MS-DOS on early personal computer systems.
A demo video from Ai2 shows Molmo tracking a specific ball in this cat video, even when it goes out of frame. (Allen Institute for AI Video)
How many penguins are in this wildlife video? Can you track the orange ball in the cat video? Which teams are playing, and who scored? Give me step-by-step instructions from this cooking video.
Those are examples of queries that can be fielded by Molmo 2, a new family of open-source AI vision models from the Allen Institute for AI (Ai2) that can watch, track, analyze and answer questions about videos: describing what's happening, and pinpointing exactly where and when.
Ai2 cites benchmark tests showing Molmo 2 beating open-source models on short video analysis and tracking, and surpassing closed systems like Googleโs Gemini 3 on video tracking, while approaching their performance on other image and video tasks.
In a series of demos for reporters recently at the Ai2 offices in Seattle, researchers showed how Molmo 2 could analyze a variety of short video clips in different ways.
In a soccer clip, researchers asked what defensive mistake led to a goal. The model analyzed the sequence and pointed to a failure to clear the ball effectively.
In a baseball clip, the AI identified the teams (Angels and Mariners), the player who scored (#55), and explained how it knew the home team by reading uniforms and stadium branding.
Given a cooking video, the model returned a structured recipe with ingredients and step-by-step instructions, including timing pulled from on-screen text.
Asked to count how many flips a dancer performed, the model didn't just say "five" — it returned timestamps and pixel coordinates for each one.
In a tracking demo, the model followed four penguins as they moved around the frame, maintaining a consistent ID for each bird even when they overlapped.
When asked to "track the car that passes the #13 car in the end," the model watched an entire racing clip first, understood the query, then went back and identified the correct vehicle. It tracked cars that went in and out of frame.
Big year for Ai2
Molmo 2, announced Tuesday morning, caps a year of major milestones for the Seattle-based nonprofit, which has developed a loyal following in business and scientific circles by building fully open AI systems. Its approach contrasts sharply with the closed or partially open approaches of industry giants like OpenAI, Google, Microsoft, and Meta.
Founded in 2014 by the late Microsoft co-founder Paul Allen, Ai2 this year landed $152 million from the NSF and Nvidia, partnered on an AI cancer research initiative led by Seattleโs Fred Hutch, and released Olmo 3, a text model rivaling Meta, DeepSeek and others.
Ai2 has seen more than 21 million downloads of its models this year and nearly 3 billion queries across its systems, said Ali Farhadi, the Ai2 CEO, during the media briefing last week at the institute's new headquarters on the northern shore of Seattle's Lake Union.
Ai2 CEO Ali Farhadi. (GeekWire File Photo / Todd Bishop)
As a nonprofit, Ai2 isn't trying to compete commercially with the tech giants — it's aiming to advance the state of the art and make those advances freely available.
The institute has released open models for text (OLMo), images (the original Molmo), and now video — building toward what Farhadi described as a unified model that reasons across all modalities.
"We're basically building models that are competitive with the best things out there," Farhadi said — but in a completely open manner, for a succession of different media and situations.
In addition to Molmo 2, Ai2 on Monday released Bolmo, an experimental text model that processes language at the character level rather than in word fragments — a technical shift that improves handling of spelling, rare words, and multilingual text.
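A toy sketch of the distinction Bolmo targets: a character-level model tokenizes text one character at a time, while conventional subword tokenizers split it into learned fragments. The subword split shown below is hypothetical, not the output of Bolmo or any particular tokenizer:

```python
def char_tokenize(text: str) -> list[str]:
    """Character-level tokenization: every character is its own token."""
    return list(text)

word = "misspelled"
char_tokens = char_tokenize(word)        # one token per character
subword_tokens = ["miss", "pell", "ed"]  # hypothetical BPE-style fragments

# A character-level model sees every letter directly, so a one-letter
# typo changes exactly one token; a subword tokenizer may instead emit
# a completely different fragment sequence for the misspelled word.
```

This per-character view is why the approach tends to help with spelling tasks and words the tokenizer has rarely seen, at the cost of longer input sequences.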
Expanding into video analysis
With the newly released Molmo 2, the focus is video. To be clear: the model analyzes video, it doesn't generate video — think understanding footage rather than creating it.
The original Molmo, released last September, could analyze static images with precision rivaling closed-source competitors. It introduced a "pointing" capability that let it identify specific objects within a frame. Molmo 2 brings that same approach to video and multi-image understanding.
An Ai2 analysis benchmarks Molmo 2 against a variety of closed-source models.
The concept isn't new. Google's Gemini, OpenAI's GPT-4o, and Meta's Perception LM can all process video. But in line with Ai2's broader mission as a nonprofit institute, Molmo 2 is fully open, with its model weights, training code, and training data all publicly released.
That's different from "open weight" models that release the final product but not the original recipe, and a stark contrast to closed systems from Google, OpenAI and others.
The distinction is not just an academic principle. Ai2's approach means developers can trace a model's behavior back to its training data, customize it for specific uses, and avoid being locked into a vendor's ecosystem.
Ai2 also emphasizes efficiency. For example, Meta's Perception LM was trained on 72.5 million videos. Molmo 2 used about 9 million, relying on high-quality human annotations.
The result, Ai2 claims, is a smaller, more efficient model that outperforms the institute's own much larger model from last year, and comes close to matching commercial systems from Google and OpenAI, while being simple enough to run on a single machine.
When the original Molmo introduced its pointing capability last year — allowing the model to identify specific objects in an image — competing models quickly adopted the feature.
"We know they adopted our data because they perform exactly as well as we do," said Ranjay Krishna, who leads Ai2's computer vision team. Krishna is also a University of Washington assistant professor, and several of his graduate students work on the project.
Farhadi frames the competitive dynamic differently than most in the industry.
"If you do real open source, I would actually change the word competition to collaboration," he said. "Because there is no need to compete. Everything is out there. You don't need to reverse engineer. You don't need to rebuild it. Just grab it, build on top of it, do the next thing. And we love it when people do that."
A work in progress
At the same time, Molmo 2 has some clear constraints. The tracking capability — following objects across frames — currently tops out at about 10 items. Ask it to track a crowd or a busy highway, and the model can't keep up.
"This is a very, very new capability, and it's one that's so experimental that we're starting out very small," Krishna said. "There's no technological limit to this, it just requires more data, more examples of really crowded scenes."
Long-form video also remains a challenge. The model performs well on short clips, but analyzing longer footage requires compute that Ai2 isn't yet willing to spend. In the playground launching alongside Molmo 2, uploaded videos are limited to 15 seconds.
And unlike some commercial systems, Molmo 2 doesn't process live video streams. It analyzes recordings after the fact. Krishna said the team is exploring streaming capabilities for applications like robotics, where a model would need to respond to observations in real time, but that work is still early.
"There are methods that people have come up with in terms of processing videos over time, streaming videos," Krishna said. "Those are directions we're looking into next."
Molmo 2 is available starting today on Hugging Face and Ai2โs playground.
Salt Security has announced Salt MCP Finder technology, a dedicated discovery engine for Model Context Protocol (MCP) servers, the fast-proliferating infrastructure powering agentic AI. MCP Finder provides an organisation with a complete, authoritative view of its MCP footprint at a moment when MCP servers are being deployed rapidly, often without IT or security awareness.
As enterprises accelerate the adoption of agentic AI, MCP servers have emerged as the universal API broker that lets AI agents take action by retrieving data, triggering tools, executing workflows, and interfacing with internal systems. But this new power comes with a new problem: MCP servers are being deployed everywhere, by anyone, with almost no guardrails. MCPs are widely used for prototyping, integrating agents with SaaS tools, supporting vendor projects, and enabling shadow agentic workflows in production.
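For a sense of the mechanics involved, MCP clients and servers exchange JSON-RPC 2.0 messages, and a client (or a discovery scanner) can enumerate what a server exposes with a `tools/list` call. The sketch below only constructs that message; the transport (stdio or HTTP) and any real server endpoint are omitted:

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC 2.0 message an MCP client sends to ask a
    server which tools it exposes. Method names follow the public MCP
    specification; delivery to an actual server is not shown here."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    }
    return json.dumps(request)

payload = make_tools_list_request(1)
```

The server's response lists each tool's name, description, and input schema, which is exactly the kind of inventory a discovery product tries to assemble across an enterprise.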
This wave of adoption sits atop fractured internal API governance in most enterprises, compounding risk. Once deployed, MCP servers become easily accessible, enabling agents to connect and execute workflows with minimal oversight. This becomes a major source of operational exposure.
The result is a rapidly growing API fabric of AI-connected infrastructure that is largely invisible to central security teams. Organisations currently lack visibility regarding how many MCP servers are deployed across the enterprise, who owns or controls them, which APIs and data they expose, what actions agents can perform through them, and whether corporate security standards and basic controls (like authentication, authorisation, and logging) are properly implemented.
Recent industry observations show why this visibility crisis matters. One study found that only ten months after the launch of MCP, more than 16,000 MCP servers had been deployed across Fortune 500 companies. Another found that in a scan of 1,000 MCP servers, 33% had at least one critical vulnerability, and the average server had more than five. MCP is quickly becoming one of the largest sources of "Shadow AI" as organisations scale their agentic workloads.
According to Gartner®: "Most tech providers remain unprepared for the surge in agent-driven API usage. Gartner predicts that by 2028, 80% of organisations will see AI agents consume the majority of their APIs, rather than human developers."
Gartner further stated: "As agentic AI transforms enterprise systems, tech CEOs who understand and implement MCP would drive growth, ensure responsible deployment and secure a competitive edge in the evolving AI landscape. Ignoring MCP risks falling behind as composability and interoperability become critical differentiators. Tech CEOs must prioritize MCP to lead in the era of agentic AI. MCP is foundational for secure, efficient collaboration among autonomous agents, directly addressing trust, security, and cost challenges."*
Saltโs MCP Finder technology solves the foundational challenge: you cannot monitor, secure, or govern AI agents until you know what attack surfaces exist. MCP servers are a key component of that surface.
Nick Rago, VP of Product Strategy at Salt Security, said: "You can't secure what you can't see. Every MCP server is a potential action point for an autonomous agent. Our MCP Finder technology gives CISOs the single source of truth they need to finally answer the most important question in agentic AI: What can my AI agents do inside my enterprise?"
Saltโs MCP Finder technology uniquely consolidates MCP discovery across three systems to build a unified, authoritative registry:
External Discovery — Salt Surface
Identifies MCP servers exposed to the public internet, including misconfigured, abandoned, and unknown deployments.
Code Discovery — GitHub Connect
Using Salt's recently announced GitHub Connect capability, MCP Finder inspects private repositories to uncover MCP-related APIs, definitions, shadow integrations, and blueprint files before they're deployed.
Runtime Discovery — Agentic AI Behavior Mapping
Analyses real traffic from agents to observe which MCP servers are in use, what tools they invoke, and how data flows through them.
Together, these sources give organisations the single source of truth required to visualise risk, enforce posture governance, and apply AI safety policies that extend beyond the model into the actual action layer.
Saltโs MCP Finder technology is available immediately as a core capability within the Salt Illuminate platform.
*Source: Gartner Research, Protect Your Customers: Next-Level Agentic AI With Model Context Protocol, By Adrian Lee, Marissa Schmidt, November 2025.