
Microsoft Trials System Recovery Features and UI Refinements in Windows 11

24 November 2025 at 09:48

The update reflects Microsoft’s ongoing strategy to modernize Windows through incremental, toggle-controlled feature rollouts.

The post Microsoft Trials System Recovery Features and UI Refinements in Windows 11 appeared first on TechRepublic.


Read AI steps into the real world with new system for capturing everyday work chatter

19 November 2025 at 08:00
Read AI’s apps, including its new Android app, now include the ability to record impromptu in-person meetings. (Read AI Images)

Read AI, which made its mark analyzing online meetings and messages, is expanding its focus beyond the video call and the email inbox to the physical world, in a sign of the growing industry trend of applying artificial intelligence to offline and spontaneous work data.

The Seattle-based startup on Wednesday introduced a new system called Operator that captures and analyzes interactions throughout the workday, including impromptu hallway conversations and in-person meetings in addition to virtual calls and emails, working across a wide range of popular apps and platforms. 

With the launch, Read AI is releasing new desktop clients for Windows and macOS, and a new Android app to join its existing iOS app and browser-based features.

For offline conversations — like a coffee chat or a conference room huddle — users can open the Read AI app and manually hit record. The system then transcribes that audio and incorporates it into the company’s AI system for broader insights into each user’s meetings and workday.

It comes as more companies bring workers back to the office for at least part of the week. According to new Read AI research, 53% of meetings now happen in-person or without a calendar invite — up from 47% in 2023 — while a large number of workday interactions occur outside of meetings entirely.

Read AI is seeing an expansion of in-person and impromptu work meetings across its user base. (Read AI Graphic)

In a break from others in the industry, Operator works via smartphone in these situations and does not require a pendant or clip-on recording device. 

“I don’t think we’d ever build a device, because I think the phones themselves are good enough,” said Read AI CEO David Shim in a recent interview, as featured on this week’s GeekWire Podcast.

This differs from hardware-first competitors like Limitless and Plaud, which require users to purchase and wear dedicated devices to capture “real-world” audio throughout the day.

While these companies argue that a wearable provides a frictionless, “always-on” experience without draining your phone’s battery, Read AI is betting that the friction of charging and wearing a separate gadget is a bigger hurdle than simply using the device you already have.

To address the privacy concerns of recording in-person chats, Read AI relies on user compliance rather than an automated audible warning. When a user hits record on the desktop or mobile app, a pop-up prompts them to declare that the conversation is being captured, via voice or text. On mobile, a persistent reminder remains visible on the screen for the duration of the recording.

Founded in 2021 by David Shim, Robert Williams, and Elliott Waldron, Read AI has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools. It now reports 5 million monthly active users, with 24 million connected calendars to date.

Operator is included in all of Read AI’s existing plans at no additional cost.

Does your AI have an ID? Microsoft looks to document the digital workforce with new ‘Agent 365’

18 November 2025 at 11:26
A registration line at a 2024 Microsoft developer conference. (GeekWire File Photo / Todd Bishop)

Satya Nadella recently foreshadowed a major shift in the company’s business — saying the tech giant will increasingly build its products and infrastructure not just for human users, but for autonomous AI agents that operate as a new class of digital workers.

“The way to think about the per-user business is not just per user, it’s per agent,” the Microsoft CEO said during his latest appearance on Dwarkesh Patel’s podcast.

At its Ignite conference this week, the company is starting to show what that means. Microsoft is unveiling a series of new products that give IT departments a way to manage and secure their new AI workforce, in much the same way as HR oversees human employees.

The big announcement: Microsoft Agent 365, a new “control plane” that functions as a central management dashboard inside the Microsoft 365 Admin Center that IT teams already use. 

Its core function is to govern a company’s entire AI workforce — including agents from Microsoft and other companies — by giving every agent a unique identification. This lets companies use their existing security systems to track what agents are doing, control what data they can access, and prevent them from being hacked or leaking sensitive information.

Microsoft’s approach addresses what has become a major headache for businesses in 2025: “Shadow AI,” with employees turning to unmanaged AI tools at growing rates. 

It also represents a big opportunity for the tech industry, as tech giants look to grow revenue to match their massive infrastructure investments. The AI agent market is expanding rapidly, with Microsoft citing analyst estimates of 1.3 billion agents by 2028. Market research firms project the market will grow from around $7.8 billion in 2025 to over $50 billion by 2030.

Google, Amazon, and Salesforce have all rolled out their own agentic platforms for corporate use — Google with its Gemini Enterprise platform, Amazon with new Bedrock AgentCore tools for managing AI agents, and Salesforce with Agentforce 360 for customer-facing agents.

Microsoft is making a series of announcements related to agents at Ignite, its conference for partners, developers, and customers, taking place this week in San Francisco. Other highlights:

  • A “fully autonomous” Sales Development Agent will research, qualify, and engage sales leads on its own, acting basically like a new member of the sales team.
  • Security Copilot agents in Microsoft’s security tools will help IT teams automate tasks, like having an agent in Intune create a new security policy from a text prompt.
  • Word, Excel, and PowerPoint agents will allow users to ask Copilot, via chat, to create a complete, high-quality document or presentation from scratch. 
  • Windows is getting a new “Agent Workspace,” a secure, separate environment on the PC where an agent can run complex tasks using its own ID, letting IT monitor its work.

As a backbone for the announcements, Agent 365 leverages Microsoft’s entrenched position in corporate identity and security systems. Instead of asking companies to adopt an entirely new platform, it’s building AI agents into tools that many businesses already use. 

For example, in the Microsoft system, each agent gets its own identity inside Microsoft Entra, formerly Active Directory, the same system that handles employee logins and permissions.

Microsoft is rolling out Agent 365 starting this week in preview through Frontier, its early-access program for its newest AI innovations. Pricing has not yet been announced.

Real revenue, actual value, and a little froth: Read AI CEO David Shim on the emerging AI economy

15 November 2025 at 10:30
Read AI CEO David Shim discusses the state of the AI economy in a conversation with GeekWire co-founder John Cook during a recent Accenture dinner event for the “Agents of Transformation” series. (GeekWire Photo / Holly Grambihler)

[Editor’s Note: Agents of Transformation is an independent GeekWire series and 2026 event, underwritten by Accenture, exploring the people, companies, and ideas behind the rise of AI agents.]

What separates the dot-com bubble from today’s AI boom? For serial entrepreneur David Shim, it’s two things the early internet never had at scale: real business models and customers willing to pay.

People used the early internet because it was free and subsidized by incentives like gift certificates and free shipping. Today, he said, companies and consumers are paying real money and finding actual value in AI tools that are scaling to tens of millions in revenue within months.

But the Read AI co-founder and CEO, who has built and led companies through multiple tech cycles over the past 25 years, doesn’t dismiss the notion of an AI bubble entirely. Shim pointed to the speculative “edges” of the industry, where some companies are securing massive valuations despite having no product and no revenue — a phenomenon he described as “100% bubbly.”

He also cited AMD’s deal with OpenAI — in which the chipmaker offered stock incentives tied to a large chip purchase — as another example of froth at the margins. The arrangement had “a little bit” of a 2000-era feel of trading, bartering and unusual financial engineering that briefly boosted AMD’s stock.

But even that, in his view, is more of an outlier than a systemic warning sign.

“I think it’s a bubble, but I don’t think it’s going to burst anytime soon,” Shim said. “And so I think it’s going to be more of a slow release at the end of the day.”

Shim, who was named CEO of the Year at this year’s GeekWire Awards, previously led Foursquare and sold the startup Placed to Snap. He now leads Read AI, which has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools.

He made the comments during a wide-ranging interview with GeekWire co-founder John Cook. They spoke about AI, productivity, and the future of work at a recent dinner event hosted in partnership with Accenture, in conjunction with GeekWire’s new “Agents of Transformation” editorial series.

We’re featuring the discussion on this episode of the GeekWire Podcast. Listen above, and subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen. Continue reading for more takeaways.

Successful AI agents solve specific problems: The most effective AI implementations will be invisible infrastructure focused on particular tasks, not broad all-purpose assistants. The term “agents” itself will fade into the background as the technology matures and becomes more integrated.

Human psychology is shaping AI deployment: Internally, Read AI is testing an AI assistant named “Ada” that schedules meetings by learning users’ communication patterns and priorities. It works so quickly, Shim said, that Read AI is building delays into its responses, after finding that quick replies “freak people out,” making them think their messages didn’t get a careful read.
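The “humanizing delay” idea can be sketched in a few lines. This is purely illustrative, not Read AI’s implementation: the reading-speed constant and the two-second floor are invented for the example.

```python
import time

def humanized_delay_seconds(message: str, wpm: int = 250) -> float:
    """Approximate how long a careful human read of the message would take,
    floored at two seconds so even tiny messages get a visible pause.
    The 250-words-per-minute figure is an assumed constant."""
    words = len(message.split())
    return max(2.0, words / wpm * 60)

def reply_with_delay(message: str, make_reply) -> str:
    """Pause proportionally to the message length before replying."""
    time.sleep(humanized_delay_seconds(message))
    return make_reply(message)
```

The point of the sketch is the design choice: the delay scales with the incoming message, so the assistant appears to have spent time proportional to what it “read.”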

Global adoption is happening without traditional localization: Read AI captured 1% of Colombia’s population without any local staff, demonstrating AI’s ability to scale internationally in ways previous technologies couldn’t.

“Multiplayer AI” will unlock more value: Shim says an AI’s value is limited when it only knows one person’s data. He believes one key is connecting AI across entire teams, to answer questions by pulling information from a colleague’s work, including meetings you didn’t attend and files you’ve never seen.

“Digital Twins” are the next, controversial frontier: Shim predicts a future in which a departed employee can be “resurrected” from their work data, allowing companies to query that person’s institutional knowledge. The idea sounds controversial and “a little bit scary,” he said, but it could be invaluable for answering questions that only the former employee would have known.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Inside Microsoft’s new ‘Experience Center One’: What we learned at the edge of the AI frontier

13 November 2025 at 10:38
An interactive portal at Microsoft’s new Experience Center One grounds visitors in scenes of nature. (GeekWire Photo / Todd Bishop)

[Editor’s Note: Agents of Transformation is an independent GeekWire series and 2026 event, underwritten by Accenture, exploring the people, companies, and ideas behind the rise of AI agents.]

REDMOND, Wash. — If AI were a religion, this would probably qualify as a cathedral.

On the edge of Microsoft’s headquarters, overlooking Lake Bill amid a stand of evergreens, a new four-story building has emerged as a destination for business and tech decision-makers.

Equal parts briefing center, conference hall, and technology showroom, Microsoft’s “Experience Center One” offers a curated glimpse of the future — guided tours through glowing demo rooms where AI manages factory lines, models financial markets, and helps design new drugs.

It’s part of a larger scene playing out across tech. As Microsoft, Google, Amazon and others pour billions into data centers, GPUs, and frontier models, they’re making the case that AI represents not a bubble but a business transformation that’s here to stay. 

Microsoft’s new Experience Center One on the company’s Redmond campus was designed by WRNS Studio. (Photo by Jason O’Rear)

As the new center shows, Microsoft’s pitch isn’t just about off-the-shelf AI models or run-of-the-mill chatbots — it’s about custom agentic systems that act on behalf of workers to complete tasks across a variety of tools and data sources.

That idea runs through nearly everything inside the facility, a glass-encased building featuring an elevated garden in a soaring open-air atrium, just across from Microsoft’s new executive offices on its revamped East Campus.

Experience Center One highlights what Microsoft calls “frontier firms” — ambitious companies using AI to push their operations to the edge of what’s possible in their industries. 

Agentic AI is “fast becoming the next defining chapter of a frontier organization,” said Alysa Taylor, Microsoft chief marketing officer for Commercial Cloud and AI, in an interview.

The underlying message is clear: get on board or risk falling behind, both competitively and financially. A new IDC study, commissioned by Microsoft, finds both opportunity in spending big and risk in not being bold enough. Companies integrating AI across an average of seven business functions are realizing a return on investment of 2.84 times, it says. In contrast, “laggards” are seeing returns of 0.84 times — basically losing money on their initial spend.

The divide extends to revenue, too: 88% of frontier firms report top-line growth from their AI initiatives, compared to just 23% of laggards, according to the IDC study.

And hey, somebody has to foot the bill for those multi-billion-dollar AI superfactories.

For this second installment in our Agents of Transformation series, GeekWire visited the new Microsoft facility to see first-hand how the company is presenting its vision of the future. Here are some of the takeaways from the sampling of demos we saw.

These are not off-the-shelf solutions. Each demo reflects a custom deployment built with a major customer, showing how AI tools can be tailored to specific business problems. 

Collin Vandament of Microsoft demonstrates a BlackRock investment-analysis scenario inside Experience Center One, showing how a custom AI copilot can translate natural-language questions into the firm’s proprietary BQL code during a tour of the new facility in Redmond. (GeekWire Photo / Todd Bishop)

For example, one shows how Microsoft has worked with BlackRock to integrate a custom AI copilot inside the investment firm’s Aladdin platform to help analysts process large volumes of client and market data more efficiently. It helps reduce the manual work of gathering data and flags potential risks sooner than analysts might have spotted them on their own.

As another example of the customization, the system is trained to translate natural language requests into “BQL,” BlackRock’s proprietary programming language.
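The translation pattern the demo describes, a model prompted with a domain language’s grammar so it emits queries rather than prose, can be sketched roughly as follows. Everything here is hypothetical: the context string, function names, and sample query are illustrative stand-ins, not Microsoft’s or BlackRock’s actual system.

```python
# Assumed, simplified domain context; a real deployment would ship a much
# richer grammar and schema description for the proprietary language.
BQL_CONTEXT = """You translate analyst questions into BQL queries.
Only emit a valid BQL query, nothing else.
Available fields include px_last, px_volume, volatility_90d."""

def build_translation_prompt(question: str) -> str:
    """Combine the fixed domain context with the analyst's question."""
    return f"{BQL_CONTEXT}\nQuestion: {question}\nBQL:"

def translate_to_bql(question: str, call_model) -> str:
    """Hand the prompt to a model endpoint (passed in as `call_model`)
    and return the query text it produces."""
    return call_model(build_translation_prompt(question)).strip()

# Demonstrating the flow with a stubbed model:
fake_model = lambda prompt: "get(px_last) for('AAPL US Equity')"
print(translate_to_bql("What is Apple's last price?", fake_model))
```

The key design point is that the model never sees the question bare; the fixed context constrains its output to the proprietary query language.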

This deep level of integration tracks with the findings in the IDC report. It found that 58% of “frontier firms” are already relying on custom-built or fine-tuned solutions rather than generic models. This is expected to accelerate, with 70% planning to move toward customized tools in the next two years to better handle their proprietary data and compliance needs.

“That’s a trend that we’ve seen even in the low-code movement — taking an out-of-the-box solution, extending it, and customizing it,” said Taylor, the Microsoft commercial CMO.

OpenAI integration remains critical for many Microsoft customers. Another demo focused on Microsoft’s work with Ralph Lauren, showing how the “Ask Ralph” assistant interprets a shopper’s intent and recommends full outfits from available inventory.

Like many of the scenarios inside Experience Center One, this experience runs on Microsoft’s Azure OpenAI Service. It’s a reminder that Microsoft’s partnership with OpenAI — renewed and expanded in recent months — is still a key driver of commercial demand for the tech giant, even as both companies increasingly work with other industry partners.

Teams of agents are starting to redefine industrial work. The clearest example of this was a digital twin simulation from Mercedes-Benz — essentially a virtual version of a factory that lets engineers anticipate and diagnose issues without stopping real production.

The demo begins with a production alert triggered by a drop in efficiency. In a real plant, tracking down the cause (something as small as a slight angle change in a screw) might take a team of specialists days of sorting through machine logs and sensor data.

In Microsoft’s version, a human manager simply asks the system to diagnose what’s causing the problem, through a natural language interface. That question triggers a set of AI agents, each with a specific role: one pulls the right data, another retrieves machine logs, and a third interprets what it all means in plain language.

Within about 15 minutes, the system produces a clear explanation of the likely cause and possible fixes, shortcutting a task that could otherwise stretch across most of a week.
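The three-role pipeline described above, one agent selecting relevant data, one retrieving logs, one explaining the result in plain language, can be sketched as a simple orchestration. All names and findings here are invented for illustration; Microsoft’s actual agent framework is not public in this level of detail.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str
    detail: str

def data_agent(alert: dict) -> list[Finding]:
    """Role 1: pick the sensor streams related to the efficiency drop."""
    return [Finding("sensors", f"torque drift on station {alert['station']}")]

def log_agent(alert: dict) -> list[Finding]:
    """Role 2: pull machine logs from the affected time window."""
    return [Finding("logs", "screwdriver angle recalibrated off spec")]

def explainer_agent(findings: list[Finding]) -> str:
    """Role 3: turn the raw findings into a plain-language diagnosis."""
    details = "; ".join(f.detail for f in findings)
    return f"Likely cause: {details}"

def diagnose(alert: dict) -> str:
    """Orchestrate the agents in sequence, as the manager's single
    natural-language question would trigger in the demo."""
    findings = data_agent(alert) + log_agent(alert)
    return explainer_agent(findings)

print(diagnose({"station": 12}))
```

The sketch shows the division of labor: each agent does one narrow job, and only the final one is responsible for producing human-readable output.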

AI is compressing weeks or months of scientific research into days or hours. A demo focusing on Insilico Medicine’s work with Microsoft showed how AI is starting to significantly collapse the timeline for drug discovery. 

The process begins with a “digital researcher” that scans huge amounts of public biomedical data to surface promising disease targets. It’s the kind of work that would otherwise take teams of scientists months of reading and analysis.

Collin Vandament demonstrates an Insilico Medicine drug-discovery scenario inside Experience Center One, where interactive displays visualize how AI models surface potential disease targets and rank candidate molecules. (GeekWire Photo / Todd Bishop)

A second system runs simulated chemistry experiments in the cloud, generating and ranking potential molecules that might bind to those targets. These simulations can be completed in a matter of days, or less, replacing weeks or even months of traditional laboratory work.

The demo follows a real example: Insilico used this workflow to identify a potential target for a lung disease and design molecules that could affect it. The company then synthesized dozens of these AI-generated compounds in the lab. One of them is now in Phase 2a human trials.

That’s a small sampling of the demos inside Microsoft’s Experience Center One. During our tour, we walked past displays for other major brands, including more iconic U.S. companies, but not everyone was willing to have the media spotlight cast upon their projects. As a condition of access, we agreed to stick to the examples cleared for public release.

Of course, the demos are carefully curated, and it remains to be seen how broadly companies will deploy these kinds of systems in their real-world operations.

The open-air atrium at Microsoft’s Experience Center One brings natural light into the building. (GeekWire Photo / Todd Bishop)

In many ways, the facility is a successor to Microsoft’s longtime Executive Briefing Center and conference facility, which remains in use a short walk away on the Redmond campus.

Experience Center One has been in operation for a couple of months, hosting delegations of business clients and political dignitaries, including the prime minister of Luxembourg this week.

Trevor Noah with Hadi Partovi of Code.org during an October event at Experience Center One. (GeekWire Photo / Taylor Soper)

The facility is closed to the public and invite-only, though employees can request access to visit.

Visitors arrive via a circular drive, with a plaque at the entrance dedicating the building to John Thompson, the former Microsoft board chair who led the search process that resulted in Satya Nadella’s appointment as CEO. There are private briefing suites on the upper floors, and a full cafe on the second. The building also includes a conference center with three auditoriums. 

But perhaps the most distinct feature is the interactive portal. As they leave the demos, visitors walk through an immersive digital corridor with scenes of nature on the virtual walls.

Walking through the tunnel, motion sensors track their movement, causing digital leaves and particles on the wall-sized screens to swirl and flow in their wake.

The audio consists of nature sounds (birds, wind, and rustling trees) that were recorded locally in the Redmond and Sammamish area. And in a fittingly Pacific Northwest touch, the visual display is connected to a weather API. If it has been raining outside (as it often has been recently) the digital environment inside the tunnel turns rainy, too.
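The weather-linked display amounts to polling a forecast service and mapping the current condition onto a scene preset. A minimal sketch, with the understanding that the endpoint, JSON field, and scene names are placeholders, not the actual service Microsoft uses:

```python
import json
from urllib.request import urlopen

def current_condition(api_url: str) -> str:
    """Fetch the latest observation from a weather endpoint that is
    assumed to return JSON shaped like {"condition": "rain"}."""
    with urlopen(api_url) as resp:
        return json.load(resp)["condition"]

def scene_for(condition: str) -> str:
    """Map a weather condition onto one of the tunnel's visual presets,
    falling back to a default scene for anything unmapped."""
    mapping = {"rain": "rainy_forest", "snow": "snowy_forest"}
    return mapping.get(condition, "sunny_forest")
```

In use, the display loop would periodically call `current_condition` and switch presets whenever `scene_for` returns a new value.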

It’s meant to be a final moment of grounding — a programmed moment of Zen to help executives decompress and center themselves as they contemplate the frontier ahead.
