
Amazon fixes Alexa ordering bug, Microsoft rethinks AI data centers, and cameras capture every fan

17 January 2026 at 10:37

Someone listening to last week’s GeekWire Podcast caught something we missed: a misleading comment by Alexa during our voice ordering demo — illustrating the challenges of ordering by voice vs. screen. We followed up with Amazon, which says it has fixed the underlying bug.

On this week’s show, we play the audio of the order again. Can you catch it? 

Plus, Microsoft announces a “community first” approach to AI data centers after backlash over power and water usage — and President Trump scooped us on the story. We discuss the larger issues and play a highlight from our interview with Microsoft President Brad Smith.

Also: the technology capturing images of every fan at Lumen Field, UK police blame Copilot for a hallucinated soccer match, and Redfin CEO Glenn Kelman departs six months after the company’s acquisition by Rocket.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Audio editing by Curt Milton.

Lawmaker eyes bill to codify NIST AI center

A top House lawmaker is developing legislation to codify the National Institute of Standards and Technology’s Center for AI Standards and Innovation into law.

The move to codify CAISI comes as lawmakers and the Trump administration debate the federal government’s role in overseeing AI technology.

Rep. Jay Obernolte (R-Calif.), chairman of the House Science, Space and Technology Committee’s research and technology subcommittee, said he has a “forthcoming” bill dubbed the “Great American AI Act.”

During a Wednesday hearing, Obernolte said the bill will formalize CAISI’s role to “advance AI evaluation and standard setting.”

“The work it does in doing AI model evaluation is essential in creating a regulatory toolbox for our sectoral regulators, so everyone doesn’t have to reinvent the wheel,” Obernolte said.

The Biden administration had initially established an AI Safety Institute at NIST. But last summer, the Trump administration rebranded the center to focus on standards and innovation.

Last September, the center released an evaluation of the Chinese “DeepSeek” AI model that found it lagged behind U.S. models on cost, security and performance. More recently, CAISI released a request for information on securing AI agent systems.

Despite the Trump administration’s rebranding, however, Obernolte noted the NIST center’s functions have largely stayed consistent. He argued codifying the center would provide stability.

“I think everyone would agree, it’s unhealthy for us to have every successive administration spin up a brand new agency that, essentially, is doing something with a long-term mission that needs continuity,” he said.

Obernolte asked Michael Kratsios, the director of the White House Office of Science and Technology Policy, what he thought about codifying the center into law.

Kratsios said CAISI is a “very important part of the larger AI agenda.” He also said it was important for the administration to reframe the center’s work around innovation and standards, rather than safety.

“It’s absolutely important that the legacy work around standards relating to AI are undertaken by CAISI, and that’s what they’re challenged to do,” Kratsios said. “And that’s the focus that they should have, because the great standards that are put out by CAISI and by NIST are the ones that, ultimately, will empower the proliferation of this technology across many industries.”

Later on in the hearing, Kratsios said the NIST center would play a key role in setting standards for “advanced metrology of model evaluation.”

“That is something that can be used across all industries when they want to deploy these models,” he said. “You want to have trust in them so that when everyday Americans are using, whether it be medical models or anything else, they are comfortable with the fact that it has been tested and evaluated.”

Obernolte and Rep. Sarah McBride (D-Del.), meanwhile, have also introduced the “READ AI Act.” The bill would direct NIST to develop guidelines for how AI models should be evaluated, including standard documentation.

Asked about the bill, Kratsios said it was worthy of consideration, but cautioned that any such efforts should not focus solely on frontier AI model evaluation.

“The reality is that the most implementation that’s going to happen across industry is going to happen through fine-tuned models for specific use cases, and it’s going to be trained on specific data that the large frontier models never had access to,” he added.

“In my opinion, the greatest work that NIST could do is to create the science behind how you measure models, such that any time that you have a specific model – for finance, for health, for agriculture – whoever’s attempting to implement it has a framework and a standard around how they can evaluate that model,” Kratsios continued. “At the end of the day, the massive proliferation is going to be through these smaller, fine-tuned models for specific use cases.”

Discussion around the role of the NIST center comes amid a larger debate over the role of the federal government in setting AI standards. In a December executive order, President Donald Trump called for legislative recommendations to create a national framework that would preempt state AI laws.

But during the hearing, Kratsios offered few specifics on what he and Special Adviser for AI and Crypto David Sacks have been considering.

“That’s something that I very much look forward to working with everyone on this committee on,” Kratsios said. “What was clear in the executive order, specifically, was that, any proposed legislation should not preempt otherwise lawful state actions relating to child safety protections, AI compute and data infrastructure, and also state government procurement and use of AI.”

“But, we look forward over the next weeks and months to be working with Congress on a viable solution,” he added.

The post Lawmaker eyes bill to codify NIST AI center first appeared on Federal News Network.


Filing: Human rights proposals win more than 25% of votes at Microsoft shareholder meeting

9 December 2025 at 18:43
Microsoft’s logo on the company’s Redmond campus. (GeekWire File Photo)

Two human rights proposals at Microsoft’s annual shareholder meeting drew support from more than a quarter of voting shares — far more than any other outside proposals this year.

The results, disclosed Monday in a regulatory filing, come amid broader scrutiny of the company’s business dealings in geopolitical hotspots. The proposals followed a summer of criticism and protests over the use of Microsoft technology by the Israeli military. 

The filing shows the vote totals for six outside shareholder proposals that were considered at the Dec. 5 meeting. Microsoft had announced shortly after the meeting that shareholders rejected all outside proposals, but the numbers had not previously been disclosed.

According to the filing, two proposals received outsized support: 

  • Proposal 8, filed by an individual shareholder, called for a report on Microsoft’s data center expansion in Saudi Arabia and nations with similar human rights records. It asked the company to evaluate the risk that its technology could be used for state surveillance or repression, and received more than 27% support.
  • Proposal 9, seeking an assessment of Microsoft’s human rights due diligence efforts, won more than 26% of votes. The measure called for Microsoft to assess the effectiveness of its processes in preventing customer misuse of its AI and cloud products in ways that violate human rights or international humanitarian law.

Proposal 9 had received support from proxy advisor Institutional Shareholder Services — a rare endorsement for a first-time filing. Proxy advisor Glass Lewis recommended against it.

The measure attracted 58 co-filers and sparked opposing campaigns. JLens, an investment advisor affiliated with the Anti-Defamation League, said Proposal 9 was aligned with the Boycott, Divestment and Sanctions movement, which pressures companies to cut ties with Israel. Ekō, an advocacy group that backed the proposal, said the vote demonstrated growing concerns about Microsoft’s contracts with the Israeli military.

In September, Microsoft cut off an Israeli military intelligence unit’s access to some Azure services after finding evidence supporting a Guardian report in August that the technology was being used for surveillance of Palestinian civilians.

Microsoft’s board recommended shareholders vote against all six outside proposals at the Dec. 5 annual meeting. Here’s how the other four proposals fared: 

  • Proposals 5 and 6, focused on censorship risks from European security partnerships and AI content moderation, drew less than 1% support.
  • Proposal 7, which asked for more transparency and oversight on how Microsoft uses customer data to train and operate its AI systems, topped 13% support.
  • Proposal 10, calling for a report on climate and transition risks tied to AI and machine‑learning tools used by oil and gas companies, received 8.75%.

See Microsoft’s proxy statement and our earlier coverage for more information.
