
eBay bans illicit automated shopping amid rapid rise of AI agents

22 January 2026 at 10:56

On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.

eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.

At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.


DLA turns to AI, ML to improve military supply forecasting

The Defense Logistics Agency — an organization responsible for supplying everything from spare parts to food and fuel — is turning to artificial intelligence and machine learning to fix a long-standing problem of predicting what the military needs on its shelves.

While demand planning accuracy currently hovers around 60%, DLA officials aim to push that baseline figure to 85% with the help of AI and ML tools. Improved forecasting will ensure the services have access to the right items exactly when they need them. 

“We are about 60% accurate on what the services ask us to buy and what we actually have on the shelf.  Part of that, then, is we are either overbuying in some capacity or we are under buying. That doesn’t help the readiness of our systems,” Maj. Gen. David Sanford, DLA director of logistics operations, said during the AFCEA NOVA Army IT Day event on Jan. 15.

Rather than relying mostly on historical purchase data, the models ingest a wide range of data that DLA has not previously used in forecasting. That includes supply consumption and maintenance data, operational data gleaned from wargames and exercises, as well as data that impacts storage locations, such as weather.

The models are tied to each weapon system and DLA evaluates and adjusts the models on a continuing basis as they learn. 

“We are using AI and ML to ingest data that we have just never looked at before. That’s now feeding our planning models. We are building individual models, we are letting them learn, and then those will be our forecasting models as we go forward,” Sanford said.

Some early results already show measurable improvements. Forecasting accuracy for the Army’s Bradley Infantry Fighting Vehicle, for example, has improved by about 12% over the last four months, a senior DLA official told Federal News Network.

The agency has made the most progress working with the Army and the Air Force and is addressing “some final data-interoperability issues” with the Navy. Work with the Marine Corps is also underway. 

“The Army has done a really nice job of ingesting a lot of their sustainment data into a platform called Army 360. We feed into that platform live data now, and then we are able to receive that live data. We are ingesting data now into our demand planning models not just for the Army. We’re on the path for the Navy, and then the Air Force is next. We got a little more work to do with Marines. We’re not as accurate as where we need to be, and so this is our path with each service to drive to that accuracy,” Sanford said.

Demand forecasting, however, varies widely across the services — the DLA official cautioned against directly comparing forecasting performance.

“When we compare services from a demand planning perspective, it’s not an apples-to-apples comparison.  Each service has different products, policies and complexities that influence planning variables and outcomes. Broadly speaking, DLA is in partnership with each service to make improvements to readiness and forecasting,” the DLA official said.

The agency is also using AI and machine learning to improve how it measures true administrative and production lead times. By analyzing years of historical data, the tools can identify how industry has actually performed — rather than how long deliveries were expected to take — and factor that into DLA stock levels.  
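
As a rough illustration of that kind of analysis (the records, dates, and field names below are hypothetical, not DLA data), observed delivery durations can be compared against contracted lead times to expose the gap between advertised and actual performance:

```python
from datetime import date
from statistics import median

# Hypothetical order records: contracted production lead time (days)
# vs. what was actually delivered (all values are illustrative).
orders = [
    {"contracted_days": 90,  "ordered": date(2025, 1, 10), "delivered": date(2025, 5, 2)},
    {"contracted_days": 90,  "ordered": date(2025, 2, 3),  "delivered": date(2025, 5, 30)},
    {"contracted_days": 120, "ordered": date(2025, 3, 15), "delivered": date(2025, 8, 1)},
]

def actual_days(order):
    """Observed lead time: days from order to delivery."""
    return (order["delivered"] - order["ordered"]).days

# Base stock levels on observed performance, not the advertised figure.
observed = [actual_days(o) for o in orders]
slippage = [actual_days(o) - o["contracted_days"] for o in orders]
print(median(observed), slippage)
```

Run over years of real order history, the median observed lead time and per-order slippage are the kind of signals that could feed stock-level calculations.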

“When we put out requests, we need information back to us quickly. And then you’ve got to hold us accountable for getting information back to you quickly, too. And then on the production lead times, they’re not as accurate as advertised. There’s something that’s advertised, but then there’s the reality of what we’re getting, and it’s not meeting the target that was initially contracted for,” Sanford said.

The post DLA turns to AI, ML to improve military supply forecasting first appeared on Federal News Network.


Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.

21 January 2026 at 07:15

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
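
The plugin works by prompting the model to avoid these patterns; the same catalog idea can also run in the other direction, mechanically flagging tells in finished text. A toy sketch (the patterns below are the author's illustrative examples, not the project's actual 24 entries):

```python
import re

# A few illustrative patterns in the spirit of the Wikipedia guide
# (hypothetical examples, not WikiProject AI Cleanup's real list).
AI_TELLS = {
    "stock phrase": re.compile(r"\b(delve into|it'?s worth noting|in today'?s fast-paced)\b", re.I),
    "rule-of-three": re.compile(r"\b\w+, \w+, and \w+\b"),
    "hedged summary": re.compile(r"\b(in conclusion|overall|ultimately),", re.I),
}

def find_tells(text):
    """Return the names of catalogued patterns a passage triggers."""
    return [name for name, pattern in AI_TELLS.items() if pattern.search(text)]

sample = "In conclusion, we delve into a vibrant, rich, and dynamic topic."
print(find_tells(sample))
```

A detector like this is crude on purpose: the Wikipedia list exists because individual tells are weak evidence alone and only suggestive in combination.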


Why do robots fail? Building "know before you act" with world models

19 January 2026 at 10:01

An approach that fills the gap between control and reinforcement learning

Techniques for making robots move split broadly into model-based control and experience-based learning. The archetype of the former is classical control engineering, which assumes the robot's dynamics and sensor behavior are reasonably well understood and adjusts inputs to close the gap to a target. The archetype of the latter is reinforcement learning, which discovers good behavior through trial and error even without a correct model to start from. Both are powerful, but robots deployed in the field tend to stumble squarely between the two.

Control engineering is extremely stable when the model is correct. In real settings, though, the model is not always correct. The floor is wet, a package's center of mass is slightly off, a draft from the air conditioning nudges a light part, nominally identical components differ subtly in friction. These mundane deviations are hard to fold into the equations, yet they decide whether the robot succeeds or fails. As a result, even a controller carefully tuned on site can break down in unanticipated situations.

Reinforcement learning looks as if it could absorb such deviations through learning. But trial and error on a physical robot is expensive. You cannot let it fall over, failures can damage the surroundings, and even collecting data takes time. In other words, the massive number of trials that reinforcement learning thrives on is too heavily constrained in the real world. Training in a simulator and then transferring to hardware is widely used, but it runs into the gap between simulation and reality, the so-called sim-to-real problem, and model inaccuracy returns as the central issue.

World models are the idea that bridges this dilemma. A world model learns from real data how the environment responds and makes that predictable internally. The crucial point is that the goal is not acquiring perfect physics but estimating the future well enough for decision making. For example, rather than predicting to the millimeter where a pushed box will end up, it is valuable simply to judge whether a different push raises the risk of tipping, or whether the object is slippery enough that grasping beats pushing.

Beyond that, world models connect control and learning in the sense of reading out an action's consequences before moving. Control has long had frameworks that choose inputs while predicting a short way into the future; learning can acquire that predictive model itself from data. A world model clicks into place once you see it as the fusion of the two. Robots fail precisely because this predict-then-choose capability is weak, and world models are a design philosophy for shoring it up.

Predict, then act: connecting model predictive control and world models

When giving a robot "know before you act," the easiest analogy is mental simulation. Humans, too, briefly imagine "at this angle I'll knock it over" or "grip it here and it won't slip" before reaching for a cup. A world model is the component that implements this as computation.

At the center is an idea close to model predictive control (MPC). MPC predicts the future over a short horizon from the current state and selects the input sequence that best satisfies the objective within that prediction. Rather than executing the whole sequence, it executes only the first input, then observes again and updates the plan. This way, even a somewhat inaccurate model can operate stably, correcting course from observations. The world model enters at the point where the predictive model is replaced: instead of relying only on an analytic physics model, the prediction is learned from data.
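
A minimal sketch of that predict-execute-replan loop, assuming a toy 1-D cart task with a hand-coded linear map standing in for the learned world model (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "world model": a one-step predictor. Here a hand-coded
# linear map plays that role; in practice its parameters would be
# fit from logged (state, action) data.
def learned_model(state, action):
    pos, vel = state
    return np.array([pos + 0.1 * vel, vel + 0.1 * action])

def cost(state, target=1.0):
    return (state[0] - target) ** 2  # squared distance to the goal

def mpc_step(state, horizon=10, n_candidates=200):
    """Random-shooting MPC: sample action sequences, roll each out
    through the model, return the first action of the best sequence."""
    best_action, best_cost = 0.0, float("inf")
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state, 0.0
        for a in seq:
            s = learned_model(s, a)
            total += cost(s)
        if total < best_cost:
            best_cost, best_action = total, seq[0]
    return best_action

# Closed loop: execute only the first action, observe, replan.
state = np.array([0.0, 0.0])
for _ in range(50):
    a = mpc_step(state)
    state = learned_model(state, a)  # the real plant stands in here too

print(round(float(state[0]), 2))  # position after 50 replanning steps
```

Because only the first action is ever committed, a mediocre model and a crude sampler still steer the cart toward the target; the replanning loop absorbs their errors.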

This connection pays off in robotics because short-horizon prediction is useful in so many situations. In obstacle avoidance, seeing a few seconds ahead improves safety. In grasping and pushing, reading the response immediately after contact reduces failures. Even without a perfect long-term forecast, making just the next move smarter produces results. The strength of a world model is that this short-horizon prediction can be trained from data and fed into an MPC-style framework.

A problem arises here, though. In a robot's world, much of the state cannot be observed. A camera cannot see friction coefficients, internal stresses, or the fine texture of a contact surface, and the sensors themselves lag and carry noise. World models therefore often maintain an internal representation called a latent state rather than working on raw observations. The latent state bundles an estimate of "what the situation really is right now," including unseen factors. By learning transitions of the latent state, the model can treat the world as evolving smoothly internally even when the observations fluctuate.
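
One way to picture this, with the learned encoder and transition stubbed out as simple linear maps (purely illustrative, not a trained model): the internal state blends the model's own prediction with each noisy observation, so it evolves more smoothly than the raw sensor signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stubs for learned components; in a real world model these
# would be trained networks, not hand-picked linear maps.
def transition(latent, action):      # predicts the next latent state
    return 0.9 * latent + action

def encode(observation):             # maps a raw observation into latent space
    return observation

def update(latent_pred, obs, gain=0.2):
    """Blend the model's prediction with the noisy observation."""
    return latent_pred + gain * (encode(obs) - latent_pred)

true_state, latent = 0.0, 0.0
latent_trace, obs_trace = [], []
for t in range(100):
    action = 0.1
    true_state = 0.9 * true_state + action        # real dynamics
    obs = true_state + rng.normal(0, 0.5)         # noisy sensor reading
    latent = update(transition(latent, action), obs)
    latent_trace.append(latent)
    obs_trace.append(obs)

# The filtered latent trace fluctuates far less than the raw observations.
print(np.std(np.diff(latent_trace)) < np.std(np.diff(obs_trace)))
```

The structure, predict with the transition and then correct against the observation, is the same whether the components are these stubs or learned networks.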

What matters here is that a world model requires deciding what to predict. Predicting every pixel of a camera image is heavy, and pixels are rarely what the robot needs. Object position, pose, presence of contact, direction of force: the elements that directly drive action selection are often enough. In other words, a world model grows stronger the better it can represent the world at the level of abstraction the action requires.

And what is indispensable in practice is handling uncertainty. Robot manipulation yields different outcomes for the same action: when pushing a box, a slight difference in angle makes it rotate or slide. If the world model commits to a single future, the plan collapses the moment that future misses. So the world model needs to keep some spread over the future. Producing multiple possibilities makes robust strategies easy: choose the move that stays safe even in the worst case, or first choose the action that shrinks the uncertainty. This is where it earns its value as a world model for decision making rather than a mere predictor.
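
A common minimal way to keep that spread is an ensemble: score each candidate action by its worst predicted outcome across several models. In this hypothetical box-pushing sketch, the ensemble members are stubbed as the same dynamics under different friction guesses:

```python
import numpy as np

# The ensemble stands in for the spread of possible futures: each
# member is the same push model with a different friction estimate
# (in practice, separately trained models).
frictions = [0.2, 0.5, 0.8]

def predict_slide(push_force, friction):
    # Toy model: how far the box slides, against a 1.0 m target.
    return push_force * (1.0 - friction)

def worst_case_miss(push_force, target=1.0):
    """Score an action by its worst outcome across the ensemble."""
    slides = [predict_slide(push_force, f) for f in frictions]
    return max(abs(s - target) for s in slides)

# Pick the push force whose *worst-case* miss is smallest, rather
# than the one best under a single assumed friction value.
candidates = np.linspace(0.5, 5.0, 46)
best = min(candidates, key=worst_case_miss)
print(round(float(best), 1))
```

Under a single assumed friction of 0.2 the optimizer would shove hard; the worst-case criterion instead hedges across all three futures.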

On the other hand, building on long-horizon prediction runs into compounding error. Generating the future internally one step at a time, small deviations snowball, and a few seconds out the model can drift into a world different from reality. Avoiding this puts a premium on design: keep replanning over short horizons, correct frequently from observations, and treat the model's region of validity explicitly. Putting a world model on a robot is less about building the model than about building how the model and reality get along.

Real-world pitfalls, and designing toward safety

A world model can be a tool for reducing robot failures. It is not, however, magic that makes a robot smarter the moment it is installed. If anything, introducing one can change the kinds of failures you see. The classic case is being led astray by a plausible misprediction. The robot's internal simulation concludes "this motion will succeed," but in reality the friction differs, the object slips, and the robot brushes an obstacle. The unnerving part is that the internals are self-consistent, so the reason for the failure is hard to see. From the outside, the robot appears to have confidently chosen a dangerous move.

At the root of this problem lies data bias. A world model learns from observed data, so its predictions are weak wherever data is thin. A fielded robot accumulates data on the situations its job routinely produces while deliberately avoiding dangerous situations and edge cases, which are therefore hard to learn in the first place. The result is that when an exception does occur, the world model turns brittle. Reality is ironic here: the very data avoided for safety's sake is the data needed to build safety.

As a design matter, then, a structure that does not lean entirely on the world model is essential. The foundation is placing conservative constraints in a separate layer: caps on speed and force, keep-out zones, bans on actions certain to cause collisions, fences that must not be crossed no matter what the world model concludes. The world model optimizes inside those fences. This division of labor alone keeps a misprediction from becoming fatal.

Next in importance is the ability to detect anomalies and doubt itself. When the gap between what the world model predicted and what was actually observed grows large, that is a sign the model is currently off. Shorten the planning horizon, switch to conservative motion, or pause and re-estimate. With such mode switching, operations can cover for the world model's weaknesses. This is close to a field safety culture: the same idea as aircraft and factory equipment failing over to safe states on anomaly, a world model also needs a way to step down.
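
That predict-versus-observe check can be sketched in a few lines (the threshold, dynamics, and mode names are illustrative assumptions, not a real controller):

```python
def model_predict(state, action):
    # The world model's one-step prediction (stub dynamics).
    return state + action

class AnomalyMonitor:
    """Compare predictions against observations; when the running
    error grows, 'doubt the model' and switch to a conservative mode."""

    def __init__(self, threshold=0.5, alpha=0.3):
        self.err = 0.0            # exponentially smoothed prediction error
        self.threshold = threshold
        self.alpha = alpha        # smoothing factor for new errors

    def update(self, predicted, observed):
        e = abs(observed - predicted)
        self.err = (1 - self.alpha) * self.err + self.alpha * e
        return "conservative" if self.err > self.threshold else "normal"

monitor = AnomalyMonitor()
state, modes = 0.0, []
for t in range(20):
    action = 0.1
    predicted = model_predict(state, action)
    # After t=10 the floor "gets slippery": the real dynamics change
    # and the model's predictions stop matching observations.
    observed = predicted if t < 10 else predicted + 1.0
    modes.append(monitor.update(predicted, observed))
    state = observed

print(modes[0], modes[-1])
```

The smoothing keeps a single noisy reading from triggering the switch, while a sustained mismatch reliably does.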

In practice, the design of data collection also decides whether a world model succeeds. The logs a robot gathers in routine operation are often not enough. Deliberately run it under slightly different conditions, change the object being manipulated, change the sensors. Securing data that shakes the model is what makes the world model robust. Dangerous shaking is off the table, of course, so exploration within safe bounds and assistance from simulators become necessary. A world model is a device for getting smarter safely, but getting smarter still requires learning, and that learning in turn needs a design that keeps it safe.

Finally, world models also shift where responsibility is drawn. Conventional robots ran on control laws and rule-based logic, and the reasons for their motions were comparatively easy to trace. Once a world model is in place, actions are decided by the results of internal simulation, which makes explanation and auditing harder. For field operation, it must at least be possible to visualize, in a debuggable form, what information the world model grounds its predictions in. When a worker on the floor senses "the floor is slippery today, this is risky," can the model reflect that, and if it cannot, can someone manually drop the robot into a conservative mode? Without designing this human interface, the technology may advance, but the field will reject it.

Robots fail because the world is complex, and because that complexity shows up as exceptions. A world model takes that complexity inside and, by having the robot imagine outcomes before acting, lowers the probability of failure. At the same time, when the world model is wrong, it creates the danger of charging ahead on faith in a wrong future. The key is therefore not to enshrine the world model as a clever brain, but to position it as a tool for producing uncertain hypotheses and to keep correcting those hypotheses with real observations and constraints. Giving a robot "know before you act" is not about predicting the future; it is about building a mechanism for dealing safely with a future that cannot be predicted.

OpenAI to test ads in ChatGPT as it burns through billions

16 January 2026 at 16:20

On Friday, OpenAI announced it will begin testing advertisements inside the ChatGPT app for some US users in a bid to expand its customer base and diversify revenue. The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a "last resort" and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.

The banner ads will appear in the coming weeks for logged-in users of the free version of ChatGPT as well as the new $8 per month ChatGPT Go plan, which OpenAI also announced Friday is now available worldwide. OpenAI first launched ChatGPT Go in India in August 2025 and has since rolled it out to over 170 countries.

Users paying for the more expensive Plus, Pro, Business, and Enterprise tiers will not see advertisements.


Lego's latest educational kit seeks to teach AI as part of computer science, not to build a chatbot

16 January 2026 at 13:46

Last week at CES, Lego introduced its new Smart Play system, with a tech-packed Smart Brick that can recognize and interact with sets and minifigures. It was unexpected and delightful to see Lego come up with a way to modernize its bricks without the need for apps, screens or AI. 

So I was a little surprised this week when the Lego Education group announced its latest initiative is the Computer Science and AI Learning Solution. After all, generative AI feels like the antithesis of Lego’s creative values. But Andrew Silwinski, Lego Education’s head of product experience, was quick to defend Lego’s approach, noting that fluency in the tools behind AI is less about generating sloppy images or music and more about expanding what it means to teach computer science.

“I think most people should probably know that we started working on this before ChatGPT [got big],” Silwinski told Engadget earlier this week. “Some of the ideas that underline AI are really powerful foundational ideas, regardless of the current frontier model that's out this week. Helping children understand probability and statistics, data quality, algorithmic bias, sensors, machine perception. These are really foundational core ideas that go back to the 1970s.” 

To that end, Lego Education designed courses for grades K-2, 3-5 and 6-8 that incorporate Lego bricks, additional hardware and lessons tailored to introducing the fundamentals of AI as an extension of existing computer science education. The kits are designed for four students to work together, with teacher oversight. Much of this stems from a study Lego commissioned, which found that teachers often lack the right resources to teach these subjects. The study showed that half of teachers globally say “current resources leave students bored” while nearly half say “computer science isn’t relatable and doesn’t connect to students’ interests or day to day.” Given kids’ familiarity with Lego and the multiple decades of experience Lego Education has in putting courses like this together, pushing in this direction seems like a logical step.

In Lego’s materials about the new courses, AI is far from the only subject covered. Coding, looping code, triggering events and sequences, if/then conditionals and more are all on display through the combination of Lego-built models and other hardware to motorize it. It feels more like a computer science course that also introduces concepts of AI rather than something with an end goal of having kids build a chatbot.

In fact, Lego set up a number of “red lines” in terms of how it would introduce AI. “No data can ever go across the internet to us or any other third party,” Silwinski said. “And that's a really hard bar if you know anything about AI.” So instead of going to the cloud, everything had to be able to do local inference on, as Silwinski said, “the 10-year-old Chromebooks you’ll see in classrooms.” He added that “kids can train their own machine learning models, and all of that is happening locally in the classroom, and none of that data ever leaves the student's device.”

Lego also says that its lessons never anthropomorphize AI, one of the things that is so common in consumer-facing AI tools like ChatGPT, Gemini and many more. “One of the things we're seeing a lot of with generative AI tools is children have a tendency to see them as somehow human or almost magical. A lot of it's because of the conversational interface, it abstracts all the mechanics away from the child.” 

Lego also recognized that it had to build a course that’ll work regardless of a teacher’s fluency in such subjects. So a big part of developing the course was making sure that teachers had the tools they needed to be on top of whatever lessons they’re working on. “When we design and we test the products, we're not the ones testing in the classroom,” Silwinski said. “We give it to a teacher and we provide all of the lesson materials, all of the training, all of the notes, all the presentation materials, everything that they need to be able to teach the lesson.” Lego also took into account the fact that some schools might introduce its students to these things starting in Kindergarten, whereas others might skip to the grade 3-5 or 6-8 sets. To alleviate any bumps in the courses for students or teachers, Lego Education works with school districts and individual schools to make sure there’s an on-ramp for those starting from different places in their fluency.

While the idea of “teaching AI” seemed out of character for Lego initially, the approach it’s taking here actually reminds me a bit of Smart Play. With Smart Play, the technology is essentially invisible — kids can just open up a set, start building, and get all the benefits of the new system without having to hook up to an app or a screen. In the same vein, Silwinski said that a lot of the work you can do with the Computer Science and AI kit doesn’t need a screen, particularly the lessons designed for younger kids. And the sets themselves have a mode that works like a mesh network, where you connect numerous motors and sensors together to build “incredibly complex interactions and behaviors” without even needing a computer.

For educators interested in checking out this latest course, Lego has single kits up for pre-order starting at $339.95; they’ll start shipping in April. That’s the pricing for the K-2 sets; the 3-5 and 6-8 sets are $429.95 and $529.95, respectively. A single kit covers four students. Lego is also selling bundles with six kits, and school districts can request a quote for bigger orders.


This article originally appeared on Engadget at https://www.engadget.com/ai/legos-latest-educational-kit-seeks-to-teach-ai-as-part-of-computer-science-not-to-build-a-chatbot-184636741.html?src=rss


TSMC says AI demand is “endless” after record Q4 earnings

16 January 2026 at 11:55

On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.

TSMC manufactures chips for companies including Apple, Nvidia, AMD, and Qualcomm, making it a linchpin of the global electronics supply chain. The company produces the vast majority of the world's most advanced semiconductors, and its factories in Taiwan have become a focal point of US-China tensions over technology and trade. When TSMC reports strong demand and ramps up spending, it signals that the companies designing AI chips expect years of continued growth.

"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."


Wikipedia signs major AI firms to new priority data access deals

15 January 2026 at 10:25

On Thursday, the Wikimedia Foundation announced API access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral AI, expanding its effort to get major tech companies to pay for high-volume API access to Wikipedia content, which these companies use to train AI models like Microsoft Copilot and ChatGPT.

The deals mean that most major AI developers have now signed on to the foundation's Wikimedia Enterprise program, a commercial subsidiary that sells high-speed API access to Wikipedia's 65 million articles at higher speeds and volumes than the free public APIs provide. Wikipedia's content remains freely available under a Creative Commons license, but the Enterprise program charges for faster, higher-volume access to the data. The foundation did not disclose the financial terms of the deals.

The new partners join Google, which signed a deal with Wikimedia Enterprise in 2022, as well as smaller companies like Ecosia, Nomic, Pleias, ProRata, and Reef Media. The revenue helps offset infrastructure costs for the nonprofit, which otherwise relies on small public donations while watching its content become a staple of training data for AI models.


Bandcamp bans purely AI-generated music from its platform

14 January 2026 at 12:46

On Tuesday, Bandcamp announced on Reddit that it will no longer permit AI-generated music on its platform. "Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp," the company wrote in a post to the r/bandcamp subreddit. The new policy also prohibits "any use of AI tools to impersonate other artists or styles."

The policy draws a line that some in the music community have debated: Where does tool use end and full automation begin? AI models are not artists in themselves, since they lack personhood and creative intent. But people do use AI tools to make music, and the spectrum runs from using AI for minor assistance (cleaning up audio, suggesting chord progressions) to typing a prompt and letting a model generate an entire track. Bandcamp's policy targets the latter end of that spectrum while leaving room for human artists who incorporate AI tools into a larger creative process.

The announcement emphasized the platform's desire to protect its community of human artists. "The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain," the company wrote. Bandcamp asked users to flag suspected AI-generated content through its reporting tools, and the company said it reserves "the right to remove any music on suspicion of being AI generated."


12 Most Popular Deep Learning Libraries 2026

By: Balaji
14 January 2026 at 02:25

Deep learning libraries are essentially collections of functions and routines written in a given programming language. A good deep learning library can make it much simpler for data engineers, data scientists and developers to perform tasks of any complexity without having to rewrite vast amounts of code. Artificial intelligence (AI) has been rapidly […]

The post 12 Most Popular Deep Learning Libraries 2026 appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

Hegseth wants to integrate Musk’s Grok AI into military networks this month

13 January 2026 at 16:13

On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."


Microsoft vows to cover full power costs for energy-hungry AI data centers

13 January 2026 at 15:05

On Tuesday, Microsoft announced a new initiative called "Community-First AI Infrastructure" that commits the company to paying full electricity costs for its data centers and refusing to seek local property tax reductions.

As demand for generative AI services has increased over the past year, Big Tech companies have been racing to spin up massive new data centers to serve chatbots and image generators, facilities that can have profound economic effects on the areas where they are located. Among other issues, communities across the country have grown concerned that data centers are driving up residential electricity rates through heavy power consumption and straining water supplies due to server cooling needs.

The International Energy Agency (IEA) projects that global data center electricity demand will more than double by 2030, reaching around 945 TWh, with the United States responsible for nearly half of total electricity demand growth over that period. This growth is happening while much of the country's electricity transmission infrastructure is more than 40 years old and under strain.


Google removes some AI health summaries after investigation finds “dangerous” flaws

12 January 2026 at 16:47

On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found people were being put at risk by false and misleading information. The removals came after the newspaper found that Google's generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.

Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: The AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google only deactivated the summaries for the liver test queries, leaving other potentially harmful answers accessible.

The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also failed to adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.


Tech Moves: AWS VP switches roles; Seattle’s new economic development head; Microsoft Teams exec departs

9 January 2026 at 16:16

Amazon’s Uwem Ukpong has a new title, moving from vice president of Global Services to VP of AWS Industries.

Ukpong has been with the tech giant for more than four years, joining from energy technology company Baker Hughes.

Ukpong’s resume is dominated by a 22-year stretch at Schlumberger, the Houston-based energy services company that operates internationally.

Alicia Teel is now acting director of the City of Seattle’s Office of Economic Development. She was previously deputy director of the department, which supports small businesses and economic growth.


Teel began her career at the Seattle Metropolitan Chamber of Commerce where she worked for more than 15 years.

At the Office of Economic Development, “[o]ur talented team is dedicated to leading projects and making investments that open up access to economic opportunities across our city, reduce the racial wealth gap, and encourage innovation and growth,” Teel said in a statement.

In announcing the appointment, Seattle Mayor Katie Wilson thanked former director Markham McIntyre “for his leadership supporting small business recovery after the pandemic.”

McIntyre was in the role for four years. He also previously held leadership positions with the Chamber of Commerce, departing with the title of executive VP.


Manik Gupta is leaving his role as corporate VP of Microsoft Teams.

“With Teams, I had the opportunity to combine my consumer DNA with learning the scale and complexity of the enterprise. The lessons, playbooks, and friendships I’ve gained will stay with me always,” Gupta said on LinkedIn.

Gupta, who is based in California, joined Microsoft in 2021. He said he’s exploring career options in AI, adding that “I’m convinced that the hardest and most interesting work in AI now lies in turning powerful models into products people can rely on every day.”

ESS appointed Drew Buckley as CEO of the Oregon-based, long-duration energy storage company. Buckley joined the battery company in August as leader of its investor relations and capital market strategy. He previously spent 17 years as a technology-focused partner at the financial services firm William Blair.


“Drew brings an incredible track record of success, with the experience and industry relationships necessary to lead ESS to its next stage, manufacturing and delivery of our first Energy Base projects, and broader commercialization expected to commence this year,” said Harry Quarls, ESS board chairman.

ESS also promoted Kate Suhadolnik from interim CFO to chief financial officer. Suhadolnik has been with the publicly traded company for more than four years.

Eric Dresselhuys resigned as ESS CEO in February and Kelly Goodman, who had been vice president of legal, became the interim chief executive. Goodman is now chief strategy officer and general counsel.


Savanna Thompson is now chief business operations officer at fusion company Helion Energy after serving as VP of people & workplace operations. She has been with the Everett, Wash., business for more than three years.

“As we move from building fusion machines to deploying fusion power plants, this role reflects the importance of scaling our teams, systems, and infrastructure that support our ambitious goals,” said Helion CEO David Kirtley in announcing the promotion.

Thompson joined Helion from 98point6, a Seattle telehealth company.


Jackie Ostlie has returned to Microsoft, taking the role of director of AI initiatives in Microsoft Learning.

“I am incredibly grateful to Rachel Richardson for the opportunity and am excited to be back with some of the world’s smartest, kindest, most supportive humans in tech,” Ostlie said on LinkedIn.

Ostlie rejoins the company after a leadership role at Google Cloud Learning. Her career has included positions with multiple Seattle-area organizations including Veeam Software, Expedia and the nonprofit World Vision.


— After recently landing a $40 million investment, Seattle AI roleplay startup Yoodli appointed two new leaders.

  • Emma Day is principal recruiter at Yoodli, leaving a comparable role at Seattle-based tech hiring platform Karat. “Yoodli has the rare and beautiful combination of an incredible mission — to help people communicate with confidence, a world-class team and a TON of growth ahead,” Day said on LinkedIn.
  • Grayson Hay is principal software engineer, building on similar past roles at CodeSee, Tableau Software and Microsoft. Hay’s varied career also includes cinematography and bungee fitness instruction.

— Seattle cryptocurrency company Coinme named Hazen Baron as its general counsel. Baron is based in Arkansas; his past employers include Walmart and fintech company Stronghold, among others.

Late last month Coinme announced an agreement with Washington state regulators to pause a temporary cease-and-desist order, clearing the way for the company to resume operations in the state.

— Jason Cavness, a Seattle-based U.S. market development partner for TechBank, is now a fellow with Earth Venture Capital, a Vietnam-based firm investing in climate tech internationally.

— The Microsoft Alumni Network, which represents more than 290,000 former Microsoft employees, has expanded its board of trustees, appointing eight new members:

  • Declan Bradshaw, a 22-year employee based in Dublin and Redmond, Wash., who led Xbox’s European launch.
  • George Durham, who led community engagement, global Technology for Good programs, and other initiatives after joining in 2004.
  • Erendira Gonzalez, who over three decades led multicultural teams and launched the first Microsoft Technology Center in Latin America.
  • Bill Kirst, who served as the director of change for Commercial Systems & Business Intelligence.
  • Laura Luethe, who leads strategic content and communications as Microsoft HR’s director of communications.
  • Somanna Palacanda, a 23-year employee who leads International Social Impact for Microsoft Elevate.
  • Michelle September, who spent nearly 20 years at Microsoft in account management and industry leadership roles, among others.
  • Andrew Winnemore, Microsoft VP of HR People Operations.

In addition, Larry Hryb, a longtime Xbox leader, was named vice chair of the Microsoft Alumni Network board.

ChatGPT Health lets you connect medical records to an AI that makes things up

8 January 2026 at 13:00

On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for "health and wellness conversations" that connects a user's health and medical records to the chatbot in a secure way.

But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It's a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.

Despite the known accuracy issues with AI chatbots, OpenAI's new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses like summarizing care instructions, preparing for doctor appointments, and understanding test results.


Passing the AI Torch: Empowering leaders across the organization to drive AI strategy

7 January 2026 at 14:48

AI is everywhere, and CIOs can’t lead AI strategy alone. With 62% of organizations experimenting with AI, its reach is too broad for oversight to live solely with IT. Nearly half (48%) of CIOs still shoulder responsibility for leading AI strategy, even though 88% of generative AI usage happens outside their teams. The result: AI implementation is skyrocketing, but few projects across the business deliver real impact.

The solution is not more IT oversight, but distributed leadership. Department leaders know their teams best. They observe firsthand which processes slow teams down, where AI can automate, and how workflows truly function. This deep expertise makes them uniquely suited to lead AI strategy across their respective departments and realize AI’s full potential. CIOs need to pass the torch and empower them to lead.

The AI Bottleneck: Even the strongest CIOs can’t carry the entire AI agenda alone

CIOs are the champions of innovation – expected to deliver real ROI from AI while keeping the enterprise secure, aligned, and ahead of the curve. But when every AI request, experiment, and implementation lands on their desk, even the best leaders face impossible bottlenecks. On top of this, most generative AI usage now resides outside IT, across finance, marketing, HR, and more.

The consequence is a growing “adoption-value gap.” AI initiatives exist throughout the business, but only 5% deliver measurable ROI. When CIOs try to own every AI project, innovation stalls. To get real value, responsibility must shift to department leaders – those closest to the work who drive meaningful results.

Distributed Leadership: The New Model for AI Success

The most successful and impactful CIOs don’t try to own everything – they orchestrate. Department leaders who understand AI tools and are comfortable using them can step up and take ownership within their teams, relieving the CIO burden. At Freshworks, we’re putting this into practice: AI works alongside our people to remove busy work, accelerate productivity, and unlock higher-value work.

Our teams are seeing measurable efficiency gains across the organization:

  • Customer Support: AI agents now handle 34% of chat tickets, allowing human agents to focus on complex, high-value conversations. Productivity per agent has increased 25%, and new agent ramp time has been reduced from six months to three months.
  • Engineering & Quality: Developers use AI tools to write code, while quality engineers leverage AI for test cases and automation. Cycle times have dropped by up to 50%, and debugging efficiency has improved from hours to minutes in some cases.
  • Web & Digital Teams: Building new web pages now takes hours instead of weeks, freeing teams to focus on higher-impact initiatives.
  • IT Teams: AI automates ticketing, categorizes issues, and resolves requests faster, improving employee experience across the business.
  • HR & Recruiting: AI-powered Slack integrations help review resumes quickly and accurately, streamlining recruiting and onboarding.

Shifting ownership to department leaders unlocks each team’s potential. CIOs move from “owners” to enablers, setting frameworks and guardrails. This approach isn’t about cost-cutting – it frees talent to drive innovation, growth, and problem-solving, benefiting business outcomes and employee engagement.

Building AI-Native Leaders Across the Business

Non-technical leaders may find taking the reins daunting. CIOs can support them by introducing simple, intuitive AI tools, offering literacy programs, and creating “AI champion” groups to share best practices. Teams can explore use cases tied to KPIs—financial forecasting, talent analytics, or operational efficiency—while clear policies encourage responsible experimentation.

From Ownership to Orchestration: The CIO as the Conductor

Think of the CIO as a conductor, not a player. They set the vision, ensure harmony, and provide structure, while department leaders apply their expertise strategically. The result: an AI-fluent organization where experimentation happens faster, and value grows organically.

AI success comes from collaboration across the business. CIOs who empower leaders while providing clear governance unlock AI’s true potential—making it work for people, not against them.

For CIOs seeking concrete examples of driving measurable ITSM value with AI, learn more about Freshservice here.
