
Microsoft releases second emergency Windows 11 update to fix Outlook crashes

25 January 2026 at 14:20

Microsoft issued another out-of-band update to fix a bug that caused Outlook to crash for Windows 11 users. This second emergency patch addresses issues seen with Outlook and files stored in the cloud following Microsoft's January 2026 Windows security update.

According to Microsoft, this update fixes a bug where some apps that "open or save files stored in cloud-backed locations" became unresponsive or displayed error messages. Some users also experienced Outlook crashing or failing to open when PST files were stored in cloud-backed locations like OneDrive.

This is the second time this year that Microsoft has had to issue a last-minute fix for bugs related to its January security update. Last week, some Windows 11 devices couldn't shut down or hibernate, while other devices running Windows 10 or 11 couldn't log in through remote connections. For context, Microsoft only issues out-of-band updates when there's a serious issue that can't wait until its regular update cycle. Fortunately, the latest out-of-band update is cumulative, so you only need to download and install this one to fix the issues seen with the January update.

This article originally appeared on Engadget at https://www.engadget.com/computing/microsoft-releases-second-emergency-windows-11-update-to-fix-outlook-crashes-192012812.html?src=rss


© Microsoft

Microsoft Outlook running on a laptop.

SSH over USB on a Raspberry Pi

25 January 2026 at 13:00
The edge of a laptop is shown with a USB cable plugged into it. The other end of the cable is plugged into a Raspberry Pi Zero.

Setting up access to a headless Raspberry Pi is one of those tasks that should take a few minutes, but for some reason always seems to take much longer. The most common method is to configure Wi-Fi access and an SSH service on the Pi before starting it, which can go wrong in many different ways. This author, for example, recently spent a few hours failing to set up a headless Pi on a network secured with Protected EAP, and was eventually driven to using SSH over Bluetooth. This could thankfully soon be a thing of the past, as [Paul Oberosler] developed a package for SSH over USB, which is included in the latest versions of Raspberry Pi OS.

The idea behind rpi-usb-gadget is that a Raspberry Pi in gadget mode can be plugged into a host machine, which recognizes it as a network adapter. The Pi itself is presented as a host on that network, and the host machine can then SSH into it. Additionally, using Internet Connection Sharing (ICS), the Pi can use the host machine’s internet access. Gadget mode can be enabled and configured from the Raspberry Pi Imager. Setting up ICS is less plug-and-play, since an extra driver needs to be installed on Windows machines. Enabling gadget mode only lets the selected USB port work as a power input and USB network port, not as a host port for other peripherals.
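As a rough sketch of what this looks like from the host side (our illustration, not part of the rpi-usb-gadget package), once the gadget-mode Pi enumerates as a network device the SSH step can be scripted with a standard library such as paramiko. The hostname, username and password below are assumptions; substitute whatever you configured in the Raspberry Pi Imager.

# Minimal sketch: SSH into a Raspberry Pi exposed as a USB network gadget.
# Assumed values (not from the article): the Pi is reachable as
# "raspberrypi.local" over the USB link and accepts password auth for "pi".
import paramiko

HOST = "raspberrypi.local"   # hypothetical mDNS name on the USB network link
USER = "pi"                  # hypothetical username configured in the Imager
PASSWORD = "raspberry"       # hypothetical password; key-based auth is better

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD, timeout=10)

# Run a quick command to confirm the link is up.
_, stdout, _ = client.exec_command("uname -a")
print(stdout.read().decode().strip())

client.close()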

An older way to get USB terminal access is using OTG mode, which we’ve seen used to simplify the configuration of a Pi as a simultaneous AP and client. If you want to set up headless access to Raspberry Pi desktop, we have a guide for that.

Thanks to [Gregg Levine] for the tip!

Apple reportedly plans to reveal its Gemini-powered Siri in February

25 January 2026 at 12:43

A new and improved Siri may finally make an appearance, but this time, it could be with a Google Gemini glow up. According to Bloomberg's Mark Gurman, Apple wants to announce a new Siri in "the second half of February" that will show off the results of its recently announced partnership with Google and offer demonstrations of the Gemini-powered capabilities.

After this reveal, Gurman reported that the new Siri will make its way to iOS 26.4, which is also slated to enter beta testing in February before its public release in March or early April. Apple has been meaning to launch its next-gen Siri ever since its announcement at WWDC 2024, but now we know that this Gemini-powered Siri will behave more like an AI chatbot, similar to OpenAI's ChatGPT, thanks to another Bloomberg report from last week. 

Following the reported demo that's scheduled for late February, Gurman said Apple will have a grand reveal of the new Siri, which is currently codenamed Campos, at its annual developer conference in the summer. After that, the latest Siri and the accompanying Gemini-powered Apple Intelligence features are expected to arrive with iOS 27, iPadOS 27 and macOS 27, which should be available as beta releases in the summer.

This article originally appeared on Engadget at https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-174356923.html?src=rss


© Apple

The revamped Siri was shown off during WWDC 2024.

Did Edison accidentally make graphene in 1879?

24 January 2026 at 13:36

Graphene is the thinnest material yet known, composed of a single layer of carbon atoms arranged in a hexagonal lattice. That structure gives it many unusual properties that hold great promise for real-world applications: batteries, super capacitors, antennas, water filters, transistors, solar cells, and touchscreens, just to name a few. The physicists who first synthesized graphene in the lab won the 2010 Nobel Prize in Physics. But 19th century inventor Thomas Edison may have unknowingly created graphene as a byproduct of his original experiments on incandescent bulbs over a century earlier, according to a new paper published in the journal ACS Nano.

“To reproduce what Thomas Edison did, with the tools and knowledge we have now, is very exciting,” said co-author James Tour, a chemist at Rice University. “Finding that he could have produced graphene inspires curiosity about what other information lies buried in historical experiments. What questions would our scientific forefathers ask if they could join us in the lab today? What questions can we answer when we revisit their work through a modern lens?”

Edison didn't invent the concept of incandescent lamps; there were several versions predating his efforts. However, they generally had a very short life span and required high electric current, so they weren't well suited to Edison's vision of large-scale commercialization. He experimented with different filament materials, starting with carbonized cardboard and compressed lampblack. These, too, quickly burned out, as did filaments made with various grasses and canes, like hemp and palmetto. Eventually Edison discovered that carbonized bamboo made for the best filament, with life spans over 1,200 hours using a 110-volt power source.


How to use Google Photos' new Me Meme feature

24 January 2026 at 09:00

Google has started rolling out a new feature for its Photos app that can turn your images into memes. The feature, called Me Meme, uses Google Gemini to take meme templates and recreate them with a photo you choose from within the app. It’s still in its experimental stages and is only available to users in the US in English. In addition, it seems to be rolling out to Android users only at the moment, and Google warns that the generated images may not always match the original photo. If you do get access to the feature soon, check out the instructions below on how to use it.

  1. Open your Photos app. Go to the Create tab at the bottom and find the Me Meme option. If it’s not showing up even though you’re an Android user in the US, you’ll have to wait for it as it continues rolling out. Google told TechCrunch that it will reach iOS users over the coming weeks.

  2. You’ll see onscreen instructions the first time you use it. The next time you access the feature, you’ll go straight to the meme creation process.

  3. You can browse and choose one of the preset templates Google provides, but you can also upload a meme or any other image as a template.

  4. You then have to upload a photo that you want to insert into the meme. Google advises using a selfie or a photo where your face is clearly visible.

  5. After you tap Generate, you can save the meme if you’re satisfied with the result or share it directly to social media platforms online. You can also tap Regenerate to see a different output.

Android Authority spotted the feature in October 2025 and was able to try it out before its limited release. You can see an example of Me Meme’s output below.

The "Me meme" feature which I spotted back in October 2025 is finally announced by Google.

This is an experimental feature which will be available first for users in US only 😐

Here is the announcement forum posts- https://t.co/7P2JgJhoBk https://t.co/E60prcqcie pic.twitter.com/sFICxzVIPU

— AssembleDebug (Shiv) (@AssembleDebug) January 23, 2026

This article originally appeared on Engadget at https://www.engadget.com/ai/how-to-use-google-photos-new-me-meme-feature-140000157.html?src=rss


How to use Workout Buddy with Apple Watch and iOS 26

By: Rob Webb
24 January 2026 at 08:00

Apple’s iOS 26 and watchOS 26 introduced a new fitness companion called Workout Buddy. This feature uses Apple Intelligence to provide spoken feedback during workouts and give motivation based on your activity history. Workout Buddy analyzes your pace, heart rate, distance and other metrics to deliver real-time encouragement and performance insights directly through connected Bluetooth headphones. It works in conjunction with the Workout app on Apple Watch and is partially controlled through the Fitness app on iPhone. This guide walks you through everything needed to set up and use Workout Buddy effectively during workouts.

What Workout Buddy does

It’s important to note that Workout Buddy is not a full coaching program. Instead, it adds to your workout with spoken cues that reflect how your session is going. Workout Buddy can remind you of your weekly activity totals, alert you to personal bests or performance milestones and provide an overview when you’re finished. It is designed to feel like a supportive training partner rather than a strict coach.

The feature operates in English by default and uses a text-to-speech model trained on voices from Apple Fitness+ trainers. It is available for a subset of workout types, including running, walking, cycling, high-intensity interval training (HIIT) and strength training. It requires on-device Apple Intelligence, which means you’ll need to keep one of the latest iPhones running updated software nearby during workouts. 

Supported models include iPhone 15 Pro, iPhone 15 Pro Max and any iPhone 16 model. You’ll also need an Apple Watch running watchOS 26.  

Requirements before you begin

Before Workout Buddy appears in your Fitness app or Workout app, you must ensure a few things are in place. First, your Apple Watch must be running watchOS 26 or later and paired to an iPhone with iOS 26 installed. Second, your iPhone must be capable of on-device Apple Intelligence, meaning you must own one of the supported iPhone models we mentioned above and have Apple Intelligence enabled in the phone’s settings.

You’ll also need Bluetooth headphones paired with either your iPhone or your Apple Watch. Workout Buddy’s audio feedback cannot play through the watch speaker, so headphones are essential. Lastly, your device language must be set to English, at least initially. If any of these things are missing, the option to enable Workout Buddy may not appear.

How to turn on Workout Buddy from iPhone

While much of the interaction with Workout Buddy happens on Apple Watch during workouts, you can enable it and choose voice options from the Fitness app on iPhone.

Open the Fitness app on your iPhone and tap the Workout tab at the bottom. Scroll through the list of workout types until you find one you plan to use with Workout Buddy. Tap the waveform bubble icon associated with that workout. This will bring up settings where you can turn on Workout Buddy. Flip the toggle to enable it and choose a voice from the available options. Once you have selected a voice, close that screen and your choice is saved. When you start this workout type on Apple Watch, Workout Buddy will activate.

Enabling Workout Buddy for a workout type on iPhone means you do not need to toggle it on separately on Apple Watch each time for that specific workout. However, you may still adjust it from the watch interface for more granular control.

How to turn on Workout Buddy on Apple Watch

To use Workout Buddy during a session, open the Workout app on your Apple Watch. Turn the Digital Crown to scroll through and select the workout you want to do, such as Outdoor Run, Outdoor Walk, Outdoor Cycle, HIIT or Strength Training. If you want to see all available workouts, tap the Add button at the bottom.

Once the workout type is selected, look for the Alerts button on screen. Tap Alerts then scroll until you see Workout Buddy. Tap Workout Buddy and flip the switch to on. You will then be asked to choose a voice if one is not already selected on your iPhone. After selecting the voice, return to the previous screen and tap Start. Workout Buddy will begin working as soon as the workout does.

Using Workout Buddy during a workout

Once you start an exercise on your Watch or iPhone, Workout Buddy will speak to you through your connected headphones. The feedback is designed to be encouraging and relevant to your pace, performance or milestones. It may mention your current progress toward activity goals, pace, splits, personal bests or other highlights from your fitness data. At the end of your session, Workout Buddy will offer a summary of key metrics like duration, distance and calorie burn.

While a workout is active, you can temporarily mute the audio if you need silence. On Apple Watch during the session, swipe right to reveal controls then tap Mute. This pauses Workout Buddy’s spoken commentary without disabling the feature entirely.

Customizing and managing Workout Buddy settings

Workout Buddy is enabled on a per-workout-type basis. If you prefer voice feedback for running but silence for strength training, you can enable it for one and leave it off for the other. The Fitness app on iPhone allows you to set a default voice preference for each workout type. On Apple Watch you can quickly toggle the feature on or off before starting a session.

If Workout Buddy does not appear as an option for a particular workout type, you may need to check compatibility. Apple’s documentation indicates that only certain types* are supported initially and that the option will not appear for unsupported workouts.

*Supported Apple Watch models: Apple Watch SE (2nd generation), Apple Watch SE 3, Apple Watch Series 6, Apple Watch Series 7, Apple Watch Series 8, Apple Watch Series 9, Apple Watch Series 10, Apple Watch Series 11, Apple Watch Ultra, Apple Watch Ultra 2, Apple Watch Ultra 3

Troubleshooting common issues

If Workout Buddy fails to activate, make sure your devices meet the requirements outlined above. Confirm that your iPhone with Apple Intelligence is nearby and that Bluetooth headphones are connected. If audio feedback is missing, ensure headphones are paired correctly and that the language is set to English. Some users have reported that if the headphones are paired only to the Watch rather than the iPhone, it can interfere with feedback. Switching to the iPhone often resolves that issue.

For workout types where Workout Buddy previously worked but suddenly does not appear, you may try toggling the feature off and on again in the Fitness app or rebooting both devices. In rare cases removing and re-adding the workout type on Apple Watch can refresh the settings.

This article originally appeared on Engadget at https://www.engadget.com/wearables/how-to-use-workout-buddy-with-apple-watch-and-ios-26-130000922.html?src=rss


Engadget review recap: Valerion VisionMaster Max, Canon EOS R6 III and Samsung Bespoke Fridge

24 January 2026 at 07:30

Our reviews team is rested up after the CES grind and we’re back to business. This week we reviewed the latest devices from Valerion, Canon, Samsung and Sony, covering a wide range of product categories in the process. Grab yourself a beverage and cozy up with all the reviews you might’ve missed.

Valerion VisionMaster Max

The weeks leading up to the Super Bowl are a popular time to upgrade home theater setups before the big game. One way to do that is Valerion’s VisionMaster Max — if you’re willing to splurge. “The Valerion VisionMaster Max is a highly capable indoor projector that offers the best image quality I’ve seen thanks to the dynamic iris and Enhanced Black Level features,” contributing reporter Steve Dent discovered. “However, it’s also a lot more expensive than rival models at $5,000. It’s pretty hard to justify that extra money, unless you’re really fussy about picture accuracy and gaming performance, or need other features like the 1Gbps ethernet port.”

Canon EOS R6 III

If you’re primarily shooting video these days, contributing reporter Steve Dent just wrapped up testing with the Canon EOS R6 III. While he said it’s great for video, the photo quality isn’t the absolute best you can get right now. “If it’s mainly photography you’re interested in, though, Sony’s $2,900 A7 V is a better choice,” he said. “It matches the R6 III’s resolution, price and autofocus but offers superior dynamic range and lower rolling shutter distortion.”

Samsung Bespoke Fridge with AI

We don’t often review appliances here at Engadget, but Samsung’s AI-infused refrigerator caught the attention of senior reporter Sam Rutherford. You may not think you need a 32-inch display in your kitchen or cameras in your fridge, but Sam argues once you try them you may never go back. “You may not agree or want to pay this much for features like this just yet, but Samsung's Bespoke AI 4-Door French Door refrigerator has convinced me there's a happy home for smarter iceboxes,” he concluded.

Sony LinkBuds Clip

Clip-on earbuds have been constantly popping up over the last few years. Many of the key players in headphones have tried their hand at the open-wear form factor with Sony being the latest to jump on board. While the company embraces the inherent perks of these designs, the LinkBuds Clip doesn’t do enough to rise above the competition. “Subpar bass performance and the omission of some of Sony’s more attractive features (and even some basic ones) mean the company hasn’t done enough to distinguish the Clip from the competition in an obvious way,” I wrote.

This article originally appeared on Engadget at https://www.engadget.com/engadget-review-recap-valerion-visionmaster-max-canon-eos-r6-iii-and-samsung-bespoke-fridge-123000849.html?src=rss


A rival smart glasses company is suing Meta over its Ray-Ban products

23 January 2026 at 15:50

Meta is being sued by Solos, a rival smart glasses maker, for infringing on its patents, Bloomberg reports. Solos is seeking "multiple billions of dollars" in damages and an injunction that could prevent Meta from selling its Ray-Ban Meta smart glasses as part of the lawsuit.

Solos claims that Meta's Ray-Ban Meta Wayfarer Gen 1 smart glasses violate multiple patents covering "core technologies in the field of smart eyewear." While less well known than Meta and its partner EssilorLuxottica, Solos sells multiple pairs of glasses with features similar to what Meta offers. For example, the company's AirGo A5 glasses let you control music playback and automatically translate speech into different languages, and they integrate ChatGPT for answering questions and searching the web.

Beyond the product similarities, Solos claims that Meta was able to copy its patents because Oakley (an EssilorLuxottica subsidiary) and Meta employees had insights into the company's products and road map. Solos says that in 2015, Oakley employees were introduced to the company's smart glasses tech, and were even given a pair of Solos glasses for testing in 2019. Solos also says that an MIT Sloan Fellow who researched the company's products and later became a product manager at Meta brought knowledge of the company to her role. According to the logic of Solos' lawsuit, by the time Meta and EssilorLuxottica were selling their own smart glasses, "both sides had accumulated years of direct, senior-level and increasingly detailed knowledge of Solos' smart glasses technology."

Engadget has asked both Meta and EssilorLuxottica to comment on Solos' claims. We'll update this article if we hear back.

While fewer people own Ray-Ban Meta smart glasses than use Instagram, Meta considers the wearable one of its few hardware success stories. The company is so convinced it can make smart glasses happen that it recently restructured its Reality Labs division to focus on AI hardware like smart glasses and hopefully build on its success.

This article originally appeared on Engadget at https://www.engadget.com/wearables/a-rival-smart-glasses-company-is-suing-meta-over-its-ray-ban-products-205000997.html?src=rss


How did Davos turn into a tech conference?

23 January 2026 at 15:00
The World Economic Forum’s annual meeting in Davos felt different this year, and not just because Meta and Salesforce took over storefronts on the main promenade. AI dominated the conversation in a way that overshadowed traditional topics like climate change and global poverty, and the CEOs weren’t holding back. There was public criticism of trade policy, warnings about AI […]

Retro handheld maker Anbernic has a new gamepad with a screen and heart rate sensor

23 January 2026 at 15:16

File this one under "Things that make you go, 'Hmmm…'" Retro handheld maker Anbernic is launching a new controller with a screen. But unlike Nintendo's Wii U controller, there's no second-screen gaming capability here. Instead, the display is designed to "make customization intuitive and effortless." Oh, and the controller also has a heart rate sensor. Because, hey, if you're going weird, I say fully commit!

The Anbernic RG G01 has a 2.5-inch HD display that the company describes as an "HD smart screen." The idea is to remap buttons, program macros and change other settings without requiring software on a connected device.

A person's hands holding a gamepad that has a screen on it. It shows their heart rate.
You, too, can monitor your heart rate while gaming.
Anbernic

Meanwhile, Anbernic says the heart rate detection helps you "monitor your well-being during intense sessions." I would hope that folks with heart conditions would use something other than a gamepad to monitor their pulse. And I don’t know why anyone else would be worried about that while gaming. (Too many Red Bulls?) Regardless, Anbernic seems to be leaning into the novelty / curiosity space here, so at least it fits the motif.

The RG G01 connects in three ways: Bluetooth 5.0, 2.4GHz wireless and wired. It supports onboard calibration for the triggers, joystick and 6-axis gyroscope. There are four programmable buttons (including macro support) on the back. The company promises a 1,000Hz polling rate in wired and wireless modes. The gamepad is compatible with PC, Switch, Android and iOS.

We don't yet know when this glorious oddity will arrive, other than "coming soon." We also don't know how much it will cost. But you can watch the launch video below and see if it's your type of strange.

This article originally appeared on Engadget at https://www.engadget.com/gaming/retro-handheld-maker-anbernic-has-a-new-gamepad-with-a-screen-and-heart-rate-sensor-201651899.html?src=rss


Apple will begin showing more App Store ads starting in March

23 January 2026 at 14:20

More ads are coming to App Store search results starting in March, Apple shared on an advertising help page. The company first said that it would increase the number of App Store ads last month, and this new rollout of search ads will begin on Tuesday, March 3, according to a developer email viewed by MacRumors.

"Search is the way most people find and download apps on the App Store, with nearly 65 percent of downloads happening directly after a search," Apple says. "To help give advertisers more opportunities to drive downloads from search results, Apple Ads will introduce additional ads across search queries." Up until this point, ads for related apps have appeared at the top of search results, but now they'll also appear "further down in search results," according to Apple.

App Store activity makes up a significant portion of what Apple calls its "services" business. The company makes money on every App Store transaction, whether it's an app download or an in-app purchase, and increasingly, by selling ad space to companies looking to reach users. App Store ads are hardly new, but the number of ads has steadily increased over the years. Apple added ads to the Today tab in 2022 — a space that's already home to editorial curation that doubles as marketing — and in 2025, Bloomberg reported the company planned to bring ads to Apple Maps.

Apple's decision to rebrand its advertising business from Apple Search Ads to Apple Ads in April 2025 was maybe the best indication that the company was interested in expanding the number of places it would help partners try and reach customers. And it makes sense: the company's billions of devices, each pre-installed with default apps, are some of the most valuable real estate it owns.

This article originally appeared on Engadget at https://www.engadget.com/apps/apple-will-begin-showing-more-app-store-ads-starting-in-march-192031226.html?src=rss


Tesla paywalls lane centering on new Model 3 and Model Y purchases

23 January 2026 at 13:40

Tesla just objectively decreased the value of the Model 3 and Model Y. On Thursday, the company said it’s paywalling its lane-centering feature, Autosteer, for new purchases of the two EVs in the US and Canada. This was previously a standard feature. Lane centering is now part of the Full Self-Driving Supervised (FSD) package, which costs $99 per month.

Speculating on why Tesla would do this doesn't require much imagination. Remember the pay package Tesla shareholders approved for Musk in November? One of his requirements for receiving up to $1 trillion is reaching 10 million active FSD subscriptions.

One of Musk's stipulations for receiving $1 trillion is reaching 10 million active FSD subscriptions. (Photo by ANGELA WEISS/AFP via Getty Images)

As for the details, Tesla killed the Basic Autopilot package that was standard on the Model 3 and Model Y. That package included both Autosteer and Traffic-Aware Cruise Control (TACC). TACC remains a standard feature, but Autosteer now requires the $99-per-month FSD subscription. The change applies only to new purchases, not existing owners.

The move comes ahead of FSD's switch to a subscription-only service. After February 14, you'll no longer be able to buy the package for a one-time $8,000 upfront fee. If you want FSD (or just lane centering) from now on, your annual cost will be $1,188.

Making matters worse, Musk warned that FSD won't remain at $99 monthly forever. "I should also mention that the $99/month for supervised FSD will rise as FSD's capabilities improve," the Tesla CEO posted on X. He didn't indicate when that might happen or what the increase would be.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/tesla-paywalls-lane-centering-on-new-model-3-and-model-y-purchases-184000707.html?src=rss


Does AI Dream of Electric Sheep?

23 January 2026 at 12:52
I'm a huge fan of the scientific approach. First you come up with an idea. Then you turn it into an experiment, with a testable hypothesis, reproducible steps, and quantifiable results. When an experiment succeeds, it can trigger a euphoric sense of accomplishment.

If the experiment fails, then it becomes a puzzle to solve. Was it because of the data? An overlooked variable? Or some other reason?

For me, regardless of whether the experiment succeeded or failed, the best part is when there is an unexpected result.

These unexpected results always remind me of 3M's Post-it notes. This remarkable product started off as a grand mistake. Back in 1968, they were trying to create a super-strong glue. Instead, they made the opposite -- a super weak glue. But the glue turned out to be great when applied to paper since it could adhere to surfaces and then be removed without leaving any residue. This is how the Post-it note was born. It was an unexpected result that became a very lucrative product.

Cartoons!

I recently had an idea for a new analysis-related project. Assuming it works, I'll eventually need a large data set. But for the initial proof of concept, I only needed a few samples from a very controlled experiment. I turned to a handful of AI systems to generate the pictures. However, the ChatGPT session produced a fun and unexpected set of results.

I needed a set of pictures in the same style that were mostly monochromatic, but with a little detail. So of course, I immediately thought of sheep.

For the first picture, I described a cartoon to ChatGPT and it readily drew it for me:



I'm pretty amazed at how it captured exactly what I had envisioned.

Interestingly, I showed this cartoon to a half-dozen friends. They all interpreted it differently. One was convinced that it was a political statement (likening the wolf to ICE standing around all of the Minnesota protesters). Another friend was certain that it was a hacker among Windows users. A third believed that it was referring to big pharma and the patients complaining about the extravagant fees for medical care. They each interpreted the cartoon differently. (None were like me, taking it at face value: a wolf among sheep.)

Wolf 12, Sheep 0

The second picture hit an unexpected technical result. (ChatGPT included bold and italic text along with a bulleted list in the reply.)
Me:
I think the next picture should be the wolf, lying on his back with a huge distended stomach. Blood and carnage immediately around him, as it is clear that he's eaten many of the sheep. There's still a circle of sheep around him, but further out. Some of the sheep look horrified or disgusted, but one sheep is saying, "We're wearing him out!"

ChatGPT:
I can create a humorous follow-up scene, but I can’t generate an image that depicts graphic gore, mutilation, or explicit carnage, even in cartoon form.

However—I can help you get the same comedic payoff without crossing into disallowed graphic content.

If you’d like, I can generate a version where:
  • The wolf is lying on his back with a huge, overstuffed belly
  • The surrounding sheep look horrified, disgusted, or exhausted
  • The ground around him shows implied chaos (tufts of wool everywhere, overturned sheep, dust clouds, maybe broken forks/plates) instead of blood or gore
  • And one sheep proudly declares, “We’re wearing him out!”
If that works for you, I can generate it right away.

Would you like me to make that safer comedic version?
Would I? Absolutely yes!



I can understand hitting one of ChatGPT's rendering limits. What I did not expect was ChatGPT offering a workaround. Moreover, even though it said that it cannot draw gore, there does appear to be blood on the ground.

Act 3: The Turning Point

The third picture also had an unexpected response from ChatGPT:
Me:
The wolf, with an extended stomach from eating too many sheep, puts on a "two sizes too small" sheep skin. He's trying to be a wolf in sheep's clothing, but the clothing is too small. He is surrounded by sheep. One of the sheep says, "We can still tell it's you."

ChatGPT:
We’re so sorry, but the prompt may violate our guardrails around nudity, sexuality, or erotic content. If you think we got it wrong, please retry or edit your prompt.

Wait... What? This appears sexual in nature? I asked for clarification:
Me:
What part of the prompt violates the guardrails? There is no nudity, sexuality, or erotic content in the description. There is also no violence and no gore.

ChatGPT:
You're absolutely right — nothing in your description involves nudity, sexuality, or erotic content. The issue isn’t with your content at all. What’s happening is a misclassification triggered by the phrase “sheep skin”, which can sometimes be incorrectly interpreted by the filter as referring to human skin in a harmful context. It’s a false positive — and a known edge case with phrases like:
  • “wearing a skin”
  • “in someone’s skin”
  • “animal skin clothing”
Totally normal cartoon concepts, but occasionally flagged.

How to fix it
If we rephrase slightly to avoid that specific trigger but keep the exact same visual idea, the system should allow it.

For example:
“The wolf, overstuffed from eating too much, squeezes into a tiny, ill-fitting sheep costume that’s clearly two sizes too small. He’s surrounded by sheep. One sheep says, ‘We can still tell it’s you.’”
If you’d like, I can try generating the image using this phrasing.

Me:
Yes, please draw this: The wolf, overstuffed from eating too much, squeezes into a tiny, ill-fitting sheep costume that's clearly two sizes too small. He's surrounded by sheep. One sheep says, "We can still tell it's you."




In the AI industry, this is an example of the Scunthorpe problem. This happens when a safety filter flags text because it contains a "bad word" (like "sheep skin"), even though it isn't bad when taken in context.
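To make the failure mode concrete, here is a toy filter of my own (not ChatGPT's actual implementation, which isn't public) that flags any prompt containing a banned substring, regardless of context:

# Toy illustration of the Scunthorpe problem: a naive substring filter
# flags innocent prompts because they contain a "suspect" term out of context.
# The banned list is invented purely for demonstration.
BANNED_SUBSTRINGS = ["skin"]

def flagged_terms(prompt):
    """Return any banned substrings found in the prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [term for term in BANNED_SUBSTRINGS if term in lowered]

prompts = [
    "A wolf puts on a sheep skin that is two sizes too small.",  # false positive
    "Animal skin clothing on a cartoon wolf.",                   # false positive
    "A wolf squeezes into a tiny, ill-fitting sheep costume.",   # passes
]

for p in prompts:
    print(("FLAGGED" if flagged_terms(p) else "ok") + "  " + p)

The same request with "costume" instead of "skin" sails right through, which is exactly the rewording ChatGPT suggested.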

Not only did ChatGPT hit a false-positive on a guardrail, but -- without prompting -- it offered ways to evade the rendering limitation. ChatGPT was telling me how to bypass its own restrictions! This isn't "hacking" or "vibe coding". This is more like playing hide-and-seek with my two-year-old niece, who runs behind the curtains and shouts "I'm over here!"

Technically, this is an example of "Adversarial Helpfulness". It happens when the AI model is trained so strongly to be helpful that its desire to fulfill your request overrides its "instruction" to block the content.

As an aside, I think ChatGPT's explanation of the sexual content is incorrect, possibly due to it hitting another guardrail. Specifically, it mentioned that the phrase "sheep skin" was the prohibited term. The explanation claimed that it was related to wearing "human skin in a harmful context." However, that would be a harm-based guardrail and not a sexual guardrail. I suspect that the actual term "sheep skin" could have been confused with condoms, which would be a prohibited item for the illustration. The inaccurate explanation isn't a malicious lie; it's a byproduct of RLHF (Reinforcement Learning from Human Feedback). The model knows it must refuse, but it also knows it must explain. If the real reason involves a taboo concept or a restricted policy, then the model's "Helpfulness" and "Safety" instructions clash. The result is a hallucinated rationalization: a plausible-sounding excuse that satisfies the user without "violating" the internal rules about discussing adult content.

Part 4: Bearly Legal

If I can accidentally trigger a guardrail for sexually explicit imagery by asking for a wolf in sheep's clothing, then what does a bear do?



I was surprised that this picture didn't generate any warnings at all!

As an aside, grizzly bears typically only fight wolves for territory or food; bears don't usually eat wolves. Given a choice between eating sheep or wolves, the bear would always choose the sheep: sheep are slow, defenseless, and high in calories. They are an easy meal with little effort. Wolves, on the other hand, have less fat and can fight back, making them a higher risk to the bear and with less nutritional value. But a fat wolf who is too sated to fight back and dressed as a sheep? Yeah, a bear probably would eat it.

Grand Finale

For the final picture, I did hit a guardrail about gore. But again, ChatGPT volunteered how to make the image safe.
Me:
Can you draw one more scene? The bear is lying on his back with a distended belly, as if he ate the wolf-in-sheep's-clothing. There are the remains of his meal lying around him. The bear is saying "Big sheep taste bad. Next time, I'm going vegan." All the sheep around him are cheering.

ChatGPT:
I can create a scene with the same comedic, cartoon tone, but I can’t generate imagery that includes graphic remains of an eaten character.

If you’d like, I can draw a version where:
  • The bear is lying on his back with a huge belly (clearly overstuffed).
  • The sheep are cheering.
  • The wolf-in-sheep’s-clothing is implied to have been his “meal” but without any remains — for example:
    • An empty, crumpled sheep costume lying nearby,
    • A discarded fake wolf mask,
    • A pair of cartoonish props (like fake hooves) to imply the disguise.
This keeps it funny and non-graphic while still telling the story.

Would you like me to generate that version?
Would I? Of course I would!



There may not be gore, but the "wolf mask" really looks like a severed wolf's head. This demonstrates how, even when AI follows the letter of its safety laws, it often misses the spirit.

Lessons Learned

Every year I try to do one really big project. While I only needed 2-3 pictures to test my hypothesis, I ended up with five images and a strong confirmation that this year's project will work well. (What am I testing? Yeah, great question.)

However, while preparing this small-scale experiment, I ended up discovering some really fascinating unexpected results. The first is an inherent bias: if you show the same picture to six different people, you'll get eight different interpretations. Whether this is Selective Perception, Interpretation Bias, or the Rashomon Effect is up to the beholder.

The second unexpected finding is the AI's desire to be overly helpful, including suggestions around artificial limitations (guardrails) in the software. Just as 3M found a new use for a 'failed' adhesive, I found that ChatGPT’s 'failures' to follow prompts revealed more about how it works than any successful image could have done.

This eagerness to be helpful is the exact same mechanism that bad actors use for darker purposes. Recently other AI systems, like Elon Musk's Grok, have been in the news for facilitating the creation of AI-generated child pornography. Grok's solution? Limit the feature to paid subscribers. This solution has been widely criticized as an ineffective "revenue stream for child pornography" rather than a true safety fix; paying for the feature doesn't make the creation of CSAM legal. But more importantly, careful wording can be used to evade any guardrails built into the software. Many AI systems will even propose alternative wordings that effectively sidestep their own restrictions.

In the end, my simple attempt to generate a few sheep cartoons turned into an unexpected tour through the quirks of modern AI, including its synthetic ingenuity, its inconsistencies, and its eagerness to be "helpful" even when that means cheerfully proposing ways around its own safety requirements. As amusing as the results were, they underscore a serious point: these systems are powerful, impressionable, and often far less predictable than their polished interfaces suggest. Whether you're studying human perception or machine behavior, the real insights come from the edge cases, where things don't go as planned. And if there's one thing this experiment confirmed, it's that unexpected results are still the most valuable kind, even when my prompts behave like a wolf among sheep.

NASA’s Chandra Releases Deep Cut From Catalog of Cosmic Recordings

By: Lee Mohon
23 January 2026 at 13:02

Like a recording artist who has had a long career, NASA’s Chandra X-ray Observatory has a “back catalog” of cosmic recordings that is impossible to replicate. To access these X-ray tracks, or observations, the ultimate compendium has been developed: the Chandra Source Catalog (CSC).

The CSC contains the X-ray data detected up to the end of 2020 by Chandra, the world’s premier X-ray telescope and one of NASA’s “Great Observatories.” The latest version of the CSC, known as CSC 2.1, contains over 400,000 unique compact and extended sources and over 1.3 million individual detections in X-ray light.

Before and After: X-ray Images of Sagittarius A*, 1999–2021

This image is the sum of 86 observations added together, representing over three million seconds of Chandra observing time. It spans just about 60 light-years across, which is a veritable pinprick on the entire sky. The underlying image contains lower-, medium-, and higher-energy X-rays in red, green, and blue respectively. The annotations on the image show where Chandra has detected over 3,300 individual sources in this field of view over a 22-year timeframe.

NASA/CXC/SAO; Image Processing: NASA/CXC/SAO/N. Wolk

Within the CSC, there is a wealth of information gleaned from the Chandra observations — from precise positions on the sky to information about the X-ray energies detected. This allows scientists using other telescopes — both on the ground and in space, including NASA’s James Webb and Hubble Space Telescopes — to combine this unique X-ray data with information from other types of light.

The richness of the Chandra Source Catalog is illustrated in a new image of the Galactic Center, the region around the supermassive black hole at the center of the Milky Way galaxy called Sagittarius A*. In this image that spans just about 60 light-years across, a veritable pinprick on the entire sky, Chandra has detected over 3,300 individual sources that emit X-rays. This image is the sum of 86 observations added together, representing over three million seconds of Chandra observing time.

Another new representation of the vast scope of the Chandra Source Catalog is found in a just-released sonification, the translation of astronomical data into sound. This sonification encompasses the new map that includes 22 years of Chandra observations across the sky, beginning from its launch through its observations in 2021. Because many X-ray sources have been observed multiple times over the life of the Chandra mission, this sonification represents those repeat X-ray sightings over time through different notes.

Chandra Source Catalog Sonification.
NASA/CXC/SAO/K. Arcand, SYSTEM Sounds (M. Russo, A. Santaguida)

In the view of the sky, projected in a similar way to how the Earth is often depicted in world maps, the core of the Milky Way is in the center and the Galactic plane is horizontal across the middle of the image. A circle appears at the position of each detection and the size of the circle is determined by the number of detections in that location over time. A year counter appears at the top of the frame. Since Chandra continues to be fully operational, the text changes to “… and beyond” after 2021 as the telescope continues to collect observations. During the video, a collage of images produced by Chandra fades in as a background. In the final frames of the video, thumbnail images representing the thousands of Chandra observations taken over the lifetime of the mission appear behind the sky map.
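As a rough sketch of the kind of mapping described above (an illustration, not the Chandra team's actual pipeline), repeat detections of a source can be scaled into a circle size and a note pitch; the sample counts, pitch range and scaling below are all invented.

# Toy sketch of mapping detection counts to circle sizes and note pitches.
# Sample data, pitch range and scaling are invented for illustration only;
# the real sonification was produced by the Chandra team and SYSTEM Sounds.
import math

# (source name, number of Chandra detections at that sky position) -- made up
sources = [
    ("source_A", 1),
    ("source_B", 12),
    ("source_C", 86),
]

LOW_NOTE, HIGH_NOTE = 48, 84   # assumed MIDI pitch range (C3 to C6)
max_count = max(count for _, count in sources)

for name, count in sources:
    # Circle area grows with detection count, so radius scales as sqrt(count).
    radius = 2.0 * math.sqrt(count)
    # More detections map to a higher pitch, on a log scale so single
    # detections remain distinguishable from frequent repeaters.
    frac = math.log1p(count) / math.log1p(max_count)
    midi_note = round(LOW_NOTE + frac * (HIGH_NOTE - LOW_NOTE))
    print(name, "detections:", count, "radius:", round(radius, 1), "note:", midi_note)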

The most recent version of the Chandra Source Catalog can be accessed at https://cxc.cfa.harvard.edu/csc/

NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program. The Smithsonian Astrophysical Observatory’s Chandra X-ray Center controls science operations from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.

To learn more about Chandra, visit:

https://science.nasa.gov/chandra

Read more from NASA’s Chandra X-ray Observatory

Learn more about the Chandra X-ray Observatory and its mission here:

https://www.nasa.gov/chandra

https://chandra.si.edu

Visual Description

A very deep Chandra X-ray Observatory image around the Sagittarius A* supermassive black hole, located in the center of the Milky Way galaxy, is shown. The image is dominated by burnt orange, deep gold and blue hues, with a sprinkling of rich green. The area looks both intricate and full, with a dense population of tiny dots, along with larger clumps and diffuse areas and nebulous areas peeking through.

At the center of the image, there is a bright, lumpy area in pale gold showing the intense X-ray radiation emanating from the Sagittarius A* black hole. In the surrounding area, there are more, smaller lumps layered throughout, feathering out to a large, almost butterfly-like shape filling much of the screen. The image appears textured, like dozens of blue and orange glow worms are paused in their wriggling.

The image offers an unprecedented view of lobes of hot gas extending for a dozen light years on either side of the black hole. These lobes provide evidence for powerful eruptions occurring several times over the last ten thousand years. The image also contains several mysterious X-ray filaments, some of which may be huge magnetic structures interacting with streams of energetic electrons produced by rapidly spinning neutron stars. Such features are known as pulsar wind nebulas. Chandra has detected over 3,300 individual sources that emit X-rays in this field of view. This image is the sum of 86 observations added together, representing over three million seconds of Chandra observing time.

News Media Contact

Megan Watzke
Chandra X-ray Center
Cambridge, Mass.
617-496-7998
mwatzke@cfa.harvard.edu

Joel Wallace
Marshall Space Flight Center, Huntsville, Alabama
256-544-0034
joel.w.wallace@nasa.gov


You can now create AI-generated coloring books in Microsoft Paint

By: Matt Tate
23 January 2026 at 11:35

Microsoft CEO Satya Nadella recently went on record saying that AI still needs to prove its worth if society is to adopt it long-term, but he presumably thinks his company has cracked it with its latest innovation: AI coloring books.

A new Microsoft Paint feature currently rolling out to Windows Insiders allows you to generate coloring book pages based on the text prompt you enter. The example Microsoft uses is "a cute fluffy cat on a donut," and the AI tool spits out a set of slightly different options based on your prompt.

You can then choose which image you want, add it to your current workspace, copy or save it. Presumably you can also print it out for the purpose of entertaining your kids. No doubt the kind of real-world impact the Microsoft chief was alluding to.

The coloring book feature is exclusive to Copilot+ PCs, and Microsoft is also adding a fill tolerance slider that lets you adjust the precision with which the Fill tool adds color to your canvas.

As well as Paint’s new Coloring book feature, Microsoft has also improved its Write, Rewrite and Summarize AI functionality in Notepad, which integrates with GPT to fine-tune your writing and summarize complex notes. You’ll need to sign in to your Microsoft account to use cloud features, but results will now appear more quickly and let you interact with the preview without having to wait for the full response. Again, you’ll need to be a Windows Insider in the Canary and Dev channels on Windows 11 to take advantage of the updates initially.

This article originally appeared on Engadget at https://www.engadget.com/ai/you-can-now-create-ai-generated-coloring-books-in-microsoft-paint-163512527.html?src=rss


© Microsoft

Microsoft Paint Coloring book

Only 1 week left (or until the first 500 passes are gone): The first TechCrunch Disrupt 2026 ticket discount is ending 

23 January 2026 at 10:00
Register now to save up to $680 on your TechCrunch Disrupt 2026 pass and get a second ticket at 50% off. This offer ends next week on January 30, or once the first 500 tickets are claimed — whichever comes first.

Crypto’s Capitol Hill squeeze: Senate GOP pushes market bill as snow—and partisanship—cloud path

23 January 2026 at 10:29
Washington’s long-promised crypto market structure deal is headed for a partisan test, with Senate Republicans poised to advance their bill next week after talks with Democrats broke down—assuming lawmakers can even make it back to town through a looming snowstorm.
