
This is the missing automation designer that Home Assistant needs

24 January 2026 at 13:00

Home Assistant is arguably the best choice for anyone looking to start a smart home, but this is especially true for power users. If you want unhindered freedom to decide how your smart home functions and you're not afraid to get your hands dirty, there's no better choice.


5 Subtle Signs Your Job Is Slowly Being Automated

22 January 2026 at 14:25

Automation doesn't always look dramatic at first. Spot five subtle signs your work is shifting, from oversight to self-serve, and how it adds up over time.

The post 5 Subtle Signs Your Job Is Slowly Being Automated appeared first on TechRepublic.

DLA turns to AI, ML to improve military supply forecasting

The Defense Logistics Agency — an organization responsible for supplying everything from spare parts to food and fuel — is turning to artificial intelligence and machine learning to fix a long-standing problem of predicting what the military needs on its shelves.

While demand planning accuracy currently hovers around 60%, DLA officials aim to push that baseline figure to 85% with the help of AI and ML tools. Improved forecasting will ensure the services have access to the right items exactly when they need them.

"We are about 60% accurate on what the services ask us to buy and what we actually have on the shelf. Part of that, then, is we are either overbuying in some capacity or we are underbuying. That doesn't help the readiness of our systems," Maj. Gen. David Sanford, DLA director of logistics operations, said during the AFCEA NOVA Army IT Day event on Jan. 15.

Rather than relying mostly on historical purchase data, the models ingest a wide range of data that DLA has not previously used in forecasting. That includes supply consumption and maintenance data, operational data gleaned from wargames and exercises, as well as data that impacts storage locations, such as weather.

The models are tied to individual weapon systems, and DLA evaluates and adjusts them on an ongoing basis as they learn.

"We are using AI and ML to ingest data that we have just never looked at before. That's now feeding our planning models. We are building individual models, we are letting them learn, and then those will be our forecasting models as we go forward," Sanford said.
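
Sanford's remarks stop short of implementation detail, and DLA has not published its model architecture. As a rough, hypothetical sketch of the pattern he describes (one model per weapon system, trained on consumption, maintenance, exercise and weather signals rather than purchase history alone), the workflow might look like the following Python, where all data and column names are invented:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the data categories named in the article; the
    # real feeds (consumption, maintenance, exercises, weather) are not public.
    n = 600
    history = pd.DataFrame({
        "weapon_system": rng.choice(["system_a", "system_b"], n),
        "parts_consumed_90d": rng.poisson(40, n),        # supply consumption
        "maintenance_events_90d": rng.poisson(12, n),    # maintenance records
        "exercise_tempo": rng.uniform(0, 1, n),          # wargames and exercises
        "storage_humidity_avg": rng.uniform(30, 90, n),  # weather at storage sites
    })
    history["units_demanded_next_qtr"] = (
        0.8 * history["parts_consumed_90d"]
        + 2.0 * history["maintenance_events_90d"]
        + rng.normal(0, 5, n)
    )

    FEATURES = ["parts_consumed_90d", "maintenance_events_90d",
                "exercise_tempo", "storage_humidity_avg"]

    def train_demand_model(df: pd.DataFrame) -> GradientBoostingRegressor:
        """Fit one demand-forecasting model for a single weapon system."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            df[FEATURES], df["units_demanded_next_qtr"],
            test_size=0.2, random_state=0)
        model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
        print(f"holdout R^2: {model.score(X_te, y_te):.2f}")
        return model

    # One model per weapon system, re-evaluated as new data arrives.
    models = {name: train_demand_model(group)
              for name, group in history.groupby("weapon_system")}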

Some early results already show measurable improvements. Forecasting accuracy for the Army's Bradley Infantry Fighting Vehicle, for example, has improved by about 12% over the last four months, a senior DLA official told Federal News Network.

The agency has made the most progress working with the Army and the Air Force and is addressing "some final data-interoperability issues" with the Navy. Work with the Marine Corps is also underway.

"The Army has done a really nice job of ingesting a lot of their sustainment data into a platform called Army 360. We feed into that platform live data now, and then we are able to receive that live data. We are ingesting data now into our demand planning models not just for the Army. We're on the path for the Navy, and then the Air Force is next. We got a little more work to do with Marines. We're not as accurate as where we need to be, and so this is our path with each service to drive to that accuracy," Sanford said.

Demand forecasting, however, varies widely across the services — the DLA official cautioned against directly comparing forecasting performance.

"When we compare services from a demand planning perspective, it's not an apples-to-apples comparison. Each service has different products, policies and complexities that influence planning variables and outcomes. Broadly speaking, DLA is in partnership with each service to make improvements to readiness and forecasting," the DLA official said.

The agency is also using AI and machine learning to improve how it measures true administrative and production lead times. By analyzing years of historical data, the tools can identify how industry has actually performed — rather than how long deliveries were expected to take — and factor that into DLA stock levels.
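
The article does not describe the tooling, but the underlying analysis is simple to sketch: compare contracted lead times against what deliveries actually took, then plan stock against the observed distribution rather than the advertised figure. A minimal, hypothetical example (all vendors and numbers invented):

    import pandas as pd

    # Hypothetical order history: advertised vs. actual production lead times.
    orders = pd.DataFrame({
        "vendor": ["A", "A", "A", "B", "B", "B"],
        "contracted_days": [60, 60, 60, 90, 90, 90],
        "actual_days": [75, 82, 68, 88, 130, 121],
    })
    orders["slip_days"] = orders["actual_days"] - orders["contracted_days"]

    # Set stock levels against how industry has actually performed, e.g. the
    # 90th percentile of observed lead time per vendor, not the contract figure.
    print(orders.groupby("vendor")["actual_days"].quantile(0.9))
    print(orders.groupby("vendor")["slip_days"].mean())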

"When we put out requests, we need information back to us quickly. And then you've got to hold us accountable to get information back to you quickly, too. And then on the production lead times, they're not as accurate as advertised. There's something that's advertised, but then there's the reality of what we're getting, and it is not meeting the target that was initially contracted for," Sanford said.

The post DLA turns to AI, ML to improve military supply forecasting first appeared on Federal News Network.

© Federal News Network


My favorite thing about Home Assistant isn't the tech

21 January 2026 at 13:00

Home Assistant is free, open-source smart home software that allows you to connect and control a huge number of smart home devices, regardless of their brand or ecosystem. It's a technological marvel, but my favorite thing about Home Assistant has nothing to do with the tech at all.

Why Multi-Chain DeFi Is Hard — And How AI Agents Can Help

By: Duredev
21 January 2026 at 06:10

Last year, our team started exploring a case study on a DeFi project that had strong fundamentals but a complex technical challenge.

Users were distributed across Ethereum, Solana, and Aptos. While adoption looked promising, syncing smart contracts across multiple chains quickly became a bottleneck. 😅

What sounded like a growth strategy turned into an operational headache.

At Duredev, we often see this pattern when teams move from single-chain products to cross-chain DeFi platforms.

🌐 Why Multi-Chain DeFi Looks Simple (But Isn't)

On paper, multi-chain means more users, more liquidity, and better reach. In reality, each blockchain behaves like a different system altogether.

Ethereum, Solana, and Aptos differ in:

  • Smart contract standards
  • Deployment pipelines
  • Upgrade mechanisms
  • Monitoring and debugging tools

This is why many teams struggle with cross-chain app development once they move beyond MVPs.

To solve this, projects increasingly rely on deliberate multi-chain app development strategies instead of chain-specific fixes.

😅 Where Multi-Chain DeFi Breaks for Developers

In the case study we analyzed, developers faced recurring issues:

  • Feature updates deployed on one chain but delayed on others
  • Configuration mismatches betweenΒ networks
  • Manual monitoring across dashboards
  • Rising maintenance costs

The biggest challenge wasn't writing smart contracts — it was maintaining consistency across chains.

This is where many multi-chain products slow down or silently fail.

🤖 The Role of AI Agents in Cross-Chain DeFi

Instead of adding more tools, the solution explored a smarter layer: AI agents.

AI agents can act as intelligent coordinators for cross-chain smart contract management, helping teams automate tasks that usually require constant human oversight.

In multi-chain DeFi systems, AI agents can:

  • Automate deployments across multiple chains
  • Monitor smart contract behavior in real time
  • Detect inconsistencies between chain states
  • Trigger alerts or rollbacks when anomalies appear

This transforms chaotic workflows into predictable, developer-friendly systems.
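
The post stays at the conceptual level, but the monitoring-and-alerting half of that list is straightforward to sketch. Here is a minimal consistency checker in Python, with a stub read_state() standing in for real chain clients such as web3.py, solana-py or the Aptos SDK; the state shape and values are assumptions for illustration, not a description of any particular product:

    # Minimal sketch: detect state drift between chains and raise alerts.
    # read_state() is a stub; a real agent would wrap per-chain RPC clients.
    def read_state(chain: str) -> dict:
        deployed = {
            "ethereum": {"version": "1.4.0", "fee_bps": 30},
            "solana":   {"version": "1.4.0", "fee_bps": 30},
            "aptos":    {"version": "1.3.2", "fee_bps": 30},  # lagging deploy
        }
        return deployed[chain]

    def check_consistency(chains: list) -> list:
        """Compare each chain's contract state against a reference chain."""
        reference = read_state(chains[0])
        alerts = []
        for chain in chains[1:]:
            state = read_state(chain)
            for key, expected in reference.items():
                if state.get(key) != expected:
                    alerts.append(f"{chain}: {key}={state.get(key)!r}, "
                                  f"expected {expected!r} (per {chains[0]})")
        return alerts

    for alert in check_consistency(["ethereum", "solana", "aptos"]):
        print("ALERT:", alert)  # an agent would page a human or roll back

Run continuously, the same loop replaces manual dashboard-watching; automated deployments and rollbacks would hang off the same alerts.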

💡 From Automation to Intelligence

Traditional automation follows rules.
AI agents understand patterns.

That difference matters in multi-chain DeFi development, where systems are dynamic and conditions change frequently.

The goal becomes clear:

One platform. Multiple chains.
A seamless user experience.
A headache-free workflow for developers. 💡

This is the direction modern Web3 infrastructure is moving toward.

πŸ› οΈ Building Cross-Chain Systems ThatΒ Scale

At Duredev, a blockchain development company, we design platforms that are built to scale across chains from day one.

Our approach combines:

  • AI-driven monitoring
  • Secure smart contract architecture
  • Modular cross-chain application development
  • Long-term scalability planning

Through our core blockchain development services, we help teams avoid patchwork solutions and focus on stable foundations.

🔗 Why Cross-Chain Matters for the Future of DeFi

Users don't care which chain they're on.
They care about speed, reliability, and trust.

As DeFi matures, platforms that rely on manual multi-chain coordination will struggle to keep up. Intelligent systems powered by AI agents will define the next generation of decentralized products.

This shift is already visible across:

  • DeFi protocols
  • Infrastructure layers
  • Enterprise blockchain platforms

At Duredev, this evolution is central to how we build AI + Web3 systems.

🧩 Why Teams Choose Duredev

Choosing the right technology partner is critical for multi-chain success.

Teams work with Duredev because we focus on:

  • Practical cross-chain app development
  • Reduced operational complexity
  • Secure, auditable smart contracts
  • AI-enabled monitoring and automation

You can learn more about our approach and team via the Duredev about page, or start a conversation directly through the Duredev contact page.

🚀 Final Takeaway

Multi-chain DeFi is hard — not because the idea is flawed, but because coordination is complex.

AI agents offer a realistic way forward by:

  • Reducing human error
  • Automating cross-chain workflows
  • Making multi-chain systems sustainable

As DeFi moves toward a truly multi-chain future, platforms that combine AI intelligence with decentralized infrastructure will lead the way.

And that's exactly what Duredev is building for the next generation of Web3.

Why Multi-Chain DeFi Is Hard — And How AI Agents Can Help was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Cybersecurity in the Age of AIOps: Proactive Defense Strategies for IT Leaders

20 January 2026 at 12:27

Cybersecurity threats are on the rise in today's rapidly changing digital landscape, and organizations have struggled to safeguard sensitive data and systems from ransomware and breaches. In fact, about 87% of security professionals report that AI-based cyberattacks are plaguing organizations worldwide. Traditional cybersecurity solutions are effective to a degree. However, they tend to be limited…

The post Cybersecurity in the Age of AIOps: Proactive Defense Strategies for IT Leaders appeared first on Security Boulevard.

In an era where everything gets worse, Home Assistant bucks the trend

20 January 2026 at 11:30

It's easy to look back on the past with rose-colored glasses and remember everything as being better than it actually was. It's hard to deny, however, that a lot of apps and services are getting demonstrably worse. The Home Assistant smart home software is thankfully a clear exception to this rule.

From static workflows to intelligent automation: Architecting the self-driving enterprise

20 January 2026 at 05:15

I want you to think about the most fragile employee in your organization. They don't take coffee breaks, they work 24/7 and they cost a fortune to recruit. But if a button on a website moves a few pixels to the right, this employee has a complete mental breakdown and stops working entirely.

I am talking, of course, about your RPA (robotic process automation) bots.

For the last few years, I have observed IT leaders, CIOs and business leaders pour millions into what we call automation. We've hired armies of consultants to draw architecture diagrams and map out every possible scenario. We've built rigid digital train tracks, convinced that if we just laid enough rail, efficiency would follow.

But we didn't build resilience. We built fragility.

As an AI solution architect, I see the cracks in this foundation every day. The strategy for 2026 isn't just about adopting AI; it is about attacking the fragility of traditional automation. The era of deterministic, rule-based systems is ending. We are witnessing the death of determinism and the rise of probabilistic systems — what I call the shift from static workflows to intelligent automation.

The fragility tax of old automation

There is a painful truth we need to acknowledge: Your current bot portfolio is likely a liability.

In my experience and architectural practice, I frequently encounter what I call the fragility tax. This is the hidden cost of maintaining deterministic bots in a dynamic world. The industry rule of thumb — and one that I see validated in budget sheets constantly — is that for every $1 you spend on BPA licenses, you end up spending $3 on maintenance.

Why? Because traditional BPA is blind. It doesn't understand the screen it is looking at; it only understands coordinates (x, y). It doesn't understand the email it is reading; it only scrapes for keywords. When the user interface updates or the vendor changes an invoice format, the bot crashes.

I recall a disaster with an enterprise client who had an automated customer engagement process. It was a flagship project. It worked perfectly until the third-party system provider updated their solution. The submit button changed from green to blue. The bot, which was hardcoded to look for green pixels at specific coordinates, failed silently.

But fragility isn't just about pixel colors. It is about the fragility of trust in external platforms.

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn't immune. In September 2024, OpenAI's official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token.

Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of its neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don't control, you inherit their vulnerabilities. If you had a standard bot programmed to retweet @OpenAINewsroom, you would have automatically amplified a scam to your entire customer base.

The old way of scripting cannot handle this volatility. We spent years trying to predict the future and hard-code it into scripts. But the world is too chaotic for scripts. We need architecture that can heal itself.

The architectural pivot: From rules to goals

To capture the value of intelligent automation (IA), you must frame it as an architectural paradigm shift, not just a software upgrade. We are moving from task automation (mimicking hands) to decision automation (mimicking brains).

When I architect these systems, I think in terms of goals, not just rules.

In the old paradigm, we gave the computer a script: Click button A, then type text B, then wait 5 seconds. In the new paradigm, we use cognitive orchestrators. We give the AI a goal: Submit this form.

The difference is profound. If the submit button turns blue, a goal-based system using a large language model (LLM) and vision capabilities sees the button. It understands that despite the color change, it is still the submission mechanism. It adjusts its own path to achieving the goal.

Think of it like the difference between a train and an off-road vehicle. A train is fast and efficient, but it requires expensive infrastructure (tracks) and cannot steer around a rock on the line. Intelligent automation is the off-road vehicle. It uses sensors to perceive the environment. If it sees a rock, it doesn't derail; it decides to go around it.

This isn't magic; it's a specific architectural pattern. The tech stack required to support this is fundamentally different from what most CIOs currently have installed. It is no longer just a workflow engine. The new stack requires three distinct components working in concert:

  1. The workflow engine: The hands that execute actions.
  2. The reasoning layer (LLM): The brain that figures out the steps dynamically and handles the logic.
  3. The vector database: The memory that stores context, past experiences and embedded data to reduce hallucinations.

By combining these, we move from brittle scripts to resilient agents.
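
As a sketch of how those three components fit together (llm_call() and vector_search() are hypothetical stand-ins for whatever LLM endpoint and vector store you actually deploy, and the stub responses are invented), the orchestration loop looks roughly like this in Python:

    # Sketch of the three-component pattern, not a production implementation.
    def llm_call(prompt: str) -> str:
        """Reasoning layer (the brain): plans steps dynamically. Stubbed here;
        wire up a real LLM provider in practice."""
        return ("open the vendor portal\n"
                "locate the submit control by description, not coordinates\n"
                "submit the form")

    def vector_search(query: str, k: int = 3) -> list:
        """Vector database (the memory): recalls past context to ground the
        plan and reduce hallucinations. Stubbed here."""
        return ["2025-11: portal redesign handled by element description"]

    def execute(action: str) -> None:
        """Workflow engine (the hands): clicks, types, calls APIs."""
        print("executing:", action)

    def run_goal(goal: str) -> None:
        context = vector_search(goal)
        plan = llm_call(f"Goal: {goal}\nRelevant history: {context}\n"
                        "Return one action per line.")
        for action in plan.splitlines():
            execute(action)

    run_goal("Submit this month's vendor invoice")

Note how the plan targets the submit control by description, not coordinates: that is precisely what lets a goal-based agent survive the green-to-blue button change that killed the deterministic bot.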

Breaking the unstructured data barrier

The most significant limitation of the old way was its inability to handle unstructured data. We know that roughly 80% of enterprise data is unstructured, locked away in PDFs, email threads, Slack and MS Teams chats, and call logs. Traditional business process automation cannot touch this. It requires structured inputs: rows and columns.

This is where the multi-modal understanding of intelligent automation changes the architecture.

I urge you to adopt a new mantra: Data entry is dead. Data understanding is the new standard.

I am currently designing architectures where the system doesn't just move a PDF from folder A to folder B. It reads the PDF. It understands the sentiment of the email attached to it. It extracts the intent from the call log referenced in the footer.

Consider a complex claims-processing scenario. In the past, a human had to manually review a handwritten accident report, cross-reference it with a policy PDF and check a photo of the damage. A deterministic bot is useless here because the inputs are never the same twice.

Intelligent automation changes the equation. It can ingest the handwritten note (using OCR), analyze the photo (using computer vision) and read the policy (using an LLM). It synthesizes these disparate, messy inputs into a structured claim object. It turns chaos into order.
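
A sketch of that synthesis step, with ocr(), describe_image() and check_coverage() as hypothetical stand-ins for an OCR engine, a vision model and an LLM reading the policy (the stub outputs are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Claim:
        """The structured object synthesized from messy, multi-modal inputs."""
        policy_id: str
        incident_summary: str
        damage_description: str
        likely_covered: bool

    # Stubs standing in for OCR, computer vision and an LLM.
    def ocr(report_img: bytes) -> str:
        return "rear-ended at low speed in a parking lot"

    def describe_image(photo: bytes) -> str:
        return "dented rear bumper, broken taillight"

    def check_coverage(policy_text: str, incident: str) -> bool:
        return "collision" in policy_text.lower()

    def build_claim(report_img, photo, policy_text, policy_id) -> Claim:
        incident = ocr(report_img)                       # handwritten note -> text
        damage = describe_image(photo)                   # photo -> description
        covered = check_coverage(policy_text, incident)  # policy -> decision
        return Claim(policy_id, incident, damage, covered)

    print(build_claim(b"", b"", "Collision coverage applies...", "P-1042"))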

This is the difference between digitization (making it electronic) and digitalization (making it intelligent).

Human-in-the-loop as a governance pattern

Whenever we present this self-driving enterprise concept to clients, the immediate reaction is "You want an LLM to talk to our customers?" This is a valid fear. But the answer isn't to ban AI; it is to architect confidence-based routing.

We don't hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting.

This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don't always stay trusted.

Revisiting the security incident I mentioned earlier: If you had a fully autonomous agent loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute.

A probabilistic, governed agent says: Signal comes from a trusted source, but the content deviates 99% from their semantic norm (crypto scam vs. tech news). The confidence score is low. Alert human.

That is the architectural shift we need.

  • Scenario A: The AI is 99% confident it understands the invoice, the vendor matches the master record and the semantics align with past behavior. The system auto-executes.
  • Scenario B: The AI is only 70% confident because the address is slightly different, the image is blurry or the request seems out of character (like the hacked tweet example). The system routes this specific case to a human for approval.

This turns automation into a partnership. The AI handles the mundane, high-volume work and your humans handle the edge cases. It solves the black box problem that keeps compliance officers awake at night.
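
A minimal sketch of that routing logic follows. It assumes the system can produce a calibrated confidence score and a semantic-similarity measure against the source's history; both are real engineering problems in their own right, and the thresholds here are illustrative, not recommendations:

    AUTO_THRESHOLD = 0.95     # illustrative values only
    SIMILARITY_FLOOR = 0.80

    def execute(action: dict) -> None:
        print("auto-executing:", action["kind"])

    def escalate(action: dict, reason: str) -> None:
        print("HUMAN REVIEW:", action["kind"], "--", reason)

    def route(action: dict, confidence: float, similarity: float) -> str:
        """Auto-execute only when the model is confident AND the request
        matches the source's semantic norm; otherwise route to a human."""
        if confidence >= AUTO_THRESHOLD and similarity >= SIMILARITY_FLOOR:
            execute(action)
            return "auto"
        escalate(action, f"confidence={confidence:.2f}, "
                         f"similarity={similarity:.2f}")
        return "human"

    # Scenario A: clean invoice, vendor matches the master record.
    route({"kind": "pay_invoice"}, confidence=0.99, similarity=0.97)
    # Scenario B: trusted account, but content deviates sharply from its
    # norm (the hijacked-newsroom case).
    route({"kind": "amplify_post"}, confidence=0.70, similarity=0.01)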

Kill the zombie bots

If you want to prepare your organization for this shift, you don't need to buy more software tomorrow. You need to start with an audit.

Look at your current automation portfolio. Identify the zombie bots: scripts that are technically alive but require constant intervention to keep moving. These bots fail whenever vendors update their software, and they cost you more in fragility tax than they save in labor.

Stop trying to patch them. These are the prime candidates for intelligent automation.

The future belongs to the probabilistic. It belongs to architectures that can reason through ambiguity, handle unstructured chaos and self-correct when the world changes. As leaders, we need to stop building trains and start building off-road vehicles.

The technology is ready. The question is, are you ready to let go of the steering wheel?

Disclaimer: This and any related publications are provided in the author's personal capacity and do not represent the views, positions or opinions of the author's employer or any affiliated organization.

This article is published as part of the Foundry Expert Contributor Network.

Your smart home needs these outdoor sensors (here’s why)

19 January 2026 at 07:00

Most smart home sensors are designed for use inside the house, which is where their presence is most useful. But there's no reason the data they gather and triggers they enable can't have utility outside, too. Here are a few all-weather examples of outdoor sensors that you might have overlooked.

DIY, Full-Stack Farm Automation

17 January 2026 at 01:00

Recently, [Vinnie] aka [vinthewrench] moved from Oregon to Arkansas to start a farmstead. This style of farming focuses not just on a profitable farm where produce is sold at market, but also on a homestead where much of one's own food is grown. Like any farm, though, it's extremely hard work that takes a tremendous amount of time. Automation and other technology can make a huge impact in these situations, and [Vinnie] is rolling out his own software stack to help with this on his farm.

He calls his project the Pi Internet of Things, or PioT, and as its name suggests, it is based around the Raspberry Pi. Since this will all be outdoors and exposed to the extremes of Arkansas weather, everything built under the auspices of this project prioritizes ruggedness, stability, and long-term support, all while avoiding any cloud service. The system also focuses on being able to ride through power outages. The server side, called piotserver, uses a REST API to give the user access to the automation systems through a web interface.
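
[Vinnie]'s own write-ups are the authority on the actual endpoints. Purely to illustrate the pattern, driving a REST-based controller like piotserver might look something like this; the URL, paths, and JSON shapes here are invented for the example, not taken from his API:

    import requests

    # Hypothetical endpoints -- consult the piotserver documentation for the
    # real API; this only illustrates the REST-controller pattern.
    BASE = "http://piot.local:8080"

    reading = requests.get(f"{BASE}/api/sensors/soil_moisture").json()
    print(reading)  # e.g. {"value": 31.5, "units": "%"}

    requests.put(f"{BASE}/api/valves/north_field",
                 json={"state": "on", "duration_min": 20})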

[Vinnie] also goes into detail about why existing systems like Home Assistant and Open Sprinkler wouldn't work in his situation, and why a ground-up solution like this is more appropriate for his farm. This post is largely an overview of his system, but some of his other posts go into more detail about things like integrating temperature sensors, rainfall monitoring, controlling irrigation systems, and plenty of other farm automation tasks that are useful for any farmer or gardener.

We've also seen some other projects of his here, like this one which converts a common AC sprinkler system to an easier-to-use DC system, and a DIY weather station that operates in the 915 MHz band. He's been a great resource for anyone looking to have technology help them out with their farm or garden, but if you're just getting started on your green thumb, be sure to take a look at this starter guide as well.
