
Cardano Founder Reveals Midnight Launch Plan, Teases New Goodies Every 3 Months

By: Lele Jima
8 December 2025 at 05:16


Cardano founder Charles Hoskinson has shared new insights into the development and rollout of the ecosystem’s privacy-focused project, Midnight. Following the launch of Midnight’s native token, NIGHT, on Cardano, Hoskinson appeared on the Gokhshtein News Network to outline the project’s technical roadmap, phased deployment strategy, and the ecosystem growth expected to follow.


Ethereum’s First ZK-Rollup ZKsync Lite to Shut Down in 2026

8 December 2025 at 04:56

ZKsync has announced plans to deprecate ZKsync Lite, Ethereum’s first zero-knowledge rollup, in 2026 as the protocol shifts its focus entirely toward the ZKsync network and ZK Stack-powered chains.

The original Layer 2 solution, which launched in December 2020 as a groundbreaking proof-of-concept, will undergo an orderly sunset after serving its purpose of validating critical ideas for production ZK systems.

No immediate action is required from users, as ZKsync Lite continues to operate normally, with funds remaining secure and withdrawals to Ethereum’s Layer 1 functioning throughout the deprecation process.

The ZKsync Association will share detailed migration guidance, specific dates, and a comprehensive deprecation plan in the coming year.

📌 In 2026, we plan to deprecate ZKsync Lite (aka ZKsync 1.0), the original ZK-rollup we launched on Ethereum.

This is a planned, orderly sunset for a system that has served its purpose and does not affect any other ZKsync systems.

— ZKsync (@zksync) December 7, 2025

From Pioneer to Legacy System

ZKsync Lite emerged as the first zero-knowledge rollup on Ethereum, pioneering technology that would later evolve into ZKsync Era and the Elastic Network.

The protocol addressed Ethereum’s fundamental challenges of high transaction fees and slow transaction processing by executing transactions off-chain and submitting cryptographic proofs of validity back to Layer 1.
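For readers unfamiliar with the pattern, the sketch below is a toy illustration of that flow: transactions are applied off-chain and only a compact state commitment plus a validity proof would be posted to Layer 1. The data structures and the `prove_transition` stub are illustrative placeholders, not ZKsync’s actual implementation.

```python
# Conceptual sketch of the rollup pattern described above: transactions are
# executed off-chain, batched, and only a succinct state commitment plus a
# validity proof is posted to Layer 1. The proof function is a stand-in stub,
# not a real ZK prover.
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int

def apply_batch(state: dict, batch: list[Tx]) -> dict:
    """Execute a batch of transfers off-chain against a simple balance map."""
    new_state = dict(state)
    for tx in batch:
        if new_state.get(tx.sender, 0) >= tx.amount:
            new_state[tx.sender] -= tx.amount
            new_state[tx.recipient] = new_state.get(tx.recipient, 0) + tx.amount
    return new_state

def commitment(state: dict) -> str:
    """Hash the full state into the short commitment that goes on-chain."""
    return sha256(repr(sorted(state.items())).encode()).hexdigest()

def prove_transition(old_root: str, new_root: str, batch: list[Tx]) -> bytes:
    """Placeholder for the zero-knowledge validity proof a real rollup generates."""
    return sha256((old_root + new_root + str(len(batch))).encode()).digest()

# Off-chain sequencing; only the new root and proof would be verified on L1.
state = {"alice": 100, "bob": 20}
batch = [Tx("alice", "bob", 30), Tx("bob", "alice", 5)]
old_root = commitment(state)
state = apply_batch(state, batch)
proof = prove_transition(old_root, commitment(state), batch)  # submitted to L1
```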

The project gained significant momentum in November 2025 when Ethereum co-founder Vitalik Buterin publicly endorsed ZKsync following its Atlas upgrade, describing the work as “underrated and valuable.”

ZKsync has been doing a lot of underrated and valuable work in the ethereum ecosystem. Excited to see this come from them! https://t.co/coZKCfsb8h

— vitalik.eth (@VitalikButerin) November 1, 2025

His backing catalyzed institutional adoption, triggering a 50% surge in ZK token prices while positioning ZKsync as central to Ethereum’s “Lean Ethereum” scaling strategy.

ZKsync evolved from its initial Lite version to ZKsync Era in March 2023, becoming the first publicly available zkEVM.

The June 2024 ZKsync 3.0 upgrade transformed the ecosystem from a single Layer 2 into the Elastic Network, an interconnected system of autonomous ZK chains sharing liquidity and security through cryptographic proofs rather than traditional bridges.

Institutional Traction Validates ZK Technology

While ZKsync Lite phases out, the broader ZKsync ecosystem has attracted major institutional interest.

Deutsche Bank is developing an Ethereum Layer 2 blockchain using ZKsync technology as part of Project Dama 2, which involves 24 financial institutions testing the blockchain for asset tokenization under Singapore’s regulatory sandbox.

UBS also conducted a proof-of-concept for its Key4 Gold product using ZKsync Validium, testing the platform’s ability to support tokenized gold investments with privacy and scalability.

Tradable has also tokenized $2.1 billion in institutional-grade private credit on ZKsync, accounting for nearly 90% of the network’s market share for real-world asset protocols.

Chart: Tradable metrics on ZKsync (Source: RWA[dot]xyz)

The Ethereum Foundation launched “Ethereum for Institutions” in October 2024, providing enterprises with structured pathways to blockchain adoption using zero-knowledge proofs, fully homomorphic encryption, and trusted execution environments.

Projects like Chainlink, RAILGUN, and Aztec Network pioneer privacy-preserving smart contracts that secure counterparty information while maintaining transparency.

Security Incidents Test Platform Resilience

The deprecation announcement follows two significant security breaches in 2025 involving ZKsync’s protocols.

In April, an attacker exploited admin access to the airdrop distribution contract, minting 111 million unclaimed ZK tokens worth approximately $5 million during the protocol’s token distribution to ecosystem participants.

The hacker agreed to return 90% of the stolen assets in exchange for a 10% bounty, transferring nearly $5.7 million back to the ZKsync Security Council within the designated 72-hour safe harbor window.

The recovered amount exceeded the original stolen value due to token price increases, with ZK gaining 16.6% and ETH rising 8.8% following the incident.

🤝 The @TheZKNation has recovered $5 million worth of stolen tokens following a security breach on April 15. #ZKsync #Hack https://t.co/sb7iC0RqoR

— Cryptonews.com (@cryptonews) April 24, 2025

Just one month later, hackers compromised the official X accounts of ZKsync and Matter Labs, spreading false regulatory warnings claiming SEC investigations and Treasury Department sanctions.

The attackers also published phishing links promoting a fake ZK token airdrop designed to drain users’ wallets, causing the token price to drop approximately 5% despite a prior 38.5% weekly rally.

The breach occurred through compromised delegated accounts with limited posting privileges, which have since been disconnected.

These back-to-back incidents contributed to broader industry concerns, as crypto hacks resulted in $1.6 billion in losses during the first quarter of 2025 alone. The quarter was among the worst for crypto security breaches in history.

The post Ethereum’s First ZK-Rollup ZKsync Lite to Shut Down in 2026 appeared first on Cryptonews.

CIOs shift from ‘cloud-first’ to ‘cloud-smart’

8 December 2025 at 05:01

Common wisdom has long held that a cloud-first approach will gain CIOs benefits such as agility, scalability, and cost-efficiency for their applications and workloads. While cloud remains most IT leaders’ preferred infrastructure platform, many are rethinking their cloud strategies, pivoting from cloud-first to “cloud-smart”: choosing the best approach for each workload rather than moving everything off-premises and prioritizing cloud over every other consideration for new initiatives.

Cloud cost optimization is one factor motivating this rethink, with organizations struggling to control escalating cloud expenses amid rapid growth. An estimated 21% of enterprise cloud infrastructure spend, equivalent to $44.5 billion in 2025, is wasted on underutilized resources — with 31% of CIOs wasting half of their cloud spend, according to a recent survey from VMware.
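As a quick back-of-envelope check of those figures, assuming the 21% share and the $44.5 billion refer to the same spend base:

```python
# Back-of-envelope check of the survey figures above (assumes the 21% and the
# $44.5B both refer to total enterprise cloud infrastructure spend in 2025).
wasted_share = 0.21
wasted_dollars_b = 44.5
implied_total_b = wasted_dollars_b / wasted_share
print(f"Implied 2025 enterprise cloud infrastructure spend: ~${implied_total_b:.0f}B")
# -> roughly $212B
```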

The full rush to the cloud is over, says Ryan McElroy, vice president of technology at tech consultancy Hylaine. Cloud-smart organizations have a well-defined and proven process for determining which workloads are best suited for the cloud.

For example, “something that must be delivered very quickly and support massive scale in the future should be built in the cloud,” McElroy says. “Solutions with legacy technology that must be hosted on virtual machines or have very predictable workloads that will last for years should be deployed to well-managed data centers.”

The cloud-smart trend is being influenced by better on-prem technology, longer hardware cycles, ultra-high margins with hyperscale cloud providers, and the typical hype cycles of the industry, according to McElroy. All favor hybrid infrastructure approaches.

However, “AI has added another major wrinkle with siloed data and compute,” he adds. “Many organizations aren’t interested in or able to build high-performance GPU datacenters, and need to use the cloud. But if they’ve been conservative or cost-averse, their data may be in the on-prem component of their hybrid infrastructure.”

These variables have led to complexity or unanticipated costs, either through migration or data egress charges, McElroy says.

He estimates that “only 10% of the industry has openly admitted they’re moving” toward being cloud-smart. While that number may seem low, McElroy says it is significant.

“There are a lot of prerequisites to moderate on your cloud stance,” he explains. “First, you generally have to be a new CIO or CTO. Anyone who moved to the cloud is going to have a lot of trouble backtracking.”

Further, organizations need to have retained and upskilled the talent who manage the data center they own or their space at a co-location facility. They must also have infrastructure needs that outweigh the benefits the cloud provides in terms of raw agility and fractional compute, McElroy says.

Selecting and reassessing the right hyperscaler

Procter & Gamble embraced a cloud-first strategy when it began migrating workloads about eight years ago, says Paola Lucetti, CTO and senior vice president. At that time, the mandate was that all new applications would be deployed in the public cloud, and existing workloads would migrate from traditional hosting environments to hyperscalers, Lucetti says.

“This approach allowed us to modernize quickly, reduce dependency on legacy infrastructure, and tap into the scalability and resilience that cloud platforms offer,” she says.

Today, nearly all P&G’s workloads run on cloud. “We choose to keep selected workloads outside of the public cloud because of latency or performance needs that we regularly reassess,” Lucetti says. “This foundation gave us speed and flexibility during a critical phase of digital transformation.”

As the company’s cloud ecosystem has matured, so have its business priorities. “Cost optimization, sustainability, and agility became front and center,” she says. “Cloud-smart for P&G means selecting and regularly reassessing the right hyperscaler for the right workload, embedding FinOps practices for transparency and governance, and leveraging hybrid architectures to support specific use cases.”

This approach empowers developers through automation, AI, and agentic AI to drive value faster, Lucetti says. “This approach isn’t just technical — it’s cultural. It reflects a mindset of strategic flexibility, where technology decisions align with business outcomes.”

AI is reshaping cloud decisions

AI represents a huge potential spend requirement and raises the stakes for infrastructure strategy, says McElroy.

“Renting servers packed with expensive Nvidia GPUs all day every day for three years will be financially ruinous compared to buying them outright,” he says, “but the flexibility to use next year’s models seamlessly may represent a strategic advantage.”
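To make that trade-off concrete, the rough three-year comparison below uses entirely hypothetical prices; the point is the shape of the math, not the specific figures.

```python
# Illustrative rent-vs-buy comparison for a GPU server over three years.
# All prices are hypothetical placeholders, not vendor quotes.
hourly_rental = 30.0          # assumed $/hour for a multi-GPU cloud instance
hours = 24 * 365 * 3          # "all day every day for three years"
purchase_price = 300_000.0    # assumed server purchase cost
ops_per_year = 30_000.0       # assumed power, cooling, and hosting per year

rent_total = hourly_rental * hours
buy_total = purchase_price + ops_per_year * 3
print(f"Rent: ${rent_total:,.0f}  Buy: ${buy_total:,.0f}")
# With these assumptions, renting runs ~$788k vs ~$390k to own, which is the
# trade-off McElroy describes; the premium buys the option to swap hardware.
```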

Cisco, for one, has become far more deliberate about what truly belongs in the public cloud, says Nik Kale, principal engineer and product architect. Cost is one factor, but the main driver is AI data governance.

“Being cloud-smart isn’t about repatriation — it’s about aligning AI’s data gravity with the right control plane,” he says.

IT has parsed out what should be in a private cloud and what goes into a public cloud. “Training and fine-tuning large models requires strong control over customer and telemetry data,” Kale explains. “So we increasingly favor hybrid architectures where inference and data processing happen within secure, private environments, while orchestration and non-sensitive services stay in the public cloud.”

Cisco’s cloud-smart strategy starts with data classification and workload profiling. Anything involving customer-identifiable information, diagnostic traces, or model feedback loops is processed within regionally compliant private clouds, he says.

Then there are “stateless services, content delivery, and telemetry aggregation that benefit from public-cloud elasticity for scale and efficiency,” Kale says.
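A minimal sketch of that placement rule might look like the following; the tags and environment names are illustrative, not Cisco’s actual classification scheme.

```python
# Minimal sketch of the placement rule described above: workloads touching
# sensitive data go to a regionally compliant private cloud, while stateless
# and non-sensitive services go to the public cloud. Tags are illustrative.
SENSITIVE_TAGS = {"customer_pii", "diagnostic_trace", "model_feedback"}

def place_workload(name: str, data_tags: set[str], region: str) -> str:
    """Return a target environment for a workload based on its data classification."""
    if data_tags & SENSITIVE_TAGS:
        return f"private-cloud/{region}"   # keep data within the compliant region
    return "public-cloud"                  # elasticity for stateless services

print(place_workload("model-finetune", {"customer_pii"}, "eu-central"))
print(place_workload("telemetry-aggregation", {"metrics"}, "eu-central"))
```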

Cisco’s approach also involves “packaging previously cloud-resident capabilities for secure deployment within customer environments — offering the same AI-driven insights and automation locally, without exposing data to shared infrastructure,” he says. “This gives customers the flexibility to adopt AI capabilities without compromising on data residency, privacy, or cost.”

These practices have improved Cisco’s compliance posture, reduced inference latency, and yielded measurable double-digit reductions in cloud spend, Kale says.

One area where AI has fundamentally changed their approach to cloud is in large-scale threat detection. “Early versions of our models ran entirely in the public cloud, but once we began fine-tuning on customer-specific telemetry, the sensitivity and volume of that data made cloud egress both costly and difficult to govern,” he says. “Moving the training and feedback loops into regional private clouds gave us full auditability and significantly reduced transfer costs, while keeping inference hybrid so customers in regulated regions received sub-second response times.”

IT saw a similar issue with its generative AI support assistant. “Initially, case transcripts and diagnostic logs were processed in public cloud LLMs,” Kale says. “As customers in finance and healthcare raised legitimate concerns about data leaving their environments, we re-architected the capability to run directly within their [virtual private clouds] or on-prem clusters.”

The orchestration layer remains in the public cloud, but the sensitive data never leaves their control plane, Kale adds.

AI has also reshaped how telemetry analytics is handled across Cisco’s CX portfolio. IT collects petabyte-scale operational data from more than 140,000 customer environments.

“When we transitioned to real-time predictive AI, the cost and latency of shipping raw time-series data to the cloud became a bottleneck,” Kale says. “By shifting feature extraction and anomaly detection to the customer’s local collector and sending only high-level risk signals to the cloud, we reduced egress dramatically while improving model fidelity.”
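The pattern is roughly the one sketched below: reduce a raw metric window to a compact risk summary at the edge and ship only that to the cloud. The thresholds and fields are illustrative, not Cisco’s collector logic.

```python
# Sketch of the edge pattern Kale describes: run feature extraction and a
# simple anomaly check on the local collector and upload only a small risk
# signal, never the raw time series. Thresholds and fields are illustrative.
from statistics import mean, pstdev

def risk_signal(device_id: str, samples: list[float], z_thresh: float = 3.0) -> dict:
    """Reduce a raw metric window to a small risk summary for upload."""
    mu, sigma = mean(samples), pstdev(samples) or 1.0
    anomalies = sum(1 for x in samples if abs(x - mu) / sigma > z_thresh)
    return {
        "device": device_id,
        "window_size": len(samples),
        "anomaly_count": anomalies,
        "risk": "high" if anomalies else "low",
    }

# Only this dict (a few hundred bytes) leaves the customer environment.
print(risk_signal("switch-042", [0.2, 0.3, 0.25, 0.28] * 24 + [9.0]))
```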

In all instances, “AI made the architectural trade-offs clear: Specific workloads benefit from public-cloud elasticity, but the most sensitive, data-intensive, and latency-critical AI functions need to run closer to the data,” Kale says. “For us, cloud-smart has become less about repatriation and more about aligning data gravity, privacy boundaries, and inference economics with the right control plane.”

A less expensive execution path

Like P&G, World Insurance Associates believes cloud-smart translates to implementing a FinOps framework. CIO Michael Corrigan says that means having an optimized, consistent build for virtual machines based on the business use case, and understanding how much storage and compute is required.

Those are the main drivers to determine costs, “so we have a consistent set of standards of what will size our different environments based off of the use case,” Corrigan says. This gives World Insurance what Corrigan says is an automated architecture.

“Then we optimize the build to make sure we have things turned on like elasticity. So when services aren’t used typically overnight, they shut down and they reduce the amount of storage to turn off the amount of compute” so the company isn’t paying for it, he says. “It starts with the foundation of optimization or standards.”
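A simple version of that schedule-based policy could look like the sketch below; the run window is illustrative, and the actual start/stop calls would go through the cloud provider’s own API.

```python
# Sketch of the overnight shutdown policy Corrigan describes: decide from a
# run-window rule whether a non-production VM should be deallocated. The
# actual provider call is left as a placeholder to wire into your cloud SDK.
from datetime import datetime, time

def should_be_running(now: datetime, business_hours=(time(7, 0), time(19, 0)),
                      weekdays_only: bool = True) -> bool:
    """True if the VM falls inside its allowed run window."""
    if weekdays_only and now.weekday() >= 5:
        return False
    start, end = business_hours
    return start <= now.time() <= end

def enforce(vm_name: str, is_running: bool, now: datetime) -> str:
    if is_running and not should_be_running(now):
        # placeholder: call your provider's stop/deallocate API here
        return f"stop {vm_name}"
    if not is_running and should_be_running(now):
        return f"start {vm_name}"
    return "no-op"

print(enforce("dev-sql-01", is_running=True, now=datetime(2025, 12, 8, 23, 30)))
```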

World Insurance works with its cloud providers on different levels of commitment. With Microsoft, for example, the insurance company has the option to use virtual machines, or what Corrigan says is a “reserved instance.” By telling the provider how many machines they plan to consume or how much they intend to spend, he can try to negotiate discounts.

“That’s where the FinOps framework has to really be in place … because obviously, you don’t want to commit to a level of spend that you wouldn’t consume otherwise,” Corrigan says. “It’s a good way for the consumer or us as the organization utilizing those cloud services, to get really significant discounts upfront.”

World Insurance is using AI for automation and alerts. AI tools are typically charged on a compute processing model, “and what you can do is design your query so that if it is something that’s less complicated, it’s going to hit a less expensive execution path” and go to a small language model (SLM), which doesn’t use as much processing power, Corrigan says.

The user gets a satisfactory result, and “there is less of a cost because you’re not consuming as much,” he says.

That’s the tactic the company is taking — routing AI queries to the less expensive model. If there is a more complicated workflow or process, it will be routed to the SLM first “and see if it checks the box,” Corrigan says. If its needs are more complex, it is moved to the next stage, which is more expensive, and generally involves an LLM that requires going through more data to give the end user what they’re looking for.

“So we try to manage the costs that way as well so we’re only consuming what’s really needed to be consumed based on the complexity of the process,” he says.
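In code, that cascade might look something like the sketch below; the model names, the quality gate, and the routing heuristic are placeholders rather than World Insurance’s actual implementation.

```python
# Sketch of the cost-aware routing Corrigan describes: send each query to the
# cheaper small model first and escalate to a larger model only when the
# first answer doesn't check the box. Everything here is a placeholder, not a
# specific vendor's API.
def call_model(model: str, query: str) -> str:
    """Placeholder for an actual model call."""
    return f"[{model}] answer to: {query}"

def good_enough(answer: str, query: str) -> bool:
    """Placeholder quality gate, e.g. a rubric check or confidence score."""
    return len(query) < 120 and "multi-step" not in query.lower()

def route(query: str) -> tuple[str, str]:
    answer = call_model("small-model", query)          # cheap execution path
    if good_enough(answer, query):
        return "small-model", answer
    return "large-model", call_model("large-model", query)  # escalate

print(route("What is my remaining PTO balance?"))
print(route("Draft a multi-step remediation plan across three claims systems"))
```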

Cloud is ‘a living framework’

Hylaine’s McElroy says CIOs and CTOs need to be more open to discussing the benefits of hybrid infrastructure setups, and how the state of the art has changed in the past few years.

“Many organizations are wrestling with cloud costs they know instinctively are too high, but there are few incentives to take on the risky work of repatriation when a CFO doesn’t know what savings they’re missing out on,” he says.

Lucetti characterizes P&G’s cloud strategy as “a living framework,” and says that over the next few years, the company will continue to leverage the right cloud capabilities to enable AI and agentic AI for business value.

“The goal is simple: Keep technology aligned with business growth, while staying agile in a rapidly changing digital landscape,” she says. “Cloud transformation isn’t a destination — it’s a journey. At P&G, we know that success comes from aligning technology decisions with business outcomes and by embracing flexibility.”

Get data, and the data culture, ready for AI

8 December 2025 at 05:00

When it comes to AI adoption, the gap between ambition and execution can be impossible to bridge. Companies are trying to weave the tech into products, workflows, and strategies, but good intentions often collapse under the weight of day-to-day realities: messy data and the lack of a clear plan.

“That’s the challenge we see most often across the global manufacturers we work with,” says Rob McAveney, CTO at software developer Aras. “Many organizations assume they need AI, when the real starting point should be defining the decision you want AI to support, and making sure you have the right data behind it.”

Nearly two-thirds of leaders say their organizations have struggled to scale AI across the business, according to a recent McKinsey global survey. Many can’t move beyond pilot programs, a challenge that’s even more pronounced among smaller organizations, and when pilots fail to mature, investment decisions become harder to justify.

A typical issue is the data simply isn’t ready for AI. Teams try to build sophisticated models on top of fragmented sources or messy data, hoping the technology will smooth over the cracks.

“From our perspective, the biggest barriers to meaningful AI outcomes are data quality, data consistency, and data context,” McAveney says. “When data lives in silos or isn’t governed with shared standards, AI will simply reflect those inconsistencies, leading to unreliable or misleading outcomes.”

It’s an issue that impacts almost every sector. Before organizations double down on new AI tools, they must first build stronger data governance, enforce quality standards, and clarify who actually owns the data meant to fuel these systems.

Making sure AI doesn’t take the wheel

In the rush to adopt AI, many organizations forget to ask the fundamental question of what problem actually needs to be solved. Without that clarity, it’s difficult to achieve meaningful results.

Anurag Sharma, CTO of VyStar Credit Union, believes AI is just another tool that’s available to help solve a given business problem, and says every initiative should begin with a clear, simple statement of the business outcome it’s meant to deliver. He encourages his team to isolate issues AI could fix, and urges executives to understand what will change and who will be affected before anything moves forward.

“CIOs and CTOs can keep initiatives grounded by insisting on this discipline, and by slowing down the conversation just long enough to separate the shiny from the strategic,” Sharma says.

This distinction becomes much easier when an organization has an AI center of excellence (COE) or a dedicated working group focused on identifying real opportunities. These teams help sift through ideas, set priorities, and ensure initiatives are grounded in business needs rather than buzz.

The group should also include the people whose work will be affected by AI, along with business leaders, legal and compliance specialists, and security teams. Together, they can define baseline requirements that AI initiatives must meet.

“When those requirements are clear up front, teams can avoid pursuing AI projects that look exciting but lack a real business anchor,” says Kayla Underkoffler, director of AI security and policy advocacy at security and governance platform Zenity.

She adds that someone in the COE should have a solid grasp of the current AI risk landscape. That person should be ready to answer critical questions, knowing what concerns need to be addressed before every initiative goes live.

“A plan could have gaping cracks the team isn’t even aware of,” Underkoffler says. “It’s critical that security be included from the beginning to ensure the guardrails and risk assessment can be added from the beginning and not bolted on after the initiative is up and running.”

In addition, there should be clear, measurable business outcomes to make sure the effort is worthwhile. “Every proposal must define success metrics upfront,” says Akash Agrawal, VP of DevOps and DevSecOps at cloud-based quality engineering platform LambdaTest, Inc. “AI is never explored, it’s applied.”

He recommends companies build in regular 30- or 45-day checkpoints to ensure the work continues to align with business objectives. And if the results don’t meet expectations, organizations shouldn’t hesitate to reassess and make honest decisions, he says, even if that means walking away from the initiative altogether.

Yet even when the technology looks promising, humans still need to remain in the loop. “In an early pilot of our AI-based lead qualification, removing human review led to ineffective lead categorization,” says Shridhar Karale, CIO at sustainable waste solutions company Reworld. “We quickly retuned the model to include human feedback, so it continually refines and becomes more accurate over time.”

When decisions are made without human validation, organizations risk acting on faulty assumptions or misinterpreted patterns. The aim isn’t to replace people, but to build a partnership in which humans and machines strengthen one another.

Data, a strategic asset

Ensuring data is managed effectively is an often overlooked prerequisite for making AI work as intended. Creating the right conditions means treating data as a strategic asset: organizing it, cleaning it, and having the right policies in place so it stays reliable over time.

“CIOs should focus on data quality, integrity, and relevance,” says Paul Smith, CIO at Amnesty International. His organization works with unstructured data every day, often coming from external sources. Given the nature of the work, the quality of that data can be variable. Analysts sift through documents, videos, images, and reports, each produced in different formats and conditions. Managing such a high volume of messy, inconsistent, and often incomplete information has taught them the importance of rigor.

“There’s no such thing as unstructured data, only data that hasn’t yet had structure applied to it,” Smith says. He also urges organizations to start with the basics of strong, everyday data-governance habits. That means checking whether the data is relevant and ensuring it’s complete, accurate, and consistent, since outdated information can skew results.

Smith also emphasizes the importance of verifying data lineage. That includes establishing provenance — knowing where the data came from and whether its use meets legal and ethical standards — and reviewing any available documentation that details how it was collected or transformed.

In many organizations, messy data comes from legacy systems or manual entry workflows. “We strengthen reliability by standardizing schemas, enforcing data contracts, automating quality checks at ingestion, and consolidating observability across engineering,” says Agrawal.
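An ingestion-time contract check of that kind can be as simple as the sketch below; the fields and rules are illustrative, not LambdaTest’s production contracts.

```python
# Minimal sketch of an ingestion-time data contract of the kind Agrawal
# mentions: declare the expected schema and reject or quarantine records
# that violate it. Field names and rules are illustrative.
CONTRACT = {
    "customer_id": {"type": str, "required": True},
    "amount":      {"type": float, "required": True, "min": 0.0},
    "currency":    {"type": str, "required": True, "allowed": {"USD", "EUR"}},
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record."""
    errors = []
    for field, rule in CONTRACT.items():
        value = record.get(field)
        if value is None:
            if rule.get("required"):
                errors.append(f"missing {field}")
            continue
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        if "min" in rule and isinstance(value, (int, float)) and value < rule["min"]:
            errors.append(f"{field}: below minimum {rule['min']}")
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field}: {value!r} not in {rule['allowed']}")
    return errors

print(validate({"customer_id": "C-1", "amount": -4.0, "currency": "GBP"}))
```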

When teams trust the data, their AI outcomes improve. “If you can’t clearly answer where the data came from and how trustworthy is it, then you aren’t ready,” Sharma adds. “It’s better to slow down upfront than chase insights that are directionally wrong or operationally harmful, especially in the financial industry where trust is our currency.”

Karale says that at Reworld, they’ve created a single source of truth data fabric, and assigned data stewards to each domain. They also maintain a living data dictionary that makes definitions and access policies easy to find with a simple search. “Each entry includes lineage and ownership details so every team knows who’s responsible, and they can trust the data they use,” Karale adds.
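A stripped-down version of such a dictionary entry, with lineage and an accountable owner attached, might look like this; the dataset and steward names are invented for illustration.

```python
# Sketch of a "living data dictionary" entry of the kind Karale describes:
# each dataset records its definition, owner, lineage, and access policy,
# and a simple keyword search surfaces the entry. Fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    name: str
    definition: str
    owner: str                                         # accountable data steward
    lineage: list[str] = field(default_factory=list)   # upstream sources
    access_policy: str = "internal"

CATALOG = [
    DictionaryEntry(
        name="facility_throughput_daily",
        definition="Tons processed per facility per day",
        owner="operations-data-steward",
        lineage=["scale_events_raw", "facility_master"],
    ),
]

def search(term: str) -> list[DictionaryEntry]:
    term = term.lower()
    return [e for e in CATALOG if term in e.name.lower() or term in e.definition.lower()]

print(search("throughput")[0].owner)
```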

A hard look in the organizational mirror

AI has a way of amplifying whatever patterns it finds in the data — the helpful ones, but also the old biases organizations would rather leave behind. Avoiding that trap starts with recognizing that bias is often a structural issue.

CIOs can do a couple of things to prevent problems from taking root. “Vet all data used for training or pilot runs and confirm foundational controls are in place before AI enters the workflow,” says Underkoffler.

Also, try to understand in detail how agentic AI changes the risk model. “These systems introduce new forms of autonomy, dependency, and interaction,” she says. “Controls must evolve accordingly.”

Underkoffler also adds that strong governance frameworks can guide organizations on monitoring, managing risks, and setting guardrails. These frameworks outline who’s responsible for overseeing AI systems, how decisions are documented, and when human judgment must step in, providing structure in an environment where the technology is evolving faster than most policies can keep up.

And Karale says that fairness metrics, such as disparate impact, play an important role in that oversight. These measures help teams understand whether an AI system is treating different groups equitably or unintentionally favoring one over another. These metrics could be incorporated into the model validation pipeline.
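Disparate impact is typically computed as the ratio of positive-outcome rates between a protected group and a reference group, with ratios below roughly 0.8 (the “four-fifths rule”) flagged for review. A minimal check on made-up data could look like this:

```python
# Sketch of the disparate impact check Karale mentions: compare positive-
# outcome rates between groups and flag ratios below the common 4/5 rule of
# thumb. Group labels and outcomes are illustrative.
def disparate_impact(outcomes: list[int], groups: list[str],
                     protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates: protected group vs reference group."""
    def rate(g: str) -> float:
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected) if selected else 0.0
    return rate(protected) / rate(reference) if rate(reference) else float("nan")

outcomes = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]          # 1 = approved by the model
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}", "flag" if ratio < 0.8 else "ok")
```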

Domain experts can also play a key role in spotting and retraining models that produce biased or off-target outputs. They understand the context behind the data, so they’re often the first to notice when something doesn’t look right. “Continuous learning is just as important for machines as it is for people,” says Karale.

Amnesty International’s Smith agrees, saying organizations need to train their people continuously to help them pick out potential biases. “Raise awareness of risks and harms,” he says. “The first line of defense or risk mitigation is human.”

Kongsberg to equip Swedish CV90s with RS4 systems

8 December 2025 at 04:11
Kongsberg Defence & Aerospace has signed a new contract with the Swedish Defence Materiel Administration (FMV) to deliver PROTECTOR remote weapon stations (RWS) for use on several Swedish Army platforms, including the CV90 (local name Stridsfordon 90) infantry fighting vehicle. According to a company announcement issued on December 8, the contract is valued at over […]

Off-Axis Rotation For Amiga-Themed Levitating Lamp

8 December 2025 at 04:00

Do you remember those levitating lamps that were all the rage some years ago? Floating light bulbs, globes, you name it. After the initial craze of expensive desk toys, a wave of cheap kits became available from the usual suspects. [RobSmithDev] wanted to make a commemorative lamp for the Amiga’s 40th anniversary, but… it was missing something. Sure, the levitating red-and-white “boing” ball looked good, but in the famous demo, the ball is spinning at a jaunty angle. You can’t do that with mag-lev… not without a hack, anyway.

The hack [RobSmith] decided on is quite simple: the levitator is working in the usual manner, but rather than mount his “boing ball” directly to the magnet, the magnet is glued to a Dalek-lookalike plinth. The plinth holds a small motor, which is mounted at an angle to the base. Since the base stays vertical, the motor’s shaft provides the jaunty angle for the 3D-printed boing ball’s rotation. The motor is powered by the same coil that came with the kit to power the LEDs– indeed, the original LEDs are reused. An interesting twist is that the inductor alone was not able to provide enough power to run even the motor by itself: [Rob] had to add a capacitor to tune the LC circuit to the ~100 kHz frequency of the base coil. While needing to tune an antenna shouldn’t be any sort of surprise, neither we nor [Rob] were thinking of this as an antenna, so it was a neat detail to learn.
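For anyone wanting to replicate the tuning step, the resonant frequency of an LC circuit is f = 1/(2π√(LC)), so you can solve for the capacitor once you know (or guess) the coil’s inductance. The 100 µH value below is an assumed example, not a measurement from [Rob]’s coil.

```python
# Quick check of the LC tuning step: for a target resonance near the base
# coil's ~100 kHz drive frequency, solve f = 1/(2*pi*sqrt(L*C)) for C.
# The inductance is an assumed example value, not measured from the build.
from math import pi, sqrt

f_target = 100e3       # Hz, approximate drive frequency of the base coil
L = 100e-6             # H, assumed inductance of the pickup coil

C = 1 / ((2 * pi * f_target) ** 2 * L)
print(f"C ≈ {C * 1e9:.1f} nF")                       # ~25 nF with these values
print(f"check: f = {1 / (2 * pi * sqrt(L * C)) / 1e3:.1f} kHz")
```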

With the hard drive-inspired base — which eschews inserts for self-tapping screws — the resulting lamp makes a lovely homage to the Amiga Computer in its 40th year.

We’ve seen these mag-lev modules before, but the effect is always mesmerizing.  Of course, if you want to skip the magnets, you can still pretend to levitate a lamp with tensegrity.
