
Salesforce: Some Customer Data Accessed via Gainsight Breach

An attack on the app of CRM platform provider Gainsight led to the data of hundreds of Salesforce customers being compromised, highlighting the ongoing threats posed by third-party software in SaaS environments and illustrating how one data breach can lead to others, cybersecurity pros say.

The post Salesforce: Some Customer Data Accessed via Gainsight Breach appeared first on Security Boulevard.

Building Up to Code: Cybersecurity Risks to the UK Construction Sector

PinnacleOne recently partnered with a leading UK construction company to analyze the cybersecurity risks shaping the sector in 2025. This new report explores how evolving threats intersect with the construction industry’s unique challenges, including tight project timelines, complex supply chains, sensitive data, and high-value transactions. Aimed at CISOs and security leaders, it provides actionable guidance to balance opportunity with resilience, ensuring construction firms stay secure while building the nation’s future.

Report Overview

The UK construction sector is a vital part of the national economy, contributing approximately 5.4% of GDP and employing around 1.4 million people. However, this critical industry is increasingly the target of cyber threat actors seeking financial gains and espionage.

PinnacleOne recently collaborated with a UK construction company to review these trends and bolster their cyber strategy. In a new report, PinnacleOne synthesizes key recommendations for construction sector cyber strategy to help CISOs stay ahead of the threat.

The construction industry’s core characteristics make it a uniquely enticing target for cyber threat actors:

  • Money: Construction companies frequently handle high-value transactions, making them susceptible to financial fraud via business email compromise (BEC). Attackers can achieve significant gains by intercepting even a single large transaction.
  • Sensitive Data: Construction firms often possess a variety of sensitive data, including personal, sensitive personal, and client data, some of which is regulated by mandates like the Building Safety Act. This data is valuable to both threat actors and regulators, incentivizing attacks and regulatory scrutiny.
  • Time Sensitivity: Construction projects operate on tight schedules and budgets. Cyberattacks causing delays can lead to reputational damage and liquidity issues, as rapid payment for invoices is often mandated.
  • Broad Attack Surface: The industry’s reliance on numerous contractors, subcontractors, suppliers, and a wide array of IoT/OT devices creates multiple avenues for threat actor infiltration, presenting significant cybersecurity challenges.

For construction companies, cyber risk is inherently business risk. Cyber incidents can directly impact project timelines, budgets, and even the safety and structural integrity of the built environment. The interconnected nature of the construction ecosystem means that attackers can leverage any exposed point of entry. This, combined with slim profit margins and inconsistent cybersecurity investments, elevates the risk profile for the entire industry.

By adopting a proactive, risk-based cybersecurity approach, construction firms can strengthen their resilience and protect operational continuity and client trust. Read the full report here.

PinnacleOne Strategic Advisory Group
Preparing Enterprises for Present and Future Challenges

Unlocking the Power of Amazon Security Lake for Proactive Security

Security is a central challenge in modern application development and maintenance, requiring not just traditional practices but also a deep understanding of application architecture and data flow. While organizations now have access to rich data like logs and telemetry, the real challenge lies in translating this information into actionable insights. This article explores how leveraging those insights can help detect genuine security incidents and prevent their recurrence.

AI’s Double Edge: How AI Expands the Attack Surface & Empowers Defenders

Recently, SentinelOne published two reports highlighting each side of the cloud security challenge:

  • The Cloud Security Survey Report presents insights from 400 cybersecurity managers and practitioners covering current cloud security operations, responsibilities, perceptions of technologies, and future investment plans.
  • The Cloud Security Risk Report details five emerging risk themes for 2025 with in-depth examples of attacks leveraging risks like cloud credential theft, lateral movement, vulnerable cloud storage, supply chain risks, and cloud AI services.

One shared thread across both reports is how AI has emerged as a double-edged sword, presenting both risk and opportunity. On the risk side, AI enables unprecedented functionality, but its elevated access to data and increasingly critical position in business applications, combined with its relative newness, make it an attractive target for adversaries. Securing AI should be a top priority for organizations actively building AI-powered services.

In terms of opportunity, many AI features seem tailor-made for security use cases. Context gathered and decisions made at machine speed, the ability to query expert databases, and the ability to translate plain language into structured queries are just a few of the benefits organizations can reap from AI. AI is already proving to be an indispensable tool for defenders, significantly improving the speed, accuracy, and overall effectiveness of cloud security operations. As such, many survey respondents view AI as an opportunity to balance the asymmetry of the cloud security challenge.

This blog post examines both Risk and Opportunity and the areas in which they intersect.

The Expanding AI Attack Surface

The rapid evolution and adoption of AI represents a new attack surface for threat actors, from exploiting misconfigured cloud AI services, to compromising the tools used to build AI applications, to targeting the underlying infrastructure. While AI as a threat surface is relatively new, many of its risks will look familiar, as the same challenges reappear: misconfigurations, vulnerabilities, access management, and supply chain risk. At the same time, some elements of AI security are still genuinely nascent at this stage.

Old Threats Made New When Targeting AI

‘Dependency confusion’, in which a malicious package or tool is mistaken for a trusted one because it shares the same name, is an old threat. It has not been a credible risk for some time: most DevOps pipelines no longer allow packages or tooling to be swapped so easily, and open-source package marketplaces no longer allow the same name to be reused. However, this old risk has become new again in the context of the Model Context Protocol.

Model Context Protocol (MCP) enables LLMs to interact with external systems and allows you to build API-enabled ‘jobs’. For example, a user can ask ChatGPT to set up a recurring task to check their Gmail account for emails from an employer and mark them as high urgency. This can then be connected to another job that sends the user SMS notifications.

While there is real business value in connecting external systems and data to LLMs, this opportunity is paired with risk. MCP constantly updates its context, including its range of tools. Currently, it uses the latest tool pulled into its context, which is where the dependency confusion risk returns: if a malicious tool with the same name is uploaded, that tool will be used in earnest. Other old threats are returning with slight modifications as well. Like dependency confusion, typosquatting has taken on a new form. Typosquatting is the practice of a threat actor deliberately mimicking a name, or introducing a slight error into it, in the hope that an unsuspecting user will click on the bad domain.
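One defensive pattern against MCP-style dependency confusion is to pin each approved tool by a hash of its definition rather than trusting whatever latest tool shares the name. The sketch below assumes a simple name-to-definition tool registry; the `gmail_check` name used in the example is invented for illustration:

```python
import hashlib


def digest(tool_definition: str) -> str:
    """SHA-256 digest of a tool's definition text."""
    return hashlib.sha256(tool_definition.encode()).hexdigest()


def pin_tools(trusted: dict) -> dict:
    """Record a digest for each trusted tool definition at review time."""
    return {name: digest(defn) for name, defn in trusted.items()}


def resolve_tool(name: str, definition: str, pinned: dict) -> str:
    """Accept a tool only if its name is pinned AND its definition hash matches.

    A same-named tool uploaded later with a different definition is rejected
    instead of silently replacing the trusted one.
    """
    expected = pinned.get(name)
    if expected is None:
        raise LookupError(f"tool {name!r} is not on the allowlist")
    if digest(definition) != expected:
        raise ValueError(f"tool {name!r} does not match its pinned definition")
    return definition
```

The same pin-by-hash idea underlies lockfiles in package managers; applying it to MCP tool definitions removes the "latest wins" behavior that dependency confusion exploits.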

With the rise of LLM-assisted code development, now popularly known as “vibe coding”, the threat of typosquatting has been reborn. Though vibe coding may bring real gains in development speed, it carries an added security risk: LLMs have been observed hallucinating slightly wrong names for open-source packages, a typosquatting phenomenon known as slopsquatting. From a threat actor’s perspective, the opportunity is to look for commonly AI-hallucinated package names and upload malicious packages under those names, in the hope that other developers using the same LLM will generate the same incorrect package names, leading to downloads.

One researcher proved this concept with ChatGPT and a hallucinated package name resembling a popular Hugging Face tool. The researcher created a dummy package under the hallucinated name to monitor requests from other users’ code that might use the same squat name, and it received more than 30,000 downloads from January to March of this year.
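A minimal mitigation sketch: treat any LLM-suggested package name that is not on a reviewed allowlist as suspect, and hold it for manual review rather than installing it. The package names and the allowlist below are illustrative only:

```python
def vet_packages(requested: list, known_good: set) -> tuple:
    """Split LLM-suggested package names into approved and suspect lists.

    `known_good` is the set of package names a team has actually reviewed;
    anything outside it (for example, a hallucinated name) is flagged for
    manual review instead of being passed to the package installer.
    """
    approved = [pkg for pkg in requested if pkg in known_good]
    suspect = [pkg for pkg in requested if pkg not in known_good]
    return approved, suspect
```

In practice this check would sit in the CI pipeline or a pre-commit hook, so that a slopsquatted name never reaches `pip install` unreviewed.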

Other Familiar Threat Vectors

The first blog we published in this three-part series showed how misconfigurations and compromised credentials, despite being arguably basic attack vectors, continue to be the top two risks cloud security teams face due to increasingly sophisticated attacks. This theme holds true when looking at the AI attack surface.

Leaked Credentials

In addition to the risk of slopsquatting, developers leveraging LLMs to help generate code should consider the safety implications of leaked credentials before shipping to production. In a recent study across 20,000 Copilot-enabled repositories, the secret leakage rate was found to be 39% higher than across all other repositories (6.4% of Copilot-enabled repositories leaked secrets versus a 4.6% industry-wide rate). Additionally, a small-scale study of the prompt-to-webapp business Lovable found that nearly half of the 2,000 domains analyzed were leaking JSON Web Tokens, API keys, and cloud credentials.

An easy solve here might be to include safety instructions alongside the prompts. After all, in these instances the LLM is performing its primary role of developing code; it simply has not been asked to keep the extra consideration of security in mind.
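Alongside prompt-level safety instructions, a lightweight secret scan before commit can catch many of these leaks before code ships. The sketch below uses two illustrative patterns only; production scanners ship hundreds of rules:

```python
import re

# Illustrative rules, not an exhaustive set. The AWS access key ID prefix
# "AKIA" is documented by AWS; the generic rule is a loose heuristic.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}


def scan_for_secrets(text: str) -> list:
    """Return (rule_name, matched_text) pairs for likely leaked credentials."""
    findings = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((rule, match.group(0)))
    return findings
```

Wired into a pre-commit hook, a non-empty result would block the commit and prompt the developer to move the value into a secrets manager.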

Misconfigurations

AI services provided by cloud providers like AWS SageMaker, Azure OpenAI, and Google Vertex AI are similar to cloud services like virtual machines or application services. Each resource could potentially be targeted by threat actors and exploited to compromise that service or gain access to the broader cloud environment. The key difference is that AI services are new and it will take time to ensure that default roles, permissions, and configurations aren’t overly permissive or otherwise grant access to sensitive actions.

There are a few scenarios relating to SageMaker detailed in the Risk Report. For example, when a SageMaker Studio notebook is launched, its user profile is created with a default AWS role. If attackers gain access to SageMaker Studio notebooks, they can leverage these default permissions to retrieve metadata about existing tables and databases, delete original data, and replace it with new, misleading, or malicious data, compromising model accuracy and leading to faulty business decisions, an approach known as Glue data poisoning. Another approach highlighted in the report is abusing secrets managers for cross-domain access: by enumerating and retrieving secrets from unrelated SageMaker domains, or any secrets tagged with SageMaker=true, attackers can achieve privilege escalation, lateral movement, and data exfiltration across the AWS environment.
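Many of these scenarios come down to default roles that are broader than the workload needs. As a simplified sketch (not a full IAM policy evaluator), an audit pass over policy documents in the standard AWS IAM JSON shape might flag statements with wildcard actions or resources:

```python
def find_overly_permissive(policy: dict) -> list:
    """Flag Allow statements that grant wildcard actions or resources.

    `policy` follows the AWS IAM policy document shape ({"Statement": [...]}).
    This is a deliberately simplified audit sketch: it ignores conditions,
    NotAction/NotResource, and policy inheritance.
    """
    flagged = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM allows a single statement object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Running a check like this against the default roles attached to SageMaker user profiles is one concrete way to shrink the blast radius of a compromised notebook.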

AI-Powered Threat Vectors

Also highlighted in our first blog was the resurgence of infostealers as they adapt to target cloud environments. Infostealers are malicious software designed to discreetly gather information from a target system by capturing keystrokes, extracting stored passwords, or scanning for sensitive files on the system and then sending the stolen information back to an attacker-controlled server.

While infostealers aren’t new, attackers are using AI to make them more powerful, from synthesizing and enriching stolen information to enabling infostealers to autonomously adapt their behavior during an attack. SentinelOne researchers uncovered Predator AI, a cloud-based infostealer that integrates with ChatGPT to automate data enrichment and add context to scanner results.

A Final Note on AI-Based Threats

This trend of increasing threats from existing tactics is something cloud security leaders are well aware of. When asked about their level of concern for specific cloud security risks and threats, security managers and practitioners indicated that their level of concern increased for every single threat category compared to last year’s survey. What is even more interesting is that the threat categories that increased the most, including accidental exposure of credentials, cryptomining and other cooption of cloud resources, and account hijacking, are threats that are obviously made more effective with AI.

Despite the new attack surfaces and sophistication of cloud-based attacks stemming from AI technology, AI is equally, if not more so, improving the critical tools defenders use and helping organizations amplify the human power behind their cloud security operations.

AI Security and Posture Management (AI-SPM)

AI-SPM is quickly becoming a critical tool to help defend these AI attack surfaces. AI-SPM helps safeguard AI models, data, and infrastructure in cloud environments by automating the inventory of AI infrastructure and services, detecting AI-native misconfigurations, and visualizing attack paths for AI workloads. It pairs with Cloud Infrastructure Entitlement Management (CIEM), which monitors the users, roles, and permissions interacting with cloud-native AI services.

Cloud security leaders emphatically agree they will benefit from AI in cloud security solutions — only 1.8% of managers and leaders surveyed said they do not expect to experience benefits from AI in cloud security solutions. The industry largely expects AI to assist.

Surprisingly, there is low recognition of the need for AI-SPM tools. When asked which cloud security technology is most important for defending their cloud environment, AI-SPM didn’t make the top 10 list. Acknowledging the impact of AI in cloud security but not prioritizing tools that specifically help with AI security is an interesting juxtaposition, highlighting the need for a more adaptive approach by security teams — something we unpack at the end of this blog.

AI Improving Cloud Security Tools

Effectively detecting threats in real time and proactively managing vulnerabilities to reduce your attack surface are arguably the two most important cloud security capabilities. When asked about the most impactful benefits of embedding AI in cloud security tools, more than half of respondents placed detecting attacks faster (51.8%) and better analyzing and scoring risks (50.3%) in their top five.

In the same vein, when asked which cloud security capabilities they had most confidence in for their organization, the top two capabilities and the two capabilities with the most improvement from last year were “Threat detection” (4.25 out of 5) and “Vulnerability scanning and assessment” (4.20 out of 5). All of this is a strong signal that AI is finally having a measurable impact on the speed, reach, and accuracy of threat detection, vulnerability scanning, and other cloud security technologies.

There is a similar impact on tools for managing and prioritizing cloud security alerts. More than half of organizations (53%) find that the majority of their alerts are false positives. While this is a harsh reality, the proliferation of false positives has pushed cloud security managers and practitioners to place incredible importance on having tools that help them prioritize alerts. Specifically, when asked how important having evidence of exploitability is for prioritizing alerts and closing high-priority items, 9 out of 10 (89.4%) respondents stated evidence of exploitability is either very important or extremely important.

Not only can AI help human analysts sift through alerts and reduce the volume of false positives, AI is also a fundamental ingredient in tools that provide evidence of exploitability through the application of advanced algorithms that analyze vast amounts of security data to identify patterns and prioritize risks.
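As a toy illustration of evidence-based prioritization (the weights and signal names below are invented for the sketch, not a published scoring model), an alert ranker might boost alerts carrying confirmed evidence of exploitability and demote likely false positives:

```python
def prioritize_alerts(alerts: list) -> list:
    """Rank alerts so those with evidence of exploitability surface first.

    Each alert is a dict with a base `severity` (1-10) plus optional boolean
    signals. The weights are illustrative only; a real system would learn or
    tune them against historical triage outcomes.
    """
    def score(alert: dict) -> int:
        s = alert.get("severity", 0)
        if alert.get("verified_exploit_path"):
            s += 5  # reachable from the internet, exploitability confirmed
        if alert.get("exposed_credentials"):
            s += 3  # credentials in play widen the blast radius
        if alert.get("likely_false_positive"):
            s -= 4  # demote noise so analysts see real risk first
        return s

    return sorted(alerts, key=score, reverse=True)
```

Note how a medium-severity alert with a verified exploit path can outrank a nominally critical one flagged as probable noise, which is exactly the behavior the survey respondents say they want.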

AI Force Multiplying Cloud Security Operations

When asked how AI will impact respondents’ cloud security capabilities, their top expected benefits centered primarily on speed and effectiveness. 53.8% of respondents said AI will “accelerate incident response” and 51.8% expect to “detect attacks faster.” These advantages stem from AI’s ability to detect patterns associated with attacks in masses of data and to provide insights into effective responses.

Potentially, the most immediate benefit of leveraging AI in cloud security teams will be alleviating the burden from the security skills shortage. This is highlighted by 52% of respondents expecting AI to “increase the effectiveness of [their] current cloud security team,” moving this expected benefit of AI up from fourth place last year to second place this year.

Together, these results reflect recognition that AI can speed up processes, and help people make better decisions. AI enables senior security professionals to perform more tasks in the same time period, and less experienced ones to handle complex tasks sooner.

At SentinelOne, we’re already seeing customers realize these benefits by using Purple AI, the world’s most advanced AI security analyst. Purple AI customers are seeing up to a 38% increase in security team efficiency, and security team members using Purple AI are able to cover 61% more endpoints. As for speed, Purple AI contributes to 63% faster identification of security threats and 55% faster remediation.

The Path Forward in the Era of AI

The story of AI in cloud security in 2025 is clearly one of dichotomy. Threat actors are rapidly innovating, leveraging AI and automation to enhance their attack capabilities, find new vulnerabilities, and streamline their campaigns. However, the cybersecurity community is also harnessing AI as a powerful ally to accelerate incident response, enhance detection accuracy, and empower security teams.

This continuous evolution demands adaptive security strategies. Siloed approaches to security are no longer sufficient and defenders must use AI to get a holistic analysis of the path from the outside world to mission targets. Cloud security platforms must integrate AI and provide teams with comprehensive and integrated defense mechanisms across cloud environments.

If this topic is of interest to you, please join us at an upcoming webinar on Thursday, July 24, 2025 to learn more about AI’s evolving role in cloud security attacks and defenses along with other insights from the Cloud Risk Report and the 2025 Cloud Security Survey. Save your spot here!

Further Reading

Disclaimer

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

The Cloud Security Challenge: Risk Intelligence & Leadership Perspectives
Sign up for this webinar happening July 24, 2025

Primary Attack Vectors Persist

The speed and innovation of our cloud and AI age is undeniable. However, opportunity comes paired with responsibility and risk. The duality of the cloud security challenge is that it has two markedly different sides. To keep cloud environments safe and secure, we need to examine introspectively how we can improve the internal people, processes, and technology charged with cloud defense. We also need to understand the external threat landscape, including new and persistent threat actor capabilities and innovations.

To this end, SentinelOne has published two reports highlighting each side of the cloud security challenge coin:

  • The Cloud Security Survey Report presents insights from 400 cybersecurity managers and practitioners covering current cloud security operations, responsibilities, perceptions of technologies, and future investment plans.
  • The Cloud Security Risk Report details five emerging risk themes for 2025 with in-depth examples of attacks leveraging risks like cloud credential theft, lateral movement, vulnerable cloud storage, supply chain risks, and cloud AI services.

This is the first of three blogs that highlight key points of alignment and of contrast between the two reports: Primary Attack Vectors Persist, AI’s Double Edge: How AI Expands the Attack Surface & Empowers Defenders, and Supply Chain Attacks Are the Hidden Threat in Your Cloud Pipeline.

A feature of both reports is the need to get the basics of cloud security right – specifically managing and reducing cloud misconfigurations and limiting the compromise of cloud credentials. They remain the most common initial access points for threat actors, and a daily struggle for security teams.

Leaving The Door Unlocked | Cloud Misconfigurations

Cloud misconfigurations have been in the spotlight for security teams ever since Gartner opined in 2019 that “through 2025, 99% of cloud security failures will be the customer’s fault.” Despite cloud security technology innovation and multiple generations of Cloud Security Posture Management (CSPM) solutions, the challenge remains, and high profile breaches are still occurring due to basic misconfigurations.

The classic (almost clichéd) cloud misconfiguration seen time and time again in headlines is cloud storage left public and unencrypted. Yet, despite the fair warning and prevalence of CSPMs, these breaches continue. Examples of these provided by the Risk Report include a December 2024 breach when a Volkswagen software subsidiary’s misconfigured S3 bucket leaked sensitive details of over 800,000 car owners. Another example from December of last year was when the threat actor Nemesis, specialized in credential theft and targeting cloud storage, was found to have misconfigured their own S3 buckets.

The Cloud Security Survey offers some conflicting answers on the misconfiguration topic. When ranking the importance of cloud security capabilities, respondents placed CSPM second (the first was Cloud Detection and Response, CDR). On capability efficacy, CSPM also ranked highly, tying for third place when respondents rated their organization’s satisfaction with their CSPM of choice, scoring 4.22 out of 5.

So, our chief weapon against cloud misconfigurations is widely seen as both vital and effective. This contrasts with respondents’ faith in their organization’s “misconfiguration assessment” capability, which unfortunately ranks last of the eight cloud security functions listed, with a score of 3.98 out of 5.

A potential clue to the discrepancy might lie in prioritization and noise management. After all, CSPMs are notoriously noisy, and the repeated structure of many cloud environments can result in cascading alerts for the same issue seen in multiple areas. A massive 86.9% of respondents confirm they face challenges validating and prioritizing alerts and cloud events. Additionally, two-thirds of organizations (67.7%) agree they generate so much cloud security data that their teams struggle to reach actionable insights.
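The cascading-alert problem lends itself to simple aggregation. A sketch, assuming each alert carries a rule name, resource type, and resource ID (field names are illustrative):

```python
from collections import defaultdict


def collapse_cascading_alerts(alerts: list) -> list:
    """Group alerts that share a rule and resource type into one finding.

    Repeated structure in cloud environments (e.g. the same Terraform module
    deployed per region) often fires the same CSPM rule many times; collapsing
    by (rule, resource_type) turns N alerts into one finding listing the N
    affected resources, which is far easier to triage.
    """
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["rule"], alert["resource_type"])].append(alert["resource_id"])
    return [
        {"rule": rule, "resource_type": rtype, "affected": ids, "count": len(ids)}
        for (rule, rtype), ids in groups.items()
    ]
```

Even this naive grouping can shrink an alert queue dramatically; real platforms refine it with ownership, environment, and severity dimensions.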

Looking forward, the nature of sophisticated cloud attacks is going to exacerbate the noise and prioritization issue. We see a rise in threat actors targeting and abusing misconfigurations in new ways, chains of misconfigurations, and attacker-driven misconfigurations in particular.

Chains of Misconfigurations

As defenders become more adept at closing the door on obvious cloud breach opportunities, attackers are increasingly chaining minor, less significant misconfigurations together to enable deeper compromise and lateral movement within cloud environments. To see an example of chained misconfigurations, refer to the fictional case study revolving around an e-commerce store leveraging Lambda functions detailed in the Risk Report.

Threat actors adapting in this way presents a new challenge for defenders. Security teams whose CSPM workflow starts with critical-severity misconfigurations and works its way down risk missing chains of lower-severity issues that present far more significant risk when considered in context with each other.
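Chain detection is essentially path-finding over a graph of reachability created by individual misconfigurations. A minimal sketch, with illustrative resource names:

```python
def find_misconfig_chains(edges: dict, entry_points: list, targets: set) -> list:
    """Depth-first search for paths that chain individually minor misconfigs.

    `edges` maps a resource to the resources an attacker could reach from it
    via some misconfiguration. A path from an externally reachable entry point
    to a sensitive target is a chain worth prioritizing even if every single
    hop is low severity on its own.
    """
    chains = []

    def dfs(node, path):
        if node in targets:
            chains.append(path)
            return
        for nxt in edges.get(node, []):
            if nxt not in path:  # avoid cycles
                dfs(nxt, path + [nxt])

    for entry in entry_points:
        dfs(entry, [entry])
    return chains
```

In this framing, a publicly invokable Lambda function reaching a shared role that can read a customer data bucket is one chain, even though each hop alone might rate as low severity.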

Attacker-Driven Misconfigurations

Recent complex cloud attack campaigns have all involved adversaries causing cloud misconfigurations as they modify or disable cloud services. For example, ScarletEel builds the automated disabling of cloud security services into its cryptomining campaigns. More commonly, threat actors create overly permissive roles (a common cloud misconfiguration) to enable easier lateral movement and discovery. One example in action is the large-scale extortion campaign (potentially run by Nemesis, of the poor cloud storage habits noted earlier), in which attackers deploy a series of Lambda functions to automate the creation of these misconfigured identities.

This raises an interesting challenge for cloud defenders: how to differentiate the cloud misconfigurations stemming from their organization’s deployment choices from those that external or internal threat actors may have caused. If your view of cloud misconfigurations is static, capturing only what exists and not when or how it appeared, this differentiation will be very difficult, if not impossible.
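A first step toward a non-static view is diffing periodic snapshots of findings. A minimal sketch, treating each finding as a (resource, issue) pair:

```python
def diff_misconfig_snapshots(previous: set, current: set) -> dict:
    """Compare two point-in-time sets of (resource, issue) findings.

    A static view only answers *what* is misconfigured; diffing periodic
    snapshots answers *when* each issue appeared, the first step toward
    separating deployment mistakes from attacker-driven changes such as a
    newly created overly permissive role.
    """
    return {
        "appeared": current - previous,  # candidates for attacker-driven change
        "resolved": previous - current,
    }
```

Newly appeared findings can then be correlated with change records: a misconfiguration with no matching deployment is a strong signal that someone else put it there.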

A Final Note on Misconfigurations

As organizations adopt newer cloud services to build and leverage AI capabilities, new areas for misconfiguration risk are emerging. We investigate AI as a novel attack surface and as a novel tool for defenders in more detail in our second blog in this three part series.

Keys to the Kingdom | Compromised Credentials

With a history of bug bounty hunting and whitehat hacking, our Sr. Director Product Management for Cloud Native Security, Anand Prakash, knows the power compromised credentials give attackers.

“Cloud platforms host vast amounts of interconnected data and services, meaning a single compromised credential can grant attackers access to multiple systems simultaneously– even if your application is otherwise secure.” – Anand Prakash, Sr. Director Product Management for Cloud Native Security at SentinelOne

Despite their criticality, our Cloud Security Survey found that of core cloud security capabilities, secret scanning, which helps defenders hunt for leaked credentials, was ranked last. Less than 13% of responders included secret scanning within a list of top five most important cloud security capabilities out of a list that included cloud detection and response, cloud workload protection platforms (CWPPs), Infrastructure-as-Code (IaC) scanning, and more.

Further, secret scanning was second to last in a ranking of the effectiveness of cloud security tools and capabilities. So, our defenders view the capability of scanning for compromised credentials as both non-critical and non-effective relative to other capabilities. Perhaps most drastic is the high percentage of respondents who do not currently possess secret scanning capabilities: nearly 30% of respondents have either not begun implementing secret scanning or have no plans to do so.

The survey does predict, however, that the relative importance of secret scanning will rise in the near future, as DevSecOps and shifting security left into the development pipeline become an increasing focus for security teams. This shift in importance cannot come too soon. While secret scanning may not rank highly within the security ecosystem, the importance of credentials and access across an enterprise organization cannot be overstated. Elsewhere in the survey, respondents rate their concern about data breaches highly while underplaying the clear potential of leaked secrets to enable those very attacks.

Real World Examples

As Anand Prakash noted, a single compromised credential can grant attackers access to multiple systems simultaneously, making lateral movement after getting credential access an expected escalation.

Organizations are unwittingly hardcoding credentials or leaking them in publicly accessible code-sharing services. One example highlighted by the Risk Report is the discovery of over 1.1 million secrets leaked across just 58,000 web applications with accessible environment files. Another was last year’s high-profile ShinyHunters campaign, involving credential harvesting on endpoints and websites that resulted in numerous Snowflake breaches. Where successful, attackers directly targeted cloud-based Snowflake instances for massive data exfiltration, leading to initial presumptions that Snowflake itself had been breached.

A further high-profile example involves an xAI employee who leaked a private API key on GitHub, providing access to unreleased large language models and sensitive information from associated organizations like SpaceX – a single leak causing impacts along a supply chain. Threat actors know the power of compromised credentials and are evolving their use of infostealers and creative lateral movement across increasingly connected systems to further exploit this attack vector.

Evolution of Infostealers

Infostealers are increasingly hunting cloud and container credentials, and are being built into larger attack campaigns to increase their compromise capabilities. One example is TeamTNT’s SilentBob resource theft campaign, which deploys an infostealer after the impact of its cryptominers is established, broadening the scope of what the attacker can do next.

Conclusion | A Need for Foundational Vigilance 

The persistence of misconfigurations and compromised credentials as primary attack vectors is a stark reminder that foundational security remains paramount. While confidence in cloud security capabilities grows, true resilience requires continuous vigilance, integrated solutions, and a proactive stance against an ever-adapting adversary. It’s time to bridge the perception gap and ensure that basic cloud security hygiene is not just a checkbox, but a dynamic, AI-driven defense against the inevitable.

Join us at an upcoming webinar on Thursday, July 24, 2025 to learn more about addressing these primary attack vectors and other insights from the Cloud Risk Report and the 2025 Cloud Security Survey. Save your spot here!

Further Reading


LATEST CYBERTHREATS AND ADVISORIES - FEBRUARY 10, 2023

Cyberattacks wreak havoc on the U.K., LockBit brings big business to its knees, and a massive VMware ransomware campaign unfolds. Here are the latest threats and advisories for the week of February 10, 2023.

Threat Advisories and Alerts 

Massive Ransomware Campaign Targets VMware ESXi Servers 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has issued a script for recovering VMware ESXi servers encrypted by the massive ESXiArgs ransomware campaign, which began last week. At the time of writing, 2,800 servers are known to have been encrypted. As for the script, the U.S. cybersecurity organization has said, "CISA compiled this tool based on publicly available resources, including a tutorial by Enes Sonmez and Ahmet Aykac." To avoid complications, CISA has warned users to understand how the script affects their systems before running it.

Source: https://www.bleepingcomputer.com/news/security/cisa-releases-recovery-script-for-esxiargs-ransomware-victims/  

Atlassian Releases Patches for Critical Vulnerability in Jira Software 

Australian software company Atlassian has released security patches to fix a critical vulnerability (CVE-2023-22501) in its Jira Service Management Server and Data Centre. If successfully exploited, the vulnerability could allow cybercriminals to impersonate other users and obtain remote access to affected systems. The affected Jira versions include 5.3.0 to 5.3.1 and 5.4.0 to 5.5.0. Users and admins are advised to apply the appropriate patches immediately.  
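Admins triaging CVE-2023-22501 first need to know whether their deployed version falls inside the ranges Atlassian listed as affected. A minimal sketch of that check, in Python, using the inclusive ranges from the advisory (the helper function itself is illustrative, not an official Atlassian tool):

```python
# Check whether a Jira Service Management version falls inside the
# ranges reported as affected by CVE-2023-22501.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '5.3.1' into (5, 3, 1)."""
    return tuple(int(part) for part in v.split("."))

# Affected ranges (inclusive), per the advisory.
AFFECTED_RANGES = [
    ("5.3.0", "5.3.1"),
    ("5.4.0", "5.5.0"),
]

def is_affected(version: str) -> bool:
    """True if the given version sits inside any affected range."""
    v = parse_version(version)
    return any(
        parse_version(lo) <= v <= parse_version(hi)
        for lo, hi in AFFECTED_RANGES
    )

print(is_affected("5.4.1"))  # True: inside 5.4.0-5.5.0
print(is_affected("5.2.0"))  # False: predates the affected ranges
```

Tuple comparison handles multi-part versions correctly here (so "5.10.0" would sort after "5.9.0"), which a plain string comparison would not.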

Source: https://www.csa.gov.sg/en/singcert/Alerts/al-2023-016  

Emerging Threats and Research 

IT Professionals Fear ChatGPT Could Be Beginning of AI-Driven Cyberattacks 

When audiences were introduced to Skynet’s nefarious artificial intelligence in the 1984 movie The Terminator, the idea of AI-powered attacks probably seemed far-fetched. Tech professionals may be beginning to think differently. According to a BlackBerry survey of 1,500 IT decision makers, 51% believe a cyberattack attributed to ChatGPT is less than a year away. The report reveals respondents’ biggest fears are ChatGPT’s ability to help bad actors craft legitimate-sounding phishing emails (53%), improve their technical know-how (49%) and spread misinformation (49%).

Source: https://www.helpnetsecurity.com/2023/02/07/chatgpt-security-risks/  

U.K. Metal Engineering Firm Suffers Cyberattack 

Vesuvius, a U.K. metal flow engineering company, was recently hit with a cyberattack that led to unauthorized access to its systems. In a statement released earlier this week, the company said, “We are working with leading cybersecurity experts to support our investigations and identify the extent of the issue, including the impact on production and contract fulfillment.” Information on the type of attack, systems affected and other details have yet to be revealed.  

Source: https://www.infosecurity-magazine.com/news/uk-metalg-firm-vesuvius-cyberattack/  

LockBit Claims Royal Mail Cyberattack 

The notorious LockBit ransomware gang has publicly claimed responsibility for the cyberattack on the U.K.’s Royal Mail. The attack was first reported on January 10 and caused severe disruption to the postal operator’s international shipping services. LockBit claims to have stolen Royal Mail’s data and threatened to publish it if their ransom isn’t paid. Royal Mail has yet to officially acknowledge that its “cyber incident” is a ransomware attack, but has resumed outbound international mail operations.  

Source: https://www.bleepingcomputer.com/news/security/lockbit-ransomware-gang-claims-royal-mail-cyberattack/  

ION Trading Pays LockBit’s Ransom after Global Disruption to Its Business 

U.K. software company ION Trading has reportedly paid a ransom to LockBit for an attack it suffered on January 31. ION has been removed from LockBit’s data leak site, and a spokesperson for the criminal group said the ransom was paid the day before its due date by a “very rich unknown philanthropist.” While paying ransoms to cybercriminals is typically discouraged, the incident was impacting ION’s clients on a global scale. Ian McShane, vice president at Arctic Wolf, said, “The cyberattack on the ION Group demonstrates how attackers can use the supply chain to cripple entire industries.”

Source: https://www.itpro.co.uk/security/ransomware/370007/ion-trading-reportedly-pays-lockbit-ransom-demands  

Canada’s Indigo Suffers Web Outage After “Cybersecurity Incident” 

Canadian books and music retailer Indigo has, like Royal Mail, suffered a “cybersecurity incident” that has affected customer orders in-store and online. The company remains quiet about the details of the incident, but David Masson, director of enterprise security at cybersecurity firm Darktrace, suggested to CBC News that the sheer length of the outage indicates it wasn’t an internal error but rather an instance of ransomware. At the time of writing, the website remains down, showing a static English/French page apologizing for the inconvenience while the company works to bring its systems back online.

Source: https://www.cbc.ca/news/business/indigo-cybersecurity-1.6742230 

To stay updated on the latest cybersecurity threats and advisories, look for weekly updates on the (ISC)² blog. Please share other alerts and threat discoveries you’ve encountered and join the conversation on the (ISC)² Community Industry News board.   
