Cisco VPNs, Email Services Hit in Separate Threat Campaigns
Strengthen NIS2 compliance by preventing weak and compromised passwords with Enzoic's continuous credential protection.
The post NIS2 Compliance: Maintaining Credential Security appeared first on Security Boulevard.
Session 6C: Sensor Attacks
Authors, Creators & Presenters: Shuguang Wang (City University of Hong Kong), Qian Zhou (City University of Hong Kong), Kui Wu (University of Victoria), Jinghuai Deng (City University of Hong Kong), Dapeng Wu (City University of Hong Kong), Wei-Bin Lee (Information Security Center, Hon Hai Research Institute), Jianping Wang (City University of Hong Kong)
PAPER
NDSS 2025 - Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems
Autonomous driving systems (ADS) heavily depend on multi-sensor fusion (MSF) perception systems to process sensor data and improve the accuracy of environmental perception. However, MSF cannot completely eliminate uncertainties, and faults in multiple modules will lead to perception failures. Thus, identifying the root causes of these perception failures is crucial to ensure the reliability of MSF perception systems. Traditional methods for identifying perception failures, such as anomaly detection and runtime monitoring, are limited because they do not account for causal relationships between faults in multiple modules and overall system failure. To overcome these limitations, we propose a novel approach called interventional root cause analysis (IRCA). IRCA leverages the directed acyclic graph (DAG) structure of MSF to develop a hierarchical structural causal model (H-SCM), which effectively addresses the complexities of causal relationships. Our approach uses a divide-and-conquer pruning algorithm to encompass multiple causal modules within a causal path and to pinpoint intervention targets. We implement IRCA and evaluate its performance using real fault scenarios and synthetic scenarios with injected faults in the ADS Autoware. The average F1-score of IRCA in real fault scenarios is over 95%. We also illustrate the effectiveness of IRCA on an autonomous vehicle testbed equipped with Autoware, as well as a cross-platform evaluation using Apollo. The results show that IRCA can efficiently identify the causal paths leading to failures and significantly enhance the safety of ADS.
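To make the divide-and-conquer idea concrete, here is a minimal, hypothetical sketch (my own illustration, not the authors' IRCA implementation) of intervention-based fault localization along a single causal path: replace a module's output with a known-good reference and observe whether the system-level failure persists. The module names and the `fails_with_intervention` callback are illustrative assumptions.

```python
# Toy divide-and-conquer localization along one causal path (NOT the paper's code).
from typing import Callable, List

def localize_fault(path: List[str],
                   fails_with_intervention: Callable[[str], bool]) -> str:
    """Binary-search the causal path for the faulty module.

    fails_with_intervention(m) returns True if the system failure still occurs
    when module m's output is replaced by a fault-free reference output.
    """
    lo, hi = 0, len(path) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails_with_intervention(path[mid]):
            lo = mid + 1   # failure persists: fault lies downstream of path[mid]
        else:
            hi = mid       # failure vanishes: fault is at or upstream of path[mid]
    return path[lo]

# Example: a 4-module fusion path with a simulated fault in "lidar_cluster".
path = ["lidar_driver", "lidar_cluster", "fusion", "tracker"]
faulty = "lidar_cluster"
# Intervening at or after the faulty module masks the fault, so the failure vanishes;
# intervening strictly upstream of it leaves the failure in place.
fails = lambda m: path.index(m) < path.index(faulty)
print(localize_fault(path, fails))   # -> "lidar_cluster"
```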
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators', Authors' and Presenters' superb NDSS Symposium 2025 Conference content on the organization's YouTube channel.
The post NDSS 2025 – Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems appeared first on Security Boulevard.
The recent discovery of a cryptomining campaign targeting Amazon compute resources highlights a critical gap in traditional cloud defense. Attackers are bypassing perimeter defenses by leveraging compromised credentials to execute legitimate but privileged API calls like ec2:CreateLaunchTemplate, ecs:RegisterTaskDefinition, ec2:ModifyInstanceAttribute, and lambda:CreateFunctionUrlConfig. While detection tools identify anomalies after they occur, they do not prevent execution, lateral […]
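The article's thesis is that tightly scoping permissions for these actions is the real fix, but as a quick audit, a sketch like the following (not from the article; the event names and one-hour window are assumptions) can surface whether any of the cited API calls have recently been made in an account, using CloudTrail's event history.

```python
# Minimal CloudTrail audit sketch for the privileged API calls named above.
import boto3
from datetime import datetime, timedelta, timezone

WATCHED_EVENTS = [
    "CreateLaunchTemplate",        # ec2:CreateLaunchTemplate
    "RegisterTaskDefinition",      # ecs:RegisterTaskDefinition
    "ModifyInstanceAttribute",     # ec2:ModifyInstanceAttribute
    "CreateFunctionUrlConfig",     # lambda:CreateFunctionUrlConfig
]

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)   # illustrative window

for event_name in WATCHED_EVENTS:
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
        EndTime=end,
    )
    for event in resp.get("Events", []):
        # Each hit is worth reviewing against the IAM principal that made the call.
        print(event_name, event.get("Username"), event.get("EventTime"))
```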
The post Preventing This Week’s AWS Cryptomining Attacks: Why Detection Fails and Permissions Matter appeared first on Security Boulevard.

Similar pledges to fight scam networks were made by members of the Association of Southeast Asian Nations in the months leading up to the Bangkok conference.
The post Thailand Conference Launches International Initiative to Fight Online Scams appeared first on SecurityWeek.
Keeper Security has announced two new additions to its federal team: Shannon Vaughn as Senior Vice President of Federal and Benjamin Parrish as Vice President of Federal Operations. Vaughn will lead Keeper's federal business strategy and expansion, while Parrish will oversee the delivery and operational readiness of Keeper's federal initiatives, supporting civilian, defence and intelligence agencies as they modernise identity security to defend against pervasive cyber threats.
Vaughn brings more than two decades of private sector, government and military service, with a career focused on securing sensitive data, modernising federal technology environments and supporting mission-critical cybersecurity operations. Prior to joining Keeper, Vaughn served as General Manager of Virtru Federal, where he led business development, operations and delivery for the company’s federal engagements. During his career, he has held multiple senior leadership roles at high-growth technology companies, including Vice President of Technology, Chief Product Owner and Chief Innovation Officer, and has worked closely with U.S. government customers to deploy secure, scalable solutions.
“Federal agencies are operating in an elevated environment with unprecedented cyber risk. Next-generation privileged access management to enforce zero-trust security is essential,” said Darren Guccione, CEO and Co-founder of Keeper Security. “Shannon and Ben bring a unique combination of operational military experience, federal technology leadership and a deep understanding of zero-trust security. They know how agencies operate, how threats evolve and how to translate modern security architecture into real mission outcomes. These exceptional additions to our team will be instrumental as we expand Keeper’s role in securing the federal government’s most critical systems, personnel and warfighters.”
Vaughn is a career member of the U.S. Army with more than 20 years of service and currently holds the rank of Lieutenant Colonel in the Army Reserve. In addition to his operational leadership, Vaughn is a Non-Resident Fellow with the Asia Program at the Foreign Policy Research Institute, where he contributes research and analysis on the intersection of future technology threats and near-peer adversaries. He has a graduate degree from Georgetown University and undergraduate degrees from the University of North Georgia and the Defense Language Institute.
To support execution across federal programs, Parrish oversees the delivery and operational readiness of Keeper’s federal initiatives. Parrish brings extensive experience leading federal operations, software engineering and secure deployments across highly regulated government environments. Prior to joining Keeper, he held senior leadership roles supporting federal customers, where he oversaw cross-functional teams responsible for platform reliability, customer success and large-scale deployments.
Parrish is a retired U.S. Army officer with more than 20 years of service across Field Artillery, Aviation and Cyber operations. His experience includes a combat deployment to Iraq and operational support to national cyber mission forces through the Joint Mission Operations Center. He has supported Department of Defense and Intelligence Community missions, including work with the White House Communications Agency, Joint Special Operations Command, Defense Intelligence Agency and National Reconnaissance Office. Parrish holds a graduate degree in Computer Science from Arizona State University and an undergraduate degree in Computer Science from James Madison University.
In his role at Keeper, Parrish aligns product, engineering, security and customer success teams and works closely with government stakeholders to ensure secure, reliable deployments that meet stringent federal mission, compliance and operational requirements.
“Federal agencies are being asked to modernise faster while defending against increasingly sophisticated, identity-driven attacks,” said Shannon Vaughn, Senior Vice President of Federal at Keeper Security. “I joined Keeper because we are focused on what actually produces tangible cyber benefits: controlling who has access to what, with full auditing and reporting – whether for credentials, endpoint or access management. We are going to win by being obsessive about access control that is easy to deploy and hard to break.”
These appointments come as federal agencies accelerate adoption of zero-trust architectures and modern privileged access controls in response to escalating credential-based attacks. The FedRAMP Authorised, FIPS 140-3 validated Keeper Security Government Cloud platform secures privileged access across hybrid and cloud environments for federal, state and local government agencies seeking to manage access to critical systems such as servers, web applications and databases.
The post Keeper Security Bolsters Federal Leadership to Advance Government Cybersecurity Initiatives appeared first on IT Security Guru.
UK-based AI safety and governance company CultureAI has been named as one of the participants in Microsoft’s newly launched Agentic Launchpad, a technology accelerator aimed at supporting startups working on advanced AI systems. The inclusion marks a milestone for CultureAI’s growth and signals broader industry interest in integrating AI safety and usage control into emerging autonomous AI ecosystems.
The Agentic Launchpad is a collaborative programme from Microsoft, NVIDIA, and WeTransact designed to support software companies in the United Kingdom and Ireland that are developing agentic AI solutions. More than 500 companies applied, and the selected cohort of 13 organisations represents some of the most forward-thinking work shaping the future of AI. The initiative is part of Microsoft's wider investment in UK AI research and infrastructure, which includes nearly $30 billion committed to developing cloud, AI, and innovation capabilities in the region.
Selected companies in the program receive access to technical resources from Microsoft and NVIDIA, including engineering mentorship, cloud credits via Microsoft Azure, and participation in co-innovation sessions. Participants also gain commercial support, such as marketing assistance, networking opportunities and opportunities to showcase products to enterprise customers and investors.
CultureAI's inclusion underscores an increasing industry emphasis on safe and compliant AI deployment. The company's platform focuses on detecting unsafe AI usage, enforcing organisational policies during AI interactions, and providing real-time coaching to guide secure behaviour. This type of AI usage control has drawn interest from sectors with strict data governance and security requirements, including finance, healthcare, and regulated industries.
By working within the Agentic Launchpad cohort, CultureAI gains a strategic opportunity to integrate its usage risk and compliance controls with agentic AI development frameworks — an area where autonomous systems may introduce new vectors for inadvertent data exposure or misuse if not carefully governed.
Agentic AI represents a next stage of artificial intelligence that extends beyond generative tasks like text or image creation toward systems that can plan, act and autonomously execute sequences of decisions. This shift brings potential benefits in efficiency and automation, but also raises new challenges for risk management and governance in production environments.
Experts have noted that while initiatives like the Agentic Launchpad aim to accelerate innovation, they also emphasise robust tooling and ecosystem support to address security, operational governance and compliance in emerging AI applications. In this context, companies specialising in usage control and risk detection, such as CultureAI, might play a growing role as enterprises adopt more autonomous AI technologies.
The inclusion of AI safety-oriented companies like CultureAI in accelerator programmes reflects a broader trend in the industry toward embedding governance and risk mitigation into the core of AI development cycles. As agentic AI systems begin to move from laboratories into real-world use cases, particularly in sensitive or regulated domains, ensuring safe interaction with data and policy compliance may become a key differentiator for enterprise adoption.
“This recognition reflects the urgency organisations face today,” said James Moore, Founder & CEO of CultureAI. “AI is now embedded across everyday workflows, and companies need a safe, scalable way to adopt it. Our mission is to give them that confidence — through visibility, real-time coaching and adaptive guardrails that protect data without slowing innovation.”
The post CultureAI Selected for Microsoft’s Agentic Launchpad Initiative to Advance Secure AI Usage appeared first on IT Security Guru.
Live from AWS re:Invent, Snir Ben Shimol makes the case that vulnerability management is at an inflection point: visibility is no longer the differentiator—remediation is. Organizations have spent two decades getting better at scanning, aggregating and reporting findings. But the uncomfortable truth is that many of today’s incidents still trace back to vulnerabilities that were..
The post Vulnerability Management’s New Mandate: Remediate What’s Real appeared first on Security Boulevard.

via the insightful artistry and dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Fifteen Years’ appeared first on Security Boulevard.
Amazon is warning organizations that a North Korean effort to impersonate IT workers is more extensive than many cybersecurity teams may realize, after discovering that the cloud service provider itself was also victimized. A North Korean imposter was uncovered working as a remote systems administrator in the U.S. after their keystroke input lag raised suspicions. Normally, keystroke..
The post Amazon Warns Pernicious Fake North Korea IT Worker Threat Has Become Widespread appeared first on Security Boulevard.
Session 6C: Sensor Attacks
Authors, Creators & Presenters: Yan Jiang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Yancheng Jiang (Zhejiang University), Kai Wang (Zhejiang University), Chenren Xu (Peking University), Wenyuan Xu (Zhejiang University)
PAPER
NDSS 2025 - PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation
Sensors are key components that enable various applications, e.g., home intrusion detection and environment monitoring. While various software defenses and physical protections are used to prevent sensor manipulation, this paper introduces a new threat vector, PowerRadio, which can bypass existing protections and change sensor readings at a distance. PowerRadio leverages interconnected ground (GND) wires, a standard practice for electrical safety at home, to inject malicious signals. The injected signal couples into the sensor's analog measurement wire and ultimately survives the noise filters, inducing incorrect measurements. We present three methods that can manipulate sensors by inducing static bias, periodic signals, or pulses. For instance, we show how to add stripes to the images captured by a surveillance camera or inject inaudible voice commands into conference microphones. We study the underlying principles of PowerRadio and find its root causes: (1) the lack of shielding between ground and data signal wires and (2) the asymmetry of circuit impedance that enables interference to bypass filtering. We validate PowerRadio against a surveillance system, a broadcast system, and various sensors. We believe that PowerRadio represents an emerging threat that combines the advantages of both radiated and conducted EMI, e.g., expanding the effective attack distance of radiated EMI while eliminating the requirement of line-of-sight or physical proximity. Our insights should provide guidance for enhancing sensor security and power wiring during the design phase.
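As a rough intuition for root cause (2), the toy numeric model below (my own illustration, not the paper's code or parameters) shows how an impedance mismatch between the two measurement wires converts a GND-injected signal into a differential error, and how a low-frequency bias component passes straight through a simple noise filter.

```python
import numpy as np

fs = 10_000                                    # sample rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)

sensor = 0.50 * np.sin(2 * np.pi * 5 * t)      # legitimate 5 Hz sensor signal (V)
injected = 3.0 * np.ones_like(t)               # attacker bias injected on the shared GND wire (V)

# With perfectly symmetric impedance the GND interference would appear identically on
# both measurement wires and cancel as common-mode noise; with a mismatch, a fraction
# leaks into the differential measurement.
z_pos, z_neg = 1.00, 0.90
leak = (z_pos - z_neg) / (z_pos + z_neg)       # ~5% leakage
differential = sensor + leak * injected

# A simple moving-average "noise filter" removes high-frequency noise but passes the
# low-frequency/static bias component essentially untouched.
kernel = np.ones(50) / 50
measured = np.convolve(differential, kernel, mode="same")

print(f"static bias surviving the filter: {np.mean(measured - sensor):+.3f} V")
```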
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators', Authors' and Presenters' superb NDSS Symposium 2025 Conference content on the organization's YouTube channel.
The post NDSS 2025 – PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation appeared first on Security Boulevard.
Formerly “AI shy” cyber pros have done a 180 and become AI power users, as AI forces data security changes, the CSA says. Plus, PwC predicts orgs will get serious about responsible AI usage in 2026, while the NCSC states that, no, prompt injection isn’t the new SQL injection. And much more!
Here are five things you need to know for the week ending December 19.
Who woulda thunk it?
Once seen as artificial intelligence (AI) laggards, cybersecurity teams have become their organizations’ most enthusiastic AI users.
That’s one of the key findings from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, published this week.
“AI in security has reached an inflection point. After years of being cautious followers, security teams are now among the earliest adopters of AI, demonstrating both curiosity and confidence,” the report reads.
Specifically, more than 90% of respondents are assessing how AI can enhance detection, investigation, or response processes: 48% are already testing AI security capabilities and another 44% plan to do so within the next year.
“This proactive posture not only improves defensive capabilities but also reshapes the role of security — from a function that reacts to new technologies, to one that helps lead and shape how they are safely deployed,” the report adds.
(Source: “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, December 2025)
Here are more findings from the report, which is based on a global survey of 300 IT and security professionals:
“This year’s survey confirms that organizations are shifting from experimentation to meaningful operational use. What’s most notable throughout this process is the heightened awareness that now accompanies the pace of [AI] deployment,” Hillary Baron, the CSA’s Senior Technical Research Director, said in a statement.
Recommendations from the report include:
“Strong governance is how you create stability in the face of rapid change. It’s how you ensure AI accelerates the business rather than putting it at risk,” reads a CSA blog.
For more information about using AI for cybersecurity:
Do the classic pillars of data security – confidentiality, integrity and availability – still hold up in the age of generative AI? According to a new white paper from the Cloud Security Alliance (CSA), they remain essential, but they require a significant overhaul to survive the unique pressures of modern AI.
The paper, titled “Data Security within AI Environments,” maps existing security controls to the AI data lifecycle and identifies critical gaps where current safeguards fall short. It argues that the rise of agentic AI and multi-modal systems creates attack vectors that traditional perimeter security simply cannot address.
Here are a few key takeaways and recommendations from the report:
"The foundational principles of data security—confidentiality, integrity, and availability—remain essential, but they must be applied differently in modern AI systems," reads the report.
For more information about securing data in AI systems:
Is your organization still treating responsible AI usage as a compliance checkbox, or are you leveraging it to drive growth?
A new prediction from PwC suggests that 2026 will be the year companies finally stop just talking about responsible AI and start making it work for their bottom line.
In its “2026 AI Business Predictions,” PwC forecasts that responsible AI is moving "from talk to traction." This shift is being driven not just by regulatory pressure, but by the realization that governance delivers tangible business value. In fact, almost 60% of executives in PwC's “2025 Responsible AI Survey” reported that their investments in this area are already boosting return on investment (ROI).
To capitalize on this trend, PwC advises organizations to stop treating AI governance as a siloed function, and to instead take steps including:
“2026 could be the year when companies overcome this challenge and roll out repeatable, rigorous responsible AI practices,” the report states.
For more information about secure and responsible AI use, check out these Tenable resources:
If you thought ransomware activity felt explosive in recent years, the U.S. Treasury Department has the receipts to prove you right.
Ransomware skyrocketed between 2022 and 2024, a three-year period in which reported incidents and ransom payments grew sharply compared with the previous nine years.
The finding comes from the U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024.”
Between January 2022 and December 2024, FinCEN received almost 7,400 reports tied to almost 4,200 ransomware incidents totaling more than $2.1 billion in ransomware payments.
By contrast, during the previous nine-year period – 2013 through 2021 – FinCEN received 3,075 reports totaling approximately $2.4 billion in ransomware payments.
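Put another way, the annualized figures make the acceleration clear. A quick back-of-the-envelope comparison of the two periods cited above (simple arithmetic on the reported totals, not a FinCEN calculation):

```python
# Back-of-the-envelope comparison of the two FinCEN reporting periods,
# using the payment totals cited above.
recent_total, recent_years = 2.1e9, 3   # 2022-2024
prior_total, prior_years = 2.4e9, 9     # 2013-2021

recent_rate = recent_total / recent_years   # ~$700M per year
prior_rate = prior_total / prior_years      # ~$267M per year
print(f"annualized ransomware payments rose roughly {recent_rate / prior_rate:.1f}x")  # ~2.6x
```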
The report is based on Bank Secrecy Act (BSA) data submitted by financial institutions to FinCEN, which is part of the U.S. Treasury Department.
(Source: U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024,” December 2025)
Here are a few key findings from the report:
How can organizations better align their financial compliance and cybersecurity operations to combat ransomware? The report emphasizes the importance of integrating financial intelligence with technical defense mechanisms.
FinCEN recommends the following actions for organizations:
For more information about current ransomware trends:
SQL injection and prompt injection aren’t interchangeable terms, the U.K.’s cybersecurity agency wants you to know.
In the blog post “Prompt injection is not SQL injection (it may be worse),” the National Cyber Security Centre unpacks the key differences between these two types of attacks and argues that understanding them is critical.
“On the face of it, prompt injection can initially feel similar to that well known class of application vulnerability, SQL injection. However, there are crucial differences that if not considered can severely undermine mitigations,” the blog reads.
While both issues involve an attacker mixing malicious "data" with system "instructions," the fundamental architecture of large language models (LLMs) makes prompt injection significantly harder to fix.
The reason is that SQL databases operate on rigid logic where data and commands can be clearly separated via, for example, parameterization. Meanwhile, LLMs operate probabilistically, predicting the "next token" without inherently understanding the difference between a user's input and a developer's instruction.
“Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt,” the blog reads.
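To make the contrast concrete, here is a minimal sketch of the parameterization point using Python's built-in sqlite3 module (the table, column names, and input values are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"

# Vulnerable: user data is concatenated into the command, so it can rewrite the query.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Parameterized: the driver keeps data and command separate; the input is only ever data.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(rows_bad)   # returns every row: the injection succeeded
print(rows_good)  # returns nothing: no user is literally named "bob' OR '1'='1"
# There is no equivalent of "?" placeholders inside an LLM prompt, which is the NCSC's point.
```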
So how can you mitigate the prompt injection risk? Here are some of the NCSC’s recommendations:
For more information about AI prompt injection attacks:
The post Cybersecurity Snapshot: Cyber Pros Emerge as Bold AI Adopters, While AI Changes Data Security Game, CSA Reports Say appeared first on Security Boulevard.
Other noteworthy stories that might have slipped under the radar: Trump could use private firms for cyber offensive, China threat to US power grid, RaccoonO365 suspect arrested in Nigeria.
The post In Other News: Docker AI Attack, Google Sues Chinese Cybercriminals, Coupang Hacked by Employee appeared first on SecurityWeek.
Google is shutting down its dark web report tool, which was released in 2023 to alert users when their information was found on the dark web. However, while the report sent alerts, Google said users found that it didn't give them clear next steps to take when their data was detected.
The post Google Shutting Down Dark Web Report Met with Mixed Reactions appeared first on Security Boulevard.
StackHawk co-founder and CSO Scott Gerlach has spent most of his career running security teams, and his take on application security is shaped by a simple reality: developers are still too often the last to know when their code ships with risk. Gerlach explains why that gap has widened in the age of modern CI/CD,..
The post Why AppSec Can’t Keep Up With AI-Generated Code appeared first on Security Boulevard.
A Stanford study finds the ARTEMIS AI agent beat most human pen testers in vulnerability discovery—at a fraction of the cost.
The post For $18 an Hour Stanford’s AI Agent Bested Most Human Pen Testers in Study appeared first on Security Boulevard.
Lefteris Tzelepis, CISO at Steelmet/Viohalco Companies, was shaped by cybersecurity. From his early exposure to real-world attacks at the Greek Ministry of Defense to building and leading security programs inside complex enterprises, his career mirrors the evolution of the CISO role itself. Now a group CISO overseeing security across multiple organizations, Lefteris brings a [...]
The post CISO Spotlight: Lefteris Tzelepis on Leadership, Strategy, and the Modern Security Mandate appeared first on Wallarm.
The post CISO Spotlight: Lefteris Tzelepis on Leadership, Strategy, and the Modern Security Mandate appeared first on Security Boulevard.
Dec 19, 2025 - Jeremy Snyder - New beginnings, such as new years, provide a nice opportunity to look back at what we have just experienced, as well as look forward to what to expect. 2022 was a year of transition in many ways, and 2023 may well be the same. I wanted to reflect on some of those transitions from a few different perspectives:
* The market and the world
* Venture capital
* Cybersecurity
* FireTail
WHAT IS GOING ON WITH THE MARKET?
ECONOMIC TRANSITION:
2022 started with a strong macroeconomic outlook, after a massive positive swing in 2021, but then delivered a sharply negative performance, down 35% for the year:
S&P North America Tech Index performance, 2022
The “Internet” sector (if you can call that one sector) performed even worse for the year, down 45%:
Dow Jones Internet composite index performance, 2022
One interesting observation is that the correction on the internet side happened in late Q1 and through Q2, with roughly flat performance over the second half of the year.
The consensus by the end of the year was that the overall economic situation in 2022 was…weird. Layoffs in the tech sector started partway through the year and continued until the very last days of 2022. Yet the unemployment rate in the USA remained a very low 3.5%, and tech companies still found it difficult to find good job candidates.
For most people, the worst aspect of the economic changes in 2022 will be the return of aggressive inflation.
TRANSITION IN THE WORLD AROUND US
It was also a transition year of “back to normal”, following the possible end of the COVID-19 pandemic.
* Travel got back to almost pre-pandemic levels
* The return to the office started, in preparation for an expected 90% return in 2023. Side note - this also led to the quiet quitting syndrome.
* The peace index was a mixed bag, with increased peace in 6 regions, but continued conflict in MENA, and of course the unprovoked and unjust Russian invasion of Ukraine.
WHAT ABOUT VENTURE CAPITAL?
A lot of innovative companies get their start with the support of venture capital, as does FireTail. In general, venture capital (VC) follows macroeconomic trends. So as you might expect, VC did indeed slow down in 2022. But there is nuance, according to TechCrunch:
> "In the second quarter of 2022, global venture totals dipped, but inside of that slowdown is a shift away from the super-late-stage deals that helped push the value of VC deal-making to all-time highs last year."
AND VENTURE CAPITAL FOR CYBERSECURITY?
Just like the macroeconomic climate, there are adjustments going on.
> "I would say it's more about a year of change, reacting to new realities, figuring out what a new normal looks like. In the end, start-up valuations are based on what the public market is doing. Even acquisitions, M&A activities, are going to follow what’s happening on the public markets."
> "We’ve seen public market valuations grow so quickly and then drop so quickly, and we're still figuring out what the new normal will be. There’s still a lot of uncertainty. I don't think any of us really knows what the rest of the year will look like or what the new normal will be. It’s all part of the ebb and flow of the economy." - Will Lin
That question of valuations rising and falling is especially striking, coming out of 2021. There were a number of so-called “unicorns” created in cybersecurity in 2021. Rumors and whispers, even at the time, suggested that many of these companies hadn’t come close to the $100M in recurring revenue implied by the unwritten rule of valuing companies at roughly 10 times their revenues. And what happens to these companies now - are they zombies?
> “We were also skeptical of some of these unicorns, with some receiving a $1B+ valuation at the same time we were hearing rumors of $5M ARR.” - The Cyber Why
How does this all match up? From one author and former industry analyst:
> “June 2022 is the most bizarre month I’ve ever seen. June announced both three new cybersecurity unicorns and 1500 employees laid off from 9 cybersecurity vendors in the same month.” - The Cyber Why
Another analysis shows a decline in cyber funding, so there’s an open question about what that means moving forward. But there’s also contradictory evidence showing a lot of cybersecurity activity in Q4 of 2022.
WHAT’S THE STATE OF CYBERSECURITY HEADING INTO 2023 THEN?
Cybersecurity is still a high priority. Is cybersecurity recession-proof? Perhaps.
> "First, our world is growing smarter and more technological by the minute. For example, the adoption of cloud and artificial intelligence technologies is rapidly increasing. As a result, our reliance on all things cyber to power our society and its critical infrastructure is on an extremely fast pace. Companies are using more and more devices connected to the internet. Information technology budgets have ballooned. A market correction may slow progress, but it will not reverse this trend."
> "A more connected society is also a more vulnerable one. These developments increase the attack surface for cybercriminals to exploit vulnerabilities and result in an increase in the frequency and severity of hacks, especially against critical infrastructure. With more technology and connectivity, there comes greater investment in cybersecurity." - Michael Steed, Paladin Capital
This is echoed by most people in the cybersecurity industry, especially those who have spent decades in the space. Recently, NightDragon held their annual kickoff event, where they wrapped up 2022 and gave some thoughts about 2023. Some of the highlights from their analysis include the following:
* The continuation of 'cyber super cycles', meaning periods of mass investment both from financial backers (VCs and private equity firms) and from customers purchasing cybersecurity products and services.
* 2022 was overall a record year for VC investment ($19B+) and M&A ($118.5B) in cybersecurity.
* Operational technology (aka OT; think power grids, electricity generators, elevators, HVAC, etc.) is a top area for investment in 2023. The Colonial Pipeline incident sparked concern, and there are now three companies earning more than $100 million annually in OT security. The data intelligence in this space comes from these systems reporting to central locations, and that reporting is done over APIs.
* 2022 also brought additional government involvement, which will spur regulation, new initiatives, perhaps offensive capabilities and almost certainly more spending.
* 2022 was anecdotally the first year of “best in suite” prioritization for customers, meaning that customers focused less on buying the single best solution for any one attack vector and more on a broader category, choosing a blend of depth and breadth. The average number of security vendors is 75 and needs to come down; 5 is not realistic, but somewhere between 40 and 50 is probably more manageable.
* At the same time, increased cloud adoption and evolving application architectures brought very high complexity, and a difficult-to-monitor attack surface. As a result of this, companies started to experience technical debt in cybersecurity defense. This is currently the case in cloud security. In fact, the analysis here posits that cloud security is the number one need for enterprises in 2023. Enterprises have realized that cloud transformation is mandatory, and they need to refactor applications to get the cloud value and agility that they desire.
WHAT’S THE STATE OF API SECURITY?
Stay tuned. We’re putting together our analysis of the current state of API security, and some predictions for API security in 2023. We’ll be releasing that report soon.
WHAT’S THE STATE OF FIRETAIL?
This is the easiest transition to address - the state of FireTail is great! Admittedly, it’s easiest to adapt to market changes when you’re a young company, as we are. Fun fact - we officially incorporated on February 11, 2022. We enter 2023 having hit a number of great milestones for a young company:
* $5 million in seed funding raised earlier this year.
* Our team grew to 10 employees as of January 2023.
* The FireTail.app platform is live in production with customers.
WHAT’S NEXT FOR FIRETAIL?
We continue to push forward. We’re in a good position to expand beyond our current cohort of initial design partners in late Q1 2023. We also firmly believe what Dave DeWalt said during the NightDragon session:
> “Great companies get started during down cycles.”
Bob Ackerman’s quote also resonated with us:
> “Cyber is not a pick-up game; be committed or go home.”
We agree. We are also mindful of the macroeconomic environment around us. To that end, it’s always been part of our ethos to focus on security and customer outcomes first, and financial outcomes second. We believe that making our preventative API security middleware free and open source is the right thing to do, and we stand by that decision. If that means that many organizations will use it for its ability to block bad API calls, and never pay us, we accept that and still believe that it is a good outcome.
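As a purely generic illustration of the "block bad API calls inline" idea described above (this is not FireTail's middleware or its API, and the route and schema are hypothetical), a preventative check can sit in front of handler code and reject malformed requests before they execute:

```python
# Hypothetical request-validation middleware sketch (NOT FireTail's library).
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

# Assumed toy schema: the fields a valid POST body must contain, and their types.
EXPECTED = {"username": str, "amount": (int, float)}

@app.before_request
def block_bad_calls():
    """Reject malformed POST bodies before any handler runs."""
    if request.method == "POST":
        body = request.get_json(silent=True)
        if not isinstance(body, dict):
            abort(400, description="body must be a JSON object")
        for field, typ in EXPECTED.items():
            if field not in body or not isinstance(body[field], typ):
                abort(400, description=f"invalid or missing field: {field}")

@app.route("/transfer", methods=["POST"])
def transfer():
    # Only reached if the middleware check above passed.
    return jsonify(status="accepted")
```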
IS THERE ANYTHING WE CAN SHARE ABOUT THE FUTURE DIRECTION OF FIRETAIL’S TECHNOLOGY?
So much of our strategy is around solving security challenges for our customers. We will continue to produce versions of the FireTail middleware library as our customers need, and make sense for us to provide. And we will continue to expand its functionality as we learn of new attack vectors. We are also believers in examining a domain space holistically, so it’s not shift-left or shift-right; nor is it shift-left and defend-right, it’s:
> Shift everywhere.
STAY TUNED.
Please also check out my recent video where I discuss some of the themes covered in this blog in more detail.
The post FireTail’s 2022 Review on Macro, Industry, and Thoughts About What’s Next – FireTail Blog appeared first on Security Boulevard.
