The Dawn of Autonomous Skies: Balancing Innovation and Governance in Fighter Jets
In the rapidly evolving landscape of military technology, autonomous weapon systems (AWS) are no longer the stuff of science fiction. Recent milestones, such as the Baykar Bayraktar Kizilelma’s integration of advanced AESA radar systems and the Anduril Fury’s maiden flight, highlight a new era where unmanned fighter jets operate with unprecedented independence. These platforms promise to revolutionize air combat, but they also raise profound questions: How do we govern AI in split-second decisions? If traditional human oversight isn’t feasible, how do we ensure trustworthiness? And what does this mean for the doctrines shaping future air forces? This article explores these critical issues, arguing that conceptualizing robust AI governance is as vital as the technological achievements themselves.
Breakthroughs in Autonomous Fighter Jets
The past few months have seen remarkable progress in autonomous aerial vehicles designed for combat. Turkey’s Baykar Technologies has been at the forefront with the Bayraktar Kizilelma, an unmanned combat aerial vehicle (UCAV) engineered for full autonomy. On October 21, 2025, the Kizilelma completed its first flight equipped with ASELSAN’s MURAD-100A AESA (Active Electronically Scanned Array) radar, demonstrating capabilities like multi-target tracking and beyond-visual-range (BVR) missile guidance. This radar integration enhances sensor fusion, allowing the jet to process vast amounts of data in real time for superior situational awareness. Earlier tests in October also included successful munitions strikes, underscoring its role as a “pure full autonomous fighter jet.”
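The core idea behind this kind of sensor fusion can be sketched in a few lines: combining noisy estimates of the same quantity from different sensors, weighted by how precise each sensor is. The sketch below uses inverse-variance weighting, the building block of Kalman-style track fusion; all numbers and sensor names are illustrative inventions, not figures from the MURAD-100A or any real system.

```python
# Minimal illustration of sensor fusion by inverse-variance weighting.
# The measurements and variances below are invented for illustration.

def fuse(measurements):
    """Fuse (value, variance) pairs into a single estimate.

    Each sensor's reading is weighted by the inverse of its variance,
    so more precise sensors dominate the fused track.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * m for (m, _), w in zip(measurements, weights)) / total
    variance = 1.0 / total  # fused estimate is tighter than any single sensor
    return value, variance

# Example: a radar reports target range 10.2 km (variance 0.04) while an
# electro-optical sensor reports 10.0 km (variance 0.01). The fused track
# leans toward the more precise sensor and carries lower uncertainty.
est, var = fuse([(10.2, 0.04), (10.0, 0.01)])
print(round(est, 3), round(var, 4))
```

The payoff is that the fused variance is smaller than either input variance, which is why multi-sensor platforms can build tighter tracks than any single sensor allows.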

Across the Atlantic, Anduril Industries’ Fury (officially YFQ-44A) is making waves in the U.S. Air Force’s Collaborative Combat Aircraft (CCA) program. On October 31, 2025, the Fury achieved its first flight just 556 days after design inception, a record pace for such advanced systems. This high-performance, multi-mission Group 5 autonomous air vehicle (AAV) is built for collaborative autonomy, meaning it can team up with manned fighters to extend reach and lethality. Powered by AI, it handles complex tasks like navigation, threat detection, and engagement without constant human input.

These developments aren’t isolated; they’re part of a global trend in which nations such as the United States and Turkey invest in AWS to gain strategic edges. Sensor fusion — combining data from radars, cameras, and other sources — enables these jets to outperform human pilots in data-processing speed. However, this autonomy comes at a cost: the erosion of traditional safeguards.
The Governance Dilemma: No Room for Humans in/on the Loop?
In high-stakes scenarios like dogfights, where decisions must be made in seconds, incorporating human oversight — such as “human-in-the-loop” (where a person approves every lethal action), “human-on-the-loop” (supervision with override capability), or “human-in-command” (broad strategic control) — becomes impractical. Data link capacities simply can’t transmit the massive volumes of real-time sensor data to a remote operator and receive approvals fast enough. As one analysis notes, the latency in communication could mean the difference between victory and defeat.
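The bandwidth argument above can be made concrete with a back-of-envelope calculation. Every figure below is an invented order-of-magnitude assumption (not a real platform or datalink specification), but the conclusion is robust to large changes in the numbers:

```python
# Back-of-envelope check of the data-link argument. All values are
# illustrative assumptions, not real platform or link specifications.

sensor_rate_gbps = 5.0       # raw multi-sensor output, gigabits per second
link_capacity_mbps = 50.0    # assumed beyond-line-of-sight datalink capacity
round_trip_latency_s = 0.5   # satellite relay plus human reaction time
decision_window_s = 2.0      # assumed time available in a close-in engagement

# Seconds needed to ship one second of raw sensor data to a remote operator:
transfer_s = (sensor_rate_gbps * 1000.0) / link_capacity_mbps
total_s = transfer_s + round_trip_latency_s

print(f"transfer alone: {transfer_s:.0f} s per second of sensor data")
print(f"fits in decision window: {total_s <= decision_window_s}")
```

Even if the link were a hundred times faster or the data heavily compressed, shipping full sensor feeds out for approval still misses a two-second decision window, which is the structural reason remote human-in-the-loop control breaks down at this tempo.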
This gap poses significant challenges to AI governance. Without human intervention, how do we ensure compliance with international humanitarian law (IHL), such as distinguishing between combatants and civilians or assessing proportionality in attacks? Reports from organizations like Human Rights Watch highlight risks: AI systems might misinterpret data, leading to unintended harm, and the lack of accountability undermines moral and legal frameworks. Geopolitical tensions exacerbate this, as an arms race in AWS could lead to instability, with nations deploying systems that escalate conflicts autonomously.
The United Nations has discussed lethal autonomous weapons systems (LAWS) extensively, emphasizing the need for “meaningful human control” (MHC). Yet, as a 2024 UN report summarizes, definitions and enforcement remain contentious, with concerns over civilian risks and ethical legitimacy dominating debates.
Building Trustworthiness in Ungoverned Skies
If direct human oversight isn’t viable, alternative mechanisms must emerge to ensure trustworthiness. One innovative approach could involve using digital twins — virtual replicas of the physical systems and environments — to enable simulation-based human oversight prior to deployment. By creating these high-fidelity models, operators can run pre-mission scenarios where AI behaviors are scrutinized and refined under human guidance, predicting outcomes and embedding ethical constraints without compromising real-time autonomy. Rigorous testing in these simulated setups, incorporating diverse threat landscapes, can enhance system predictability and reduce unforeseen risks.
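The digital-twin idea above amounts to replaying many randomized scenarios against a model of the autonomy stack and counting breaches of a human-reviewed constraint before the system ever flies. The sketch below is a deliberately toy version of that loop: the stand-in policy, scenario fields, and thresholds are all hypothetical, and the policy contains an intentional flaw (it ignores civilians) so the simulation campaign has something to surface.

```python
# Sketch of simulation-based oversight with a digital twin: replay
# randomized engagement scenarios against a model of the autonomy stack
# and count violations of an embedded, human-reviewed constraint.
# Every field, threshold, and policy here is a hypothetical stand-in.

import random

def autonomy_policy(scenario):
    """Toy stand-in for the jet's engagement logic, with a deliberate
    flaw: it checks identification confidence but ignores civilians."""
    return scenario["target_confidence"] > 0.9

def run_campaign(n_scenarios, seed=0):
    """Run seeded scenarios and count breaches of the constraint
    'never engage when civilians are present'."""
    rng = random.Random(seed)  # seeded, so reviews are reproducible
    violations = 0
    for _ in range(n_scenarios):
        scenario = {
            "target_confidence": rng.random(),
            "civilians_nearby": rng.randint(0, 3),
        }
        if autonomy_policy(scenario) and scenario["civilians_nearby"] > 0:
            violations += 1
    return violations

print("constraint breaches found in simulation:", run_campaign(10_000))
```

Because the campaign is seeded, human reviewers can reproduce any violating scenario exactly, inspect why the policy engaged, and require a fix before deployment; that repeatable scrutiny is where the human oversight lands in this model.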
International agreements could play an important role. Proposals for treaties banning fully autonomous lethal systems, similar to those on landmines, aim to mandate some level of human involvement. However, enforcement is tricky; nations might prioritize military advantage over ethics. Hybrid models, where AI handles tactical decisions but humans define “rules of engagement” parameters beforehand — potentially validated through digital twin simulations — offer a middle ground.
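The hybrid model described above can be sketched as a data structure: humans fix rules-of-engagement parameters before the mission, and the onboard AI consults them at machine speed without being able to alter them. The field names, thresholds, and coordinates below are illustrative inventions, not any real doctrine.

```python
# Sketch of the hybrid model: humans set rules-of-engagement parameters
# before takeoff; the onboard AI checks them at machine speed in flight.
# All field names and values are hypothetical illustrations.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: parameters cannot be altered in flight
class RulesOfEngagement:
    min_target_confidence: float   # identification threshold
    max_collateral_estimate: int   # acceptable collateral-damage estimate
    weapons_free_zone: tuple       # (lat_min, lat_max) corridor, degrees

def may_engage(roe, confidence, collateral, lat):
    """Tactical decision stays onboard, but only inside the human-set box."""
    return (confidence >= roe.min_target_confidence
            and collateral <= roe.max_collateral_estimate
            and roe.weapons_free_zone[0] <= lat <= roe.weapons_free_zone[1])

roe = RulesOfEngagement(0.95, 0, (39.0, 40.0))
print(may_engage(roe, confidence=0.97, collateral=0, lat=39.5))   # True
print(may_engage(roe, confidence=0.97, collateral=2, lat=39.5))   # False
```

Making the parameters immutable in flight is the point: the split-second decision is automated, but the boundaries of that decision were set, and could be simulation-tested, under human control beforehand.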
Reshaping Air Force Doctrines for an AI Era
The integration of autonomous jets like Kizilelma and Fury will fundamentally alter air force doctrines. The U.S. Air Force’s Doctrine Note 25–1 on Artificial Intelligence, released in April 2025, anticipates AI’s role in operations across competition, crisis, and conflict. It emphasizes “trusted and collaborative autonomy,” where AI augments human capabilities rather than replacing them entirely.
Future doctrines might shift toward “mosaic warfare,” where swarms of autonomous assets create adaptive, resilient networks. This requires new training paradigms: pilots becoming “mission managers” overseeing AI fleets, and doctrines incorporating ethical guidelines to prevent escalation. As one expert panel discussed, seamless human-machine teaming will define air power, but only if governance keeps pace.
For global powers, conceptualizing AI governance isn’t optional — it’s essential for maintaining strategic stability. Without it, we risk doctrines that prioritize speed over ethics, potentially leading to unintended wars.
Conclusion: Achievements Beyond the Hardware
The advancements in systems like the Kizilelma and Fury are undeniable triumphs of engineering. Yet, true progress lies in addressing the governance void. By theorizing and implementing innovative mechanisms — from embedded ethics to international norms — we can ensure these technologies serve humanity, not endanger it. As air forces evolve, the conceptualization of AI governance will be the critical achievement that secures a safer future in the skies. Let’s not just build faster jets; let’s build smarter safeguards.
References
- https://turdef.com/article/aselsan-s-murad-100-a-radar-completes-first-kizilelma-flight
- https://baykartech.com/en/press/direct-hit-on-first-strike-from-bayraktar-kizilelma/
- https://www.anduril.com/article/anduril-yfq-44a-begins-flight-testing-for-the-collaborative-combat-aircraft-program/
- https://www.wired.com/story/dogfight-renews-concerns-ai-lethal-potential/
- https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
- https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131
- https://arxiv.org/html/2405.01859v1
- https://docs.un.org/en/A/79/88
- https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems/
- https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems
- https://aerospaceamerica.aiaa.org/institute/industry-experts-chart-the-future-of-ai-and-autonomy-in-military-aviation/
- https://idstch.com/technology/ict/digital-twins-the-future-of-military-innovation-readiness-and-sustainment/
- https://federalnewsnetwork.com/commentary/2025/06/digital-twins-in-defense-enhancing-decision-making-and-mission-readiness/
- https://militaryembedded.com/ai/cognitive-ew/from-swarms-to-digital-twins-ais-future-in-defense-is-now
The Dawn of Autonomous Skies: Balancing Innovation and Governance in Fighter Jets was originally published in Coinmonks on Medium.