The Military Use of AI
Challenges and Opportunities for Taiwan
Cheng-Hung Hsu | 2024.10.22
Increasing cross-strait tensions between China and Taiwan suggest the need for the latter to look into the advantages offered by new defence approaches. With a defence budget vastly outpaced by China, Taiwan’s Overall Defense Concept recognises a need to look to cutting-edge technologies to offset scale imbalances.
Key challenges in the use of AI include: data-security concerns; ethical implications of autonomous weapons; unpredictability and unreliability of systems; high costs of implementation and maintenance; potential for increased use of armed force; issues of accountability and responsibility; risks associated with deepfake technology; and the danger of over-reliance on AI systems leading to a loss of traditional human military skills.
In terms of opportunities, the paper discusses the potential benefits of AI in enhancing situational awareness, improving command-and-control capabilities, and enabling advanced simulations for military training and strategy development. AI-powered systems can process vast amounts of data quickly, providing comprehensive battlefield intelligence and enabling faster, more accurate decision-making.
The paper emphasises the importance of carefully considering ethical, security and operational factors when integrating AI into Taiwan’s defence strategy. It suggests that effective use of AI could serve as a force multiplier for Taiwan’s military, potentially helping to offset China’s numerical and resource advantages.
The research underscores the potential of AI to significantly enhance Taiwan’s defence capabilities, while also cautioning about the risks and challenges associated with its implementation. The paper advocates for a balanced approach that maximises the benefits of AI in military applications while mitigating potential drawbacks and ethical concerns.
INTRODUCTION
Scholars may differ on the probability of a Taiwan contingency, but cross-strait tensions between Taiwan and China are rising, and Beijing refuses to exclude the use of military force. The “One China” policy that many countries have adopted limits Taiwan’s ability to rely on collective defence or alliances. Taiwan can expect US support, but as the experience of Ukraine suggests, it needs to be resolute and prepared to defend itself. US allies in the region that are friendly to Taiwan (Japan, South Korea, Australia) may become involved, but there are no legally binding agreements or authoritative policy statements that would make that a reliable planning assumption. Given limitations in resources and difficulties in resupply to the island, to deter China and defend itself, Taiwan must seek advantage from emerging technologies and strategies, such as the military use of AI, while preserving interoperability with its sole security guarantor, the US, through alignment on concepts and systems.
China’s economic growth over recent decades has given the country a defence budget more than 20 times that of Taiwan, changing the balance of military power across the Taiwan Strait. How can Taiwan respond? The answer lies in the asymmetric strategy of Taiwan’s Overall Defense Concept (ODC), which uses the small to control the big, leveraging Taiwan’s lead in cutting-edge technologies to gain advantage through the military use of AI. The transformation of modern warfare relies on the collection, organisation and manipulation of data: the ODC attempts to maximise Taiwan’s defence advantages by matching the characteristics of the battlefield environment in the Taiwan Strait to the development and application of low-cost, high-efficiency, high-quantity and high-survival weapons. The ODC envisions a large number of miniature missile assault boats, land-launched missiles, mines, attack drones and anti-armour rockets. Its effectiveness requires the use of AI to synchronise situational awareness with kinetic and electromagnetic effects. AI can enhance Taiwan’s defence and operational capabilities, serving as a multidomain force multiplier for other combat platforms and formations. This paper explores the opportunities and challenges associated with the effective use and implementation of AI in Taiwan’s military.
METHODOLOGY AND STRUCTURE
The paper’s methodology primarily relied on a review of English-language literature from 2019 to 2024. This is due in part to the scarcity of Chinese-language publications on the subject. However, the main reason for a focus on English-language and in particular US literature is that the US is the only country legally obliged to support Taiwan in securing the means for its defence and committed by leadership statements to come to its assistance. Given Taiwan’s unusual diplomatic status, alignment with US concepts and initiatives is thus of particular importance.
The paper has three main sections. The first offers a general outline of the advantages and disadvantages of military adoption of AI. The second explores starting points for the use of AI by Taiwan’s military and addresses the imperative to advance further. The third section evaluates AI prospects for Taiwan’s military and proposes solutions to overcome obstacles. The paper concludes with some key considerations for Taiwan’s government.
CHALLENGES AND OPPORTUNITIES FOR THE MILITARY USE OF AI
Coined by US computer scientist John McCarthy in 1956, the term “artificial intelligence” (AI) referred to “the potential for creating machines that could simulate human intelligence”. AI denotes a machine’s capability to perform cognitive functions typically associated with the human mind. While there are risks associated with its use in a range of industries, from healthcare to finance, its application in the military domain carries unique challenges.
RISKS AND CHALLENGES
Military use of AI presents concerns particular to the risks that accompany the use of lethal force. The disadvantages outlined below highlight the challenges and risks associated with military use of AI, and the importance of carefully considering the ethical, security, financial and human factors involved in leveraging AI technologies in defence strategies.
-
Data security: AI – like any other digital technology – can be hacked or manipulated. When systems are breached, sensitive information can be stolen and exploited, to the advantage of the adversary. In 2015, the US Office of Personnel Management suffered large-scale breaches in which confidential data on millions of federal employees and contractors was stolen. Sensitive personal information of approximately 21.5 million individuals who had undergone background checks, including 5.6 million fingerprint records, was taken. The vulnerability of systems used to store and process sensitive information raises serious concerns about the ability to safeguard the systems on which AI would depend.
-
Ethics: While using robots in combat can reduce the number of human casualties and enhance operational efficiency, ethical questions arise when considering fully autonomous robotic soldiers. The concern is whether it is morally acceptable to allow machines to make life-and-death decisions without human oversight. The South Korean military considered deploying AI-powered sentry robots along the border with North Korea. These robots were equipped with automatic targeting and firing capabilities, raising significant ethical concerns about excluding human judgement. The prospect that they could engage and kill human targets without direct human oversight sparked widespread debate. Critics argued that delegating life-and-death decisions to autonomous machines inherently undermines human dignity and the principles of just war. The use of such AI-powered “killer robots” could lead to wars being fought entirely between autonomous systems, rather than between human soldiers, posing profound ethical questions about the role of human agency in warfare. Further challenges arose regarding accountability and responsibility if a malfunction in these robots were to cause civilian casualties. The “black box” non-transparent nature of the AI algorithms controlling the systems complicated the determination of accountability.
-
Unpredictability/unreliability: In 2021, an AI-powered drone strike carried out by the US military in Kabul, Afghanistan resulted in the deaths of 10 civilians, of whom seven were children, instead of the intended Islamic State in Khorasan target. The AI algorithms powering the drone’s target identification and engagement systems had failed to adequately discriminate between military and civilian targets, leading to the terrible loss of innocent and young life. The unpredictability of AI arises from the complexity and adaptability of these algorithms, which can learn, evolve and make decisions in ways that are not fully transparent in or constrained by their initial programming. As they encounter new situations and environments, their actions become increasingly difficult to anticipate with certainty, even when their ultimate goals are known.
-
Costs and maintenance: Building and maintaining AI systems in the military sector can incur significant costs, due to the systems’ complexity. Repair, maintenance and frequent upgrades add to the financial burden. In 2019, the US Department of Defense (DoD) awarded Microsoft a $10-billion cloud computing contract as part of its Cloud Strategy; known as the Joint Enterprise Defense Infrastructure (JEDI) project, it was intended to use AI to make military cloud computing more effective. However, significant delays and cost overruns arose due to the complex engineering of the required AI systems. The “opaque nature” of the AI algorithms used in the JEDI project also “raised worries about unintended consequences” and the ability to properly test, validate and maintain these advanced technologies, which contributed to the Pentagon cancelling the contract. The need to ensure the security, reliability and transparency of these AI systems adds further to the complexity and cost of military modernisation, which can hamper the military’s efforts to integrate cutting-edge AI technologies.
-
Increased use of armed force: AI-driven information-processing systems could enable a proliferation of target information, broadening the use of force and potentially leading to more civilian casualties. According to a 2023 media report, the Israel Defense Forces (IDF) used an AI system known as Habsora (“the Gospel”) to rapidly identify and recommend targets for airstrikes in Gaza. According to the IDF, by extracting information quickly and automatically, Habsora produces targeting recommendations that match the identifications made by human analysts. Habsora and Israel’s Target Division have helped to build a database of between 30,000 and 40,000 people suspected of being Hamas or Palestinian Islamic Jihad militants. To put it another way, AI-driven systems have accelerated the compilation of a vast potential kill list. Before Habsora, it would take Israel up to a year to identify 50 potential targets in Gaza. During the May 2021 conflict, the AI system identified around 100 targets per day.
-
Accountability and responsibility: AI in the military raises both command-and-control (C2) and legal questions about accountability. Current research suggests AI systems need more rigorous testing to be validated and considered safe in risky military situations. Such systems often involve a complex network of stakeholders, including developers, data providers, users and regulators. Attributing responsibility can be challenging, as each stakeholder is likely to contribute to the operation of the system in different ways. Aside from humanitarian issues, the unpredictability of AI can, as the example above demonstrates, obscure who should be held accountable when things go wrong. The more autonomous AI systems become and the more capable they are of independent decisions, the more difficult it becomes to determine who should be held accountable for their actions: the AI system itself, its developers or its users? Developing and adapting C2 and legal structures to govern the use of AI is essential to maintain the military chain of command and establish clear lines of accountability.
-
Deepfake technology risks: In March 2023, The Intercept uncovered a US Special Operations Command procurement document suggesting that the US military is seeking to develop deepfake and other AI-based technologies to deceive its adversaries. The document showed that the military wishes to use deepfakes to mislead enemies on the battlefield. However, there is a risk that the use of deepfakes in combat could spill over into the field of foreign relations. Militaries serving societies that believe in the principle of democratic control of armed forces should be wary of capabilities such as deepfakes that can undermine trust and the principle of truth, without which their democracies cannot function.
-
Path dependency: As militaries adopt AI-powered systems, there is a risk that, over time, they will lose the institutional memory of performing tasks the “old-fashioned” way (without AI support). Ultimately, the more reliant on AI systems a military becomes, the greater the cost when those systems are penetrated.
OPPORTUNITIES
AI offers potentially significant advantages to military capability in terms of increased speed and accuracy, enhanced battlefield awareness and enhanced decision-making capabilities. Some of the examples examined below reflect bold claims, and it must be noted that technologies under development frequently fail to live up to early ambitions. However, given the immaturity and fast-moving nature of the AI field, it is appropriate to evaluate opportunities with an open mind about the potential range of utility.
AI is changing not just how the military fights, but how it works from head to tail. It potentially offers better situational understanding, faster decisions, improved targeting, less risk to military personnel, and more efficient recruitment, training and logistics, gifting militaries cost savings and advantages in planning, executing and sustaining missions. Some of these benefits are unproven, others may never materialise in the form projected, and many are accompanied by vulnerabilities and risks of over-dependence. While there is a cost–benefit balance to be struck, no matter how much AI is adopted for a force, understanding how these systems may work and shape the way an adversary thinks and acts has become indispensable.
-
Situational awareness: With AI-powered sensors and technologies, military intelligence organisations can better understand battlefield conditions and provide a more comprehensive picture of adversary capabilities and intent. US Army researchers have developed machine learning algorithms that can operate in bandwidth-constrained environments to rapidly update situational awareness. These algorithms can compress data while maintaining near-optimal performance, allowing models to be frequently retrained on decentralised data sources and helping soldiers gain real-time analysis on a rapidly changing battlefield. The US Army plans to bring AI into tactical command posts by fusing disparate data sources into a common operational picture. AI systems can predict enemy actions, identify weaknesses, assess the environment, plan missions and suggest ways to avoid problems. By rapidly analysing disparate data sources, AI can provide precise intelligence forecasts, mission-planning recommendations and situational assessments far quicker than human analysts alone.
-
Enhanced C2: AI systems allow the military to swiftly and precisely process large volumes of data and acquire nuanced insights, bringing unprecedented efficiency and accuracy to decision-making. The US DoD is creating an AI-powered system to combine data from different sensors into a single picture to help the joint force make better decisions. The US military has developed the Combined Joint All-Domain Command and Control (CJADC2) concept, which aims to increase interoperability and AI integration across all domains. By leveraging AI to rapidly collect, analyse and disseminate information on the battlefield, CJADC2 can identify and engage targets faster, creating a well-informed force capable of defeating adversaries through accelerated decision-making cycles. By employing AI algorithms for predictive analytics and scenario modelling, the military can anticipate evolving threats and make immediate strategic adjustments to ensure agile, responsive C2. Military strategists in both the US and China view this capability as one that could prove decisive in shaping the outcomes of future armed conflicts.
-
Simulations and modelling: AI enables highly realistic simulations and modelling of complex military scenarios, systems and environments. This lets militaries test new ideas, strategies and equipment in a virtual environment before trying them in the real world. For example, the US Air Force is using AI-powered digital twins and simulations to model performance and maintenance requirements of aircraft such as the F-35 fighter. This helps accelerate innovation in areas such as predictive maintenance and mission planning. By personalising training regimens, AI enables more effective preparation of forces for dynamic real-world scenarios without compromising the safety of personnel. In the long term, AI-driven simulation training could be more cost effective than other kinds of realistic training simulations.
-
Lethal autonomous weapon systems: The integration of weaponry equipped with advanced sensors and algorithms offers a significant advance in military capability. These systems autonomously identify targets while ensuring human oversight for specific engagements, thereby enhancing precision, speed and efficiency. AI-supported autonomous systems have the processing power to draw on all the data-rich systems mentioned above to form an integrated, rapidly functional “kill chain” that can predict enemy actions, identify vulnerabilities, assess environmental conditions, evaluate mission strategies and recommend mitigation plans. By anticipating threats, optimising tactics and streamlining decision-making through AI-powered analytics, military forces can stay one step ahead of their targets, increasing operational effectiveness and mission success rates.
-
Training and recruitment: The US Air Force and the US Army have used AI-powered training tools that can reduce the time needed to complete training materials by up to 40%. AI-based chatbots have been used to assist with military recruitment, for example the US Army’s chatbot, Sergeant Star, which officially came into operation in 2006. The US Navy is deploying a conversational AI system called “Amelia” to handle and resolve the most common technical support questions from naval and civilian personnel, reducing the need for human support agents. The rollout of Amelia is part of the Navy’s $136-million Navy Enterprise Service Desk initiative to update and merge more than 90 IT support centres into a unified assistance platform.
-
Logistics: AI-driven solutions hold immense potential for streamlining logistics processes within the military domain. AI algorithms can analyse historical data on equipment use, maintenance schedules and operational rhythms to predict future demand for spare parts, ammunition, fuel and so on, enabling more proactive inventory management with the aim of avoiding stock-outs or surpluses of critical supplies. The US Army is using AI for predictive logistics, leveraging the technology to help forecast and manage the supply of parts and equipment more efficiently.
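To make the predictive-demand idea above concrete, the following is a minimal, hypothetical Python sketch: it fits a regression model to synthetic usage history in order to forecast monthly demand for a spare part. The feature names, data and numbers are illustrative assumptions, not drawn from any military system or from the programmes cited in this paper.

```python
# Hypothetical sketch: forecasting spare-parts demand from synthetic usage data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: [operating_hours, days_since_maintenance, sorties_flown]
X = rng.uniform(0, 1, size=(500, 3))
# Notional monthly demand for a part, driven mostly by operating hours and sorties.
y = 20 * X[:, 0] + 5 * X[:, 2] + rng.normal(0, 1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Held-out R^2:", round(model.score(X_test, y_test), 3))
print("Predicted demand for a new usage profile:",
      model.predict([[0.8, 0.1, 0.5]])[0])
```

In practice, such a model would be trained on curated maintenance and consumption records and validated against held-out operational data before it informed any inventory decision.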
TAIWAN’S MILITARY USE OF AI
Taiwan is in a strategically central and commanding position in the Western Pacific. Its location across sea lanes connecting large economies and its proximity to the Chinese mainland make Taiwan vital in terms of defence and power projection for the major powers. Taiwan is also an important international trade hub and a partner in global supply chains.
Figure 1: Taiwan’s Location in the First Island Chain. Source: AndreaNicolini/Adobe Stock.
Figures from 2023 show Taiwan as the 17th-largest trading economy globally, with a trade volume of more than $900 billion in that year. Taiwan produces more than 60% of the world’s semiconductors and more than 90% of the most advanced ones. One Taiwanese company, TSMC, dominates the global semiconductor market, with 57% of the market share of semiconductor foundries in 2021. If Taiwan were to be attacked and production disrupted, no country could fill the void, and a halt in the manufacture and distribution of semiconductors would cause a shortage of technology products such as smartphones, computers and cars, along with company closures.
China expresses a preference for peaceful unification but has spent the past 20 years developing its military to conquer Taiwan. China’s leaders consistently communicate that Taiwan should be under Beijing’s control, and that they are willing to use force to achieve this. As Taiwan becomes more independent and less interested in unifying with China, Beijing may decide that force is the only way to achieve its political goals regarding Taiwan.
TAIWAN’S STARTING POINTS
Currently, the Taiwanese military is actively exploring and utilising AI technology to enhance military efficiency and strengthen defence capabilities. For example, the Tri-Service General Hospital, one of the largest teaching hospitals in Taiwan, receives more than 2 million visits per year. With technical support from Microsoft’s Taiwan R&D centre, the hospital has combined its medical big data with a large language model through Azure OpenAI to improve the accuracy of medical audio-visual recognition, so that records can be generated automatically by AI and help doctors and caregivers improve the quality of diagnosis and care. The hospital’s medical information team will extend the technology to transcribing reports from check-ups, outpatient clinics and surgeries and to medical image recognition, while AI-generated reports of signs, symptoms and diagnoses in areas such as electrocardiography will be shared with 13 military hospitals to improve the speed of emergency care.
China is rapidly increasing its intelligence-gathering capabilities against Taiwan, including through cyber attacks, surveillance drones and electronic warfare. A significant challenge for Taiwan is to monitor and analyse these multidomain threats. Taiwan is seeking to learn from countries such as Israel to improve its own use of these technologies to counter the growing Chinese threat. By leveraging AI algorithms for data fusion and pattern recognition, Taiwan can gain actionable insights into adversary intentions and activities. This includes using signals intelligence from communications and radar transmissions to understand China’s capabilities and intentions, and monitoring social media to detect disinformation campaigns or indications of impending military activity, enabling proactive decision-making and threat mitigation.
As cyber threats become increasingly complex and pervasive, Taiwan must strengthen its cyber-defence capabilities to protect critical infrastructure and sensitive information. Taiwan has elevated cyber security to a national security priority and is taking proactive measures through initiatives such as the National Institute of Cyber Security and the Administration for Cyber Security of the Ministry of Digital Affairs, which is developing cutting-edge AI tools to detect fraudulent online activities, analyse suspicious messages and combat misinformation and disinformation campaigns. By further leveraging AI for threat hunting, anomaly detection and incident response, Taiwan can enhance its ability to defend against cyber attacks and minimise the impact of cyber incidents on military operations.
In the ongoing Ukraine–Russia war, UAVs, especially smaller, low-cost drones adapted for combat roles, have demonstrated astonishing operational effectiveness in missions such as surveillance, targeting and precision-strike operations, enabling the outgunned Ukrainian army to deliver precise strikes and inflict significant damage on the quantitatively superior Russian army.
The ability to use missiles, drones and mines is critical for the defence of Taiwan, which needs to signal the ability to destroy invading ships and aircraft before they reach the main island. Taiwan could use drones with large payloads to attack China’s amphibious fleet, strike strategic targets and supplement crewed aircraft. Drones could also serve as missile decoys and enable dispersed operations from roads if airbases are attacked. The use of multi-layered deterrence to maintain a solid defence posture is particularly suitable for Taiwan’s national defence strategy, which assumes lean military strength and tight procurement budget allocation. Taiwan’s drone capabilities are currently inferior to those of China. To catch up, Taiwan has set up the Drone National Team programme, which by mid-2024 had brought together companies and the military to produce more than 3,200 drones.
The National Chung Shan Institute of Science and Technology (NCSIST) in Taoyuan City has indicated plans to use AI and deep learning to develop an intelligent training environment that integrates live, virtual and constructive simulation into the more than 240 simulation systems it has established for Taiwan’s army, including a drone-operation training simulator and the Brave Eagle training system. This has the potential to improve the overall effectiveness of the army’s combat power. This year, Taiwan’s Ministry of National Defense established a National Defense Innovation Group. The ministry has demonstrated its determination to strengthen its asymmetric warfare capabilities, and has begun in-depth cooperation with the US Department of Commerce to promote projects such as drone systems, anti-drone systems and military AI applications.
CURRENT USE OF AI IN CHINA’S MILITARY
China’s People’s Liberation Army (PLA) has been actively developing and deploying AI since the mid-2010s. These efforts are in line with China’s broader strategic initiatives, such as the Made in China 2025 plan from 2015 and the New Generation of Artificial Intelligence Development Plan, released in 2017.
In July 2017, China’s State Council released this plan, which outlines the country’s approach to AI development and demonstrates China’s ambition to become a world leader in AI by 2030. It stipulates that AI will be used for defence purposes, including command decision-making, military simulation and defence equipment. Since the release of the plan, China has started to use AI more extensively in its military, particularly in the following areas:
-
Unmanned combat systems: The PLA has been developing and using unmanned AI systems since 2015. These include drones for air, ground, sea and subsea operations. China has world-class capabilities in AI-dependent drone swarms with military potential. In April 2023, the PLA tested an AI system to help with artillery targeting, showing how AI is used in weapons.
-
Battlefield awareness and decision-making: The PLA is using AI to improve awareness on the battlefield and in decision-making. This means combining data from different sources to get a complete, up-to-date picture. Chinese military experts have discussed the idea of a “command brain” that would use AI to generate plans at the tactical and operational levels.
-
Multidomain operations: AI has been employed by China to conduct and coordinate actions across the domains of land, sea, air, space and cyber since at least 2020. AI helps to combine and analyse data from different domains to improve effectiveness.
-
Predictive maintenance and logistics: China is using AI to maintain equipment and improve logistics. AI can predict when equipment might fail and help with decisions about where to put resources. China has been using AI in this area since at least 2020. This is part of a broader strategy to leverage AI for military advantage and modernisation.
-
Information and electronic warfare: AI is being used in information and electronic warfare to help the PLA in modern conflict. For example, the PLA Navy Submarine Academy has awarded contracts related to AI-based adaptive beamforming techniques.
-
Simulation and training: The PLA is using AI to train its military personnel through realistic and effective simulations of warfare. The PLA has been integrating AI technologies into various aspects of military training and operations as part of a shift towards “intelligentized warfare”. This shift began gaining significant momentum around 2015, when China emphasised the importance of AI and informatisation in its defence strategy.
-
Command, control and communication: AI is being added to the PLA’s command, control and communication systems to improve military operations. This includes applications at all levels of warfare.
China has ambitious plans to employ AI in the military, but limited transparency means that it is not always clear how advanced these systems are. The PLA is still working to overcome challenges in testing, training and developing concepts for these AI-enabled systems. The PLA’s development of military AI is certainly evolving: in the near term, changes will probably be incremental, but in the long term they could be substantial.
TAIWAN’S NECESSARY TILT TOWARDS AI
Four areas that shape Taiwan’s defence are connected to the kinds of capability advantages presented by the military application of AI.
-
AI’s centrality to China–US rivalry and the PLA’s modernisation make it a major consideration in Taiwan’s defence strategy. In 2021, veteran US diplomat Henry Kissinger warned that, among other things, a mutual failure to understand each other’s AI capability could lead to conflict. China’s President Xi Jinping talked about “emerging domain strategic capabilities” at the National People’s Congress on 11 March 2024, calling on the military to deepen reforms, promote innovation and enhance emerging strategic capabilities. AI is a source as well as an instrument of conflict between Taiwan’s principal ally and its principal adversary, so even if a resource imbalance makes it difficult for Taiwan to acquire capabilities at the same level as China, there is no escaping the need for it to understand the AI capabilities of both ally and adversary, and how those are shaping the future of warfare.
-
US forces that might be available to defend Taiwan are based some distance away in Japan, Guam and elsewhere, making early warning of preparations for invasion essential to a successful defence strategy. It is equally important that decision-makers in Taiwan and the US base their judgements on sources of information that both can trust. The more AI and machine learning are adopted by the US to monitor and assess China’s intentions towards Taiwan, including any indications of a possible military build-up or preparations for invasion, the more important it becomes for Taiwan to keep pace and not allow a gap to open in terms of a common standard for indications and warnings.
-
A successful invasion of Taiwan would require large amounts of materiel to cross the Taiwan Strait, making the underwater battlefield of particular importance. It is therefore natural that Taiwan is strongly incentivised to develop autonomous underwater vehicles (AUVs) powered by AI technology. The National Sun Yat-sen University successfully built Taiwan’s first marine-specific AUV with AI capabilities in 2020. It leverages AI and deep learning for object recognition, tracking, obstacle avoidance and self-correction during underwater operations. By integrating AI into autonomous systems, Taiwan can extend its operational reach and capabilities, enabling unmanned platforms to operate in contested environments and execute missions with minimal human oversight.
-
The air domain has a twofold importance for Taiwan’s defence – as a means by which an invasion force could arrive, and as a means by which Taiwan’s own forces could be degraded or destroyed in advance of an invasion. According to the 2022 National Defense Technology Trend Assessment Report, published in late 2022 by Taiwan’s defence think tank, the Institute for National Defense and Security Research, Taiwan’s Ministry of National Defense believes that drones should be the focus of Taiwan’s asymmetric warfare weaponry development. NCSIST, the main R&D organisation of the Armament Bureau of Taiwan’s Ministry of National Defense, is developing AI-enabled weapon systems related to the use of drones.
In conclusion, the integration of AI into Taiwan’s military is already offering many opportunities to enhance operational effectiveness and strengthen deterrence. As AI continues to advance, Taiwan must invest in R&D, talent acquisition and international collaboration to harness the full potential of AI technologies and secure its position as a resilient and capable force for peace and stability in the region.
CHALLENGES AND RECOMMENDATIONS FOR TAIWAN’S FUTURE MILITARY USE OF AI
This section lays out seven principal challenges faced by Taiwan in advancing the implementation of AI for military functions, and makes recommendations for how to address them to continue its necessary tilt towards AI.
CHALLENGE 1: DATA SECURITY AND PRIVACY CONCERNS
Information has always been vital in war, but in modern warfare, information – data – is the foundation for leveraging the power of AI to deliver military advantage. Implementing AI in military operations requires vast amounts of data, including information about tactics, personnel and equipment. Military drones with the latest sensors capture live intelligence on enemy movements and send the data to command centres for analysis.
More data can only be beneficial if the problems of information overload can be managed and the data can be reliably fused into a single picture. Algorithms and machine learning find patterns, trends and problems in big data, helping to turn raw data into useful information and supporting leaders’ decision-making and planning. Taiwan’s Ministry of National Defense is implementing standardised protocols and secure cloud infrastructure to overcome information silos; this infrastructure lets data flow between different organisations, helping them work together and understand the operational landscape better.
However, ensuring the security and privacy of this data presents a significant challenge. Concerns include unauthorised access, data breaches and exploitation by adversaries. Military personnel are encouraged to share information through user-friendly tools and platforms that let people in different places work together. But it is vital to ensure sharing technology is used properly.
RECOMMENDATIONS
-
Use strong encryption to protect sensitive data both in storage and in transmission.
-
Implement secure communication protocols to ensure that data transmitted between systems remains protected from interception or tampering.
-
Restrict access to data to authorised personnel, and verify user identities before granting access to sensitive information.
-
Prioritise anonymisation and pseudonymisation techniques to de-identify sensitive data, reducing the risk of exposure in the event of a breach while still allowing for meaningful analysis and use.
-
Use secure cloud infrastructure for scalable processing with robust access controls and data isolation.
-
Apply strict data-handling protocols, such as secure deletion procedures.
-
Ensure continuous improvement based on feedback from military units, enhancing both performance and security.
Real-world example:
The US DoD’s Project Maven, which applies AI to object recognition in military imagery and video data, makes use of advanced encryption and secure cloud infrastructure to protect the sensitive data feeding its systems.
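As a minimal illustration of the encryption recommendation above – and not a description of how Project Maven or any military programme actually implements it – the following Python sketch uses the open-source cryptography library’s Fernet scheme to encrypt a placeholder record; key management is deliberately simplified.

```python
# Hypothetical sketch: symmetric encryption of a sensitive record at rest.
from cryptography.fernet import Fernet

# In practice the key would live in a hardware security module or key vault,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"sensor log: unit 7, grid 33UXP, 0412Z"   # placeholder sensitive data
token = cipher.encrypt(record)                       # store or transmit this
restored = cipher.decrypt(token)                     # only possible with the key

assert restored == record
print("ciphertext preview:", token[:24])
```

In a real deployment, keys would be generated, rotated and stored in dedicated key-management infrastructure, separate from the encrypted data they protect.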
CHALLENGE 2: TRANSPARENCY AND EXPLAINABILITY
Many AI techniques, such as deep learning, are “black boxes”, in which the decision-making process is not transparent or easily explainable to human operators and decision-makers. Interpretable AI can help to identify potential biases or errors in the decision-making process, which is crucial for high-stakes military applications.
RECOMMENDATIONS
-
Develop more transparent and interpretable AI models that allow military personnel to understand the “reasoning” behind the system’s decisions, enabling better oversight and trust in the technology.
-
Utilise research on AI explainability and visualisation techniques to improve the understandability of military AI systems. Visualisation techniques such as saliency maps and feature importance plots can help human operators quickly comprehend the factors influencing an AI system’s outputs.
Real-world example:
The US Defense Advanced Research Projects Agency’s Explainable AI (XAI) programme, which ended in 2021, aimed to produce more transparent machine learning models while maintaining high performance, applicable to military decision-support systems.
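The following is a minimal, hypothetical PyTorch sketch of the gradient-based saliency maps mentioned in the recommendations above: it highlights which input pixels most influence a classifier’s top prediction. The stand-in model, random input and class count are assumptions for illustration, not any fielded system or the XAI programme’s methods.

```python
# Hypothetical sketch: gradient-based saliency map for an image classifier.
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be the deployed model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 5),          # five notional target classes
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # placeholder input
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Back-propagate the top class score to the input pixels.
scores[0, top_class].backward()

# Saliency = magnitude of the input gradient, taking the max over colour channels.
saliency = image.grad.abs().max(dim=1)[0]
print(saliency.shape)   # one 64x64 map showing which pixels drove the decision
```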
CHALLENGE 3: ROBUSTNESS AND RELIABILITY
AI systems have been shown to be vulnerable to adversarial attacks that can drastically reduce their performance, posing risks in high-stakes military applications. Robust security measures, such as adversarial training and model hardening, can help harden AI systems against intentional attacks designed to deceive or disable them.
RECOMMENDATIONS
-
Implement robust security measures, such as adversarial training and model hardening, to protect AI models from external manipulation.
-
Continuously monitor for new vulnerabilities and respond rapidly with countermeasures; this is crucial to maintaining the reliability of military AI systems in the face of evolving threats.
-
Limit external access to critical AI models and defence techniques, which can help prevent adversaries from reverse-engineering or exploiting these systems.
Real-world example:
The US Army Research Laboratory is developing AI systems that can detect and resist adversarial attacks, particularly for image recognition systems used in military applications.
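A minimal, hypothetical PyTorch sketch of the adversarial-training idea referenced above, using the fast gradient sign method (FGSM): each training step mixes clean inputs with perturbed copies crafted to fool the current model. The toy classifier, random data and epsilon value are illustrative assumptions, not a description of the US Army Research Laboratory’s systems.

```python
# Hypothetical sketch: FGSM-style adversarial training.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Create adversarial examples with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimiser, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarial inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimiser.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy usage with a stand-in classifier and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(adversarial_training_step(model, optimiser, x, y))
```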
CHALLENGE 4: TECHNOLOGICAL LIMITATIONS AND INTEGRATION COMPLEXITY
Deploying AI systems in military environments often involves integrating complex technologies across diverse platforms and systems. Compatibility issues, interoperability challenges and limitations in AI capabilities may hinder seamless integration and adoption.
RECOMMENDATIONS
-
Design AI systems with modular and scalable architectures to facilitate integration with existing military infrastructure.
-
Prioritise interoperability standards and protocols to ensure compatibility across different platforms and systems.
-
Conduct rigorous testing and evaluation of AI systems in realistic operational environments to identify and address technological limitations and performance gaps.
-
Implement iterative development processes to refine AI algorithms and improve system capabilities over time.
-
Foster collaboration and partnerships with industry, academia and allies to pool resources, share knowledge and leverage expertise in AI research and development, accelerating technological advances and smoothing the integration of AI into military operations.
Real-world example:
The US Air Force’s Advanced Battle Management System uses a modular, open-systems approach to integrate AI capabilities across multiple platforms and domains. This approach enables rapid adaptation to new technologies and threats while maintaining interoperability across service branches and with allied forces.
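To illustrate the modular, interface-driven design recommended above, the following hypothetical Python sketch places each AI capability behind a common contract so that implementations can be swapped or upgraded without changing the host system. All class names, message fields and thresholds are illustrative assumptions, not part of any real programme.

```python
# Hypothetical sketch: AI capabilities as plug-ins behind a common interface.
from abc import ABC, abstractmethod
from typing import Any, Dict


class AICapability(ABC):
    """Common contract that every AI module must satisfy."""

    @abstractmethod
    def process(self, message: Dict[str, Any]) -> Dict[str, Any]:
        """Consume a standardised message and return a standardised result."""


class ThreatClassifier(AICapability):
    def process(self, message: Dict[str, Any]) -> Dict[str, Any]:
        # Placeholder logic standing in for a real model.
        track = message.get("track", {})
        hostile = track.get("speed_kts", 0) > 400
        return {"track_id": track.get("id"),
                "assessment": "hostile" if hostile else "unknown"}


# The host system depends only on the interface, not on any implementation.
registry: Dict[str, AICapability] = {"threat_classifier": ThreatClassifier()}

msg = {"track": {"id": "T-042", "speed_kts": 520}}
print(registry["threat_classifier"].process(msg))
```

The same idea underpins open-systems architectures: as long as a new model speaks the agreed message format, it can be dropped in without touching the host system.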
CHALLENGE 5: INSUFFICIENT TRAINING DATA
Many military applications lack the large, high-quality datasets typically required to train effective AI models using machine learning.
RECOMMENDATIONS
-
When dealing with limited training data, techniques such as data augmentation can be employed to artificially expand the dataset. This includes methods such as cropping, flipping, rotating or adding noise to existing data samples. While not a complete solution, data augmentation can help increase the diversity and size of training datasets.
-
Instead of training models from scratch on limited data, transfer learning makes it possible to take models pre-trained on large, general datasets and fine-tune them on the smaller, domain-specific military datasets. This can significantly reduce data requirements while still achieving good performance.
-
For certain military use cases such as simulations or scenario modelling, it may be possible to generate synthetic training data using rule-based systems, physics engines or generative adversarial networks. While ensuring realism is crucial, synthetic data can supplement real-world data to increase the overall training dataset size.
Real-world example:
The US Air Force’s Dataworks initiative aims to create high-quality, diverse datasets for AI training, addressing the challenge of data scarcity in developing AI for sensitive military applications.
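A minimal, hypothetical sketch combining the first two recommendations above – data augmentation and transfer learning – using PyTorch and torchvision. The pretrained backbone, class count and placeholder images are assumptions for illustration only, not a description of the Dataworks initiative.

```python
# Hypothetical sketch: data augmentation plus transfer learning on limited data.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Data augmentation: artificially expands a small training set by applying
# random transformations to each image as it is loaded.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Transfer learning: start from an ImageNet-pretrained backbone and retrain
# only the final layer on the smaller, domain-specific dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 4)     # e.g. four notional classes

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder images and labels standing in for the real (limited) dataset.
sample = Image.new("RGB", (224, 224))
images = torch.stack([augment(sample) for _ in range(8)])
labels = torch.randint(0, 4, (8,))

loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print(float(loss))
```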
CHALLENGE 6: KEEPING PACE WITH RAPID AI ADVANCES
AI is developing rapidly, making it challenging for military testing, procurement and deployment processes to keep pace with the technology.
RECOMMENDATIONS
-
Agile development allows the military to be more responsive to the fast-paced changes in AI, rather than relying on traditional, lengthy procurement processes.
-
Continuous integration and testing help ensure new AI systems are thoroughly vetted before being deployed in operational environments.
-
Investing in internal AI R&D helps the military stay ahead of the curve and develop custom solutions tailored to its specific needs.
-
Attracting and retaining skilled AI researchers and engineers ensures the military has the necessary expertise to effectively leverage the latest advances.
-
Collaborating with academic and industry partners can help the military access the latest AI innovations and knowledge.
Real-world example:
The US DoD’s Joint Artificial Intelligence Center (JAIC) uses an agile development approach, allowing for rapid prototyping and iteration of AI systems. This approach allows JAIC to keep pace with rapid advancements in AI technology while ensuring that developed systems meet the specific needs of the military.
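To illustrate the continuous-integration-and-testing recommendation above, the following is a minimal, hypothetical Python regression test of the kind a CI pipeline might run as a deployment gate for a candidate model. The stand-in model, synthetic data and accuracy threshold are illustrative assumptions.

```python
# Hypothetical sketch: an automated regression gate for a candidate AI model.
import numpy as np


def evaluate_model(predict_fn, features: np.ndarray, labels: np.ndarray) -> float:
    """Return simple accuracy of a candidate model on a held-out set."""
    predictions = predict_fn(features)
    return float((predictions == labels).mean())


def test_candidate_model_meets_accuracy_floor():
    # Stand-in "model": predicts class 1 whenever the first feature is positive.
    predict_fn = lambda x: (x[:, 0] > 0).astype(int)

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 4))
    labels = (features[:, 0] > 0).astype(int)     # placeholder ground truth

    accuracy = evaluate_model(predict_fn, features, labels)
    assert accuracy >= 0.95, f"accuracy {accuracy:.2f} below deployment floor"


if __name__ == "__main__":
    test_candidate_model_meets_accuracy_floor()
    print("candidate model passed the CI accuracy gate")
```

Run under a test framework such as pytest on every change, a gate like this blocks deployment of any model that falls below the agreed performance floor.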
CHALLENGE 7: ETHICAL AND LEGAL IMPLICATIONS
AI raises complex ethical and legal issues around autonomy, compliance with international law and accountability for AI-driven decisions and actions.
RECOMMENDATIONS
-
Develop comprehensive ethical guidelines and frameworks specific to the use of AI in military contexts. Ensure that these guidelines address issues such as human oversight, accountability and the responsible use of autonomous systems.
-
Engage with and draw on the work of international processes set up to explore and develop regulation on lethal autonomous weapon systems (LAWS) in the context of international humanitarian law, such as the Group of Governmental Experts on LAWS at the UN Office for Disarmament Affairs in Geneva.
-
Ensure military AI applications comply with national and international regulations, including regulations governing the use of force, data protection and human rights.
-
Conduct legal reviews and assessments to identify and mitigate potential legal risks.
-
Foster transparency in AI systems by providing clear documentation of their capabilities, limitations and decision-making processes.
-
Establish mechanisms for accountability, including processes for reviewing and auditing AI-related decisions and actions.
Real-world example:
The UK Ministry of Defence has established an AI Ethics Advisory Panel to provide external advice and scrutiny of its AI development and deployment, ensuring alignment with ethical principles and legal requirements.
CONCLUSION
AI appears to offer significant military advantages in terms of strategy (especially information processing and C2), operational capabilities, logistics and support. However, navigating the complexities of implementing AI in the military field requires careful consideration of not only technological capabilities but also ethical and organisational challenges.
In Taiwan’s case, where material imbalances and political dependencies dominate the risks of conflict, those responsible for the island’s defence cannot afford to neglect any aspect of AI’s potential to gain time and level the playing field. Embracing AI in the military fosters a culture of innovation and adaptability to stay ahead of emerging threats and technological advances. By investing in R&D and fostering collaboration with international partners, Taiwan can design military AI systems with modular and scalable architectures that can accommodate updates and upgrades as the technology evolves, as well as remaining interoperable with its key ally, the US.
By identifying and mitigating risks proactively, Taiwan’s military can make gains beyond enhancing its own defensive capabilities. Keeping up with the rapid pace of AI development through the recommendations in this paper will enable the military to more effectively integrate and leverage the latest AI innovations and maintain a world-class technological edge, making Taiwan a more attractive defence industry partner globally. The combination of Taiwan’s industrial, technological and skills base, its experience in AI research and experimentation, and the pressures of its defence challenges positions it to take a lead on the military application of AI.
Ultimately, Taiwan must embrace military AI because both its principal adversary and its principal ally use it, and Taiwan must stay aligned with the latter on critical defence judgements and remain interoperable with its forces. Taiwan needs AI to offset its scale disadvantages in demographics and material capability. AI-powered automation can multiply the number of platforms without the need for large personnel outlays. Gains in administrative and logistic efficiency allow scarce military personnel to focus their efforts on more critical missions and tasks, thereby maximising operational effectiveness.
Given the complex geopolitical situation and rapid advances in military AI, Taiwan faces a critical decision regarding the immediate development of AI for military use.
KEY CONSIDERATIONS
Key considerations for the Taiwanese government include:
-
Strategic necessity: AI development is likely to be crucial for Taiwan’s asymmetric defence strategy against potential threats from China. As China continues to advance its military AI capabilities, Taiwan needs to develop countermeasures to maintain a credible deterrent. AI could significantly enhance Taiwan’s surveillance, reconnaissance and early warning systems, providing a vital edge in situational awareness and response time.
-
Technological readiness: Taiwan has a strong technological foundation, particularly in semiconductor manufacturing, which could support AI development. However, it may face challenges in terms of data availability, computational power and specialised expertise for military AI applications. Careful assessment of these capabilities is necessary to ensure effective development and deployment.
-
Economic considerations: Given Taiwan’s limited defence budget compared to China’s, investing in AI could be a cost-effective way to enhance military capabilities. AI systems could potentially provide force multiplication effects, allowing Taiwan to do more with less. However, the initial investment and ongoing maintenance costs must be carefully weighed against other defence priorities.
-
Operational impact: Integrating AI into military operations will require significant changes in training, doctrine and organisational structure. Taiwan’s military will need to develop new skills and competencies to effectively utilise AI systems. Additionally, robust cyber-security measures must be implemented to protect AI systems from potential attacks or manipulation.
To mitigate risks and accelerate the development and integration of AI capabilities in the military, Taiwan should consider:
-
Focusing on specific AI applications that address critical defence needs.
-
Leveraging international partnerships, particularly with the US, for technology sharing and expertise.
-
Investing in education and training programmes to build a skilled workforce for AI development and implementation.
-
Prioritising cyber security and resilience in AI system design and deployment.
In conclusion, while there are challenges and risks associated with rapidly developing military AI, the strategic necessity for Taiwan appears to outweigh these concerns. In addition, China’s growing military capabilities, its future ambitions and the potential for AI to serve as a force multiplier make it crucial for Taiwan to consider prioritising the development of military AI applications. However, this should be done thoughtfully, with careful consideration of ethical implications, international norms and the need for robust security measures.
Cheng-Hung Hsu is the Chief of Operations Control at Taiwan’s Information, Communications and Electronic Force Command. His research interests include cyber security, defence strategy and Indo-Pacific policy. He holds a BA in Electronic Engineering and is a graduate of the Republic of China Air Force Air Command and Staff College.