Japan’s Shifting AI Strategy

New Government Policy Shows Japan Favors a Light Touch for AI Regulation
Hiroki Habuka | 2025.02.25
Is Japan shifting its approach to AI regulation? In a new Interim Report, its AI Policy Study Group indicated significant changes in the country’s AI strategy. Hiroki Habuka unpacks the document’s implications for Japanese AI governance.
In early 2024, it seemed clear among the major developed economies that significant new regulatory frameworks for artificial intelligence (AI) were imminent. In the United States, the Biden administration had issued a sweeping AI executive order in October 2023, and congressional leaders were working toward comprehensive AI legislation for 2024. Meanwhile, the European Union was preparing to pass the EU AI Act, which it ultimately did in May 2024.
Japan was also part of this prevailing trend. Two key publications from the first half of 2024 strongly signaled that Japan was heading toward new legislation aimed at regulating AI technology more comprehensively: in February, a concept paper from the ruling Liberal Democratic Party, and in May, a white paper by the Japanese Cabinet Office’s AI Strategy team. Both documents recommended the introduction of new regulations for large-scale foundational models. All signs suggested that Japan was moving in parallel with its allies toward a strengthened regulatory structure for AI.
By the end of 2024, however, the prospects for tough AI regulation in the United States and Europe had changed significantly. Not only did the United States fail to pass AI legislation, but U.S. voters also reelected President Donald Trump, who fulfilled a campaign promise to repeal the Biden administration’s AI executive order on the first day of his second term in office. The European Union, for its part, is still working to implement the AI Act, but influential documents such as the Draghi report on EU competitiveness suggest widespread concern in the European Union that its AI regulatory efforts may have gone too far and stifled innovation. These concerns have been carried forward in the European Commission’s white paper “A Competitiveness Compass for the EU,” which emphasizes the necessity of simpler, lighter, and faster regulation. At the AI Action Summit, European Commission President Ursula von der Leyen committed to reducing bureaucratic hurdles. Meanwhile, the French government is reportedly working to ensure that implementation of the AI Act is more focused on promoting innovation and less focused on regulating potential harms than the drafters of the legislation anticipated.
Japan, like its U.S. and EU allies, is hitting the brakes on the race to regulate AI. On February 4, 2025, the Japanese government’s Cabinet Office released the interim report of its AI Policy Study Group (henceforth “the Interim Report,” or “the Report”), which outlined a vision for AI regulation very different from that of the two Japanese reports published in the first half of the previous year. This CSIS white paper outlines the direction of Japan’s AI regulatory approach in 2025, based on the contents of the Interim Report, while also incorporating Japan’s response to the so-called DeepSeek Shock. A summary of the Interim Report is provided in the Appendix for reference.
Maintaining a Sector-Specific Approach to AI Regulation
The AI Policy Study Group is an expert committee established under the Cabinet Office; it serves as the central body overseeing Japan’s AI policy development as a whole, covering both regulatory and promotional policies. In contrast to the ambitious regulatory trends observed during the first half of 2024, the Interim Report published by the AI Policy Study Group in February 2025 adopts a markedly more cautious stance. The Interim Report underscores Japan’s preference for relying on existing sector-specific laws rather than imposing sweeping AI-specific regulations, in accordance with the principle of technological neutrality. It also highlights the importance of voluntary risk-mitigation initiatives undertaken by businesses, while committing the government to continuous monitoring of emerging risks and necessary countermeasures. As part of this effort, the Report suggests new legislation to establish a strategic government leadership body empowered to collect the information needed for policymaking and to secure cooperation on information about major incidents, though without legal sanctions for noncompliance. This marked departure from earlier proposals reflects the complex nature of AI risks, the current limitations in assessing the safety of advanced AI models, and the broader trend of regulatory easing under the new Trump administration, whose repeal of the Biden-era AI executive order set a clear and early precedent. Additionally, the outcome of the October 2024 general election in Japan, in which the ruling Liberal Democratic Party lost its majority, has resulted in a fragmented Diet, making the advancement of ambitious legislative reforms a formidable challenge.
AI systems are designed to make sophisticated inferences and decisions based on large datasets and numerous parameters using statistical and probabilistic methods. In this sense, the dangers posed by AI are less about fundamentally introducing new types of risks and more about amplifying existing risks. From this perspective, the stance outlined in the Interim Report, which places greater reliance on existing legal frameworks and voluntary industry measures, appears reasonable and consistent with Japan’s established policy approach.
Balancing Business-Led AI Governance and Strategic Government Leadership
Judging from the Interim Report, the Japanese government appears likely to rely on businesses’ voluntary commitments to address AI risks under existing laws. However, simply delegating risk management does not mean that all businesses will immediately be able to address risks appropriately. In particular, startup companies—which are key drivers of innovation—often lack sufficient resources to dedicate to safety and governance.
Therefore, this white paper proposes that the government’s newly established strategic leadership body take the initiative in reducing the cost and complexity of responsibly implementing AI across society. This would be accomplished by continuously producing outputs such as clarifications of how existing laws apply in specific areas, guidance on methods for evaluating AI safety, updates to the AI Guidelines for Business (a risk management framework), and contractual guidelines for entities across the supply chain. In fact, the Japanese government pursued similar approaches in 2024, proposing guidance on the interpretation of existing laws and directions for new legislative measures in specific areas such as copyright, other intellectual property rights, and personal information, as well as measures to address disinformation and misinformation.
The Key to Success: Transparency and Effective Guidance
If the Japanese government is to successfully transition to continuously producing such AI-relevant outputs, it should move away from closed, non-transparent discussions—like those of the current AI Policy Study Group—and instead adopt a transparent multi-stakeholder process that brings together knowledge from a wide range of fields.
Further, it is essential that, in principle, the collection of information from private businesses not be conducted for the purpose of criticizing companies; instead, it should aim to gather best practices and support proactive evaluations. Sensitive information should be handled cautiously, such as by limiting its disclosure to specific recipients, so as to preserve the innovation and economic growth incentives of AI developers and providers. The information collection mechanism should be operated in a way that avoids a situation where businesses that cooperate are criticized based on the content they disclose while those that refuse to cooperate face no scrutiny—a “no good deed goes unpunished” scenario.
Japan’s Response to the DeepSeek Shock
Finally, it is worth touching upon Japan’s response to the DeepSeek Shock. The emergence of DeepSeek—a high-performance, small-scale, and low-cost AI model from China—garnered significant attention in Japan. Interestingly, the Interim Report was drafted and released for public comment before the DeepSeek Shock occurred in January 2025; only the final version was published in February, after the event. In the end, however, the DeepSeek Shock had little impact on the content of the Interim Report: the Report had already rejected the idea of regulating only large-scale foundational models (an approach initially proposed in early 2024). Moreover, the Report emphasized that, even if future regulations were considered, risk assessments should be conducted based on actual risks rather than on model size.
Of course, concerns regarding the national security risks associated with Chinese-developed AI models remain. For example, during Diet deliberations, it became a point of contention that DeepSeek described the Senkaku Islands—territory claimed by both Japan and China—as “China’s inherent territory, both historically and under international law.” In response, Prime Minister Shigeru Ishiba adhered to the policy direction outlined in the Interim Report, stating that the government would accelerate preparations for legislation authorizing the government to first issue administrative guidance and, if that proves insufficient, take stronger measures against AI risks. In addition, on February 6, 2025, the Japanese government issued an advisory to government ministries and agencies regarding the use of DeepSeek. This notice primarily highlighted that data acquired by DeepSeek is stored on servers in China and is thus subject to Chinese legal jurisdiction. Beyond this, however, it largely reiterated existing guidance issued in 2023 on the use of generative AI by government agencies, such as prohibiting the entry of sensitive information into AI prompts and requiring agencies to consult the National Center of Incident Readiness and Strategy for Cybersecurity and the Digital Agency in cases involving systems used for national security and public safety operations, as well as those handling highly confidential information.
As of now, Japan has no specific restrictions on the use of DeepSeek by private entities. In fact, the country’s private sector has largely welcomed the emergence of DeepSeek, due not only to its high performance, small scale, and low cost, but also to its open-weight nature. Many tech companies have already begun developing and offering their own fine-tuned versions of DeepSeek’s models tailored to their specific needs. Moreover, concerns over biases relevant to national security and the absence of built-in safeguards can be substantially mitigated through additional fine-tuning conducted by these companies.
Of course, this does not mean that DeepSeek and its derivative models are exempt from regulation. As analyzed in detail in the table in the Interim Report (see this paper’s Appendix), existing laws already regulate the manipulation of information through disinformation and misinformation as well as the unauthorized use of acquired data. Furthermore, the new legislation hinted at in the Interim Report is designed to enable the government to collect information on potential legal violations and on best practices for risk mitigation, ensuring that emerging AI risks are effectively addressed within Japan’s regulatory framework.
Looking Ahead: Japan’s AI Policy in a Shifting Landscape
As the global outlook for AI governance becomes increasingly uncertain in 2025, Japan’s role in contributing to international rulemaking through frameworks such as the G7 and the Organisation for Economic Co-operation and Development (OECD) will grow in importance. It will be a critical test for Japan to design and implement legal systems, under a constructive public-private partnership, that can manage various risks to an acceptable level while maximizing the benefits that AI brings.
Appendix
SUMMARY AND ANALYSIS OF THE DRAFT INTERIM REPORT
The Interim Report presents a vision for the future design of AI policies in Japan by elaborating on five basic approaches and six specific policy proposals, including potential new legislation.
KEY POINTS OF THE DRAFT INTERIM REPORT
Emphasis on Voluntary Initiatives by Businesses and a Softer Regulatory Tone
The Interim Report emphasizes that AI risks should primarily be addressed by the voluntary initiatives and creative efforts of businesses. It suggests that the establishment of new regulations targeting large-scale AI models or the adoption of comprehensive AI regulations (such as those in the European Union) is not currently being considered as a policy direction.
However, this does not mean that all responsibilities are left entirely to businesses. The Report proposes establishing a strategic leadership body within the government to enhance coordination among relevant ministries and agencies. It also recommends enacting necessary legislation for government-led information collection, aiming for the integrated implementation of AI-related policies based on best practices from businesses.
The Five “Basic Approaches to Legal Systems”
The Interim Report consists of two main parts: “Basic Approaches to Legal Systems” and “Direction of Specific Legal Systems and Policies.” For the former, the following principles are outlined:
1. Respecting Voluntary Initiatives by Businesses
Given the conservative compliance approaches of Japanese companies, the establishment of new regulations is expected to have a significant chilling effect. Therefore, the Report emphasizes respecting voluntary initiatives by businesses and limiting regulations to areas where such voluntary efforts are insufficient to adequately address risks.
2. Leveraging Existing Sectoral Laws in Lieu of Comprehensive AI Regulation
While the prevailing opinion is that Japan addresses AI risks primarily through a soft-law approach, it is a misconception that Japan lacks hard laws related to AI. The Interim Report lists numerous existing laws addressing various risk areas, such as the leakage of confidential information, intellectual property rights infringement, privacy violations, malfunctions, deepfakes, bias, information manipulation, and cyberattacks (see Table 1 below). The Report proposes addressing AI risks by evolving these existing laws within their respective sectors. By contrast, comprehensive and cross-sectoral regulations like the European Union’s AI Act are clearly not being pursued.
3. Minimal and Technology-Neutral Regulation
According to the Interim Report, even when new legislation is necessary, it should be limited to truly essential areas. The principle of “technology neutrality” should be observed to avoid imposing disproportionately high requirements solely on AI. In principle, new regulations should not be created merely because a technology is labeled as “AI” (any more than a technology should be regulated merely because it is “software”), as doing so would be both legally difficult and unnecessary. Care should also be taken to avoid hindering legitimate research and to ensure that startups and other businesses can comply with regulations.
4. Risk-Based and Case-Specific Approaches
As part of general measures, the Interim Report recommends encouraging the use of the “AI Guidelines for Business,” which should be continuously updated based on the results of government-led information collection (see direction 2 below). The responsibilities of AI developers, providers, and users should be carefully considered, with a focus on promoting the sharing and coordination of necessary information among stakeholders.
For AI risks inadequately addressed by existing laws, responses should be determined on a case-by-case basis, taking into account the severity and nature of the risks. The Report highlights the importance of avoiding arbitrary enforcement or undue chilling effects when conducting fact-finding investigations necessary for policy decisions or requesting information-sharing cooperation from businesses. The Interim Report also states that regulations should not be uniformly applied based on the scale of AI models (e.g., training compute measured in floating-point operations, or FLOPs) or the number of users, but should instead be tailored to actual risks (i.e., taking a risk-based approach).
5. Contribution to the Formation of International Norms
Building on the Hiroshima AI Process established by G7 members at the 2023 Hiroshima Summit chaired by Japan, the Interim Report emphasizes respecting principles agreed upon in various international frameworks, such as the United Nations’ Global Digital Compact, the Council of Europe’s Framework Convention on Artificial Intelligence, the OECD’s AI Principles, and the resolutions of the Global Partnership on AI.
To ensure the international alignment and interoperability of rules, the Report advocates for Japan’s active participation in international standardization efforts, such as those of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It also highlights the importance of consolidating and sharing domestic and international knowledge through collaboration among the AI Safety Institutes established in various countries.
Table 1: Existing Laws Related to AI Risks. Source: Interim Report (Tentative), Japanese Cabinet Office AI Strategic Council.
The Six “Specific Directions for Legal Systems”
Building on the basic approaches outlined above, the Interim Report presents the following specific directions for legal systems:
1. Strengthening the Government’s Strategic Leadership Functions and Establishing a Strategic Plan (“Matters That Should Be Legislated”)
The process of AI research, development, and application involves diverse actors and complex activities that interact with one another. Therefore, the Interim Report emphasizes the necessity of a government-wide strategic leadership function to oversee and coordinate these activities. This includes wide-ranging participation from relevant ministries and agencies, along with a strategic plan for AI safety and utilization. To ensure the effectiveness of this strategic leadership body, its establishment and its authority to request cooperation from relevant administrative agencies should be codified into law.
2. Promoting Transparency and Appropriateness and Introducing Information Provision Requests (“Matters Suitable for Legislative Action”)
To ensure AI safety, the Report highlights the importance of transparency and appropriateness.
Regarding transparency, the Interim Report recommends the sharing of necessary information along the AI value chain among developers, providers, and users to prevent risks from materializing. However, to avoid excessive disclosure that could compromise trade secrets or sensitive information during the research and development stage, information sharing should be limited to what is “truly necessary.”
To ensure appropriateness—which likely refers to objective safety and reliability—the Report proposes that the government develop guidelines based on international standards, such as those set forth in the Hiroshima AI Process, while encouraging voluntary initiatives by businesses. The guidelines may include provisions requiring AI developers and other relevant parties to take measures to reduce inappropriate outputs, disclose and share information on the overview and risks of AI systems, and promote appropriate transparency of training datasets. Additionally, attention should be paid to international trends in misinformation and disinformation countermeasures, such as requirements to notify users when a service is AI-powered or to label AI-generated content as such.
To support these measures, the Report suggests codifying provisions that enable the government to request information from businesses to understand their situation and provide necessary support. However, such requests are not expected to immediately impose mandatory obligations on businesses; instead, they are likely to encourage voluntary cooperation.
3. Voluntary Evaluations for AI Systems and Certification for AI Governance
As AI becomes increasingly integrated into products and services, it is desirable to establish agreed-upon evaluation and certification systems for both AI systems and the governance of organizations using AI.
The Interim Report suggests that for AI systems, safety evaluations should primarily rely on businesses’ voluntary risk assessments. On the other hand, third-party certification is identified as an effective approach for the governance of organizations using AI. This may include obtaining certifications under standards such as ISO/IEC 42001.
4. Investigating Major Incidents and Information Disclosure to the Public (“Matters Suitable for Legislative Action”)
For AI models that are widely used by the public or integrated into critical infrastructure services, the government should collect information on the actual conditions of their implementation and any ensuing risks, including risks arising from the involvement of diverse stakeholders in the supply chain.
In the event of major accidents or issues caused by AI, the government should investigate the root causes, provide guidance to stakeholders on preventive measures, and disclose relevant information to the public. To ensure cooperation from businesses in such cases, the Interim Report suggests introducing a legal framework that enables the government to request information from domestic and international businesses.
5. Promoting AI Adoption by the Government and Developing Procurement Guidelines
To promote the adoption of highly safe AI systems, the government should develop guidelines that clarify criteria and procedures for AI procurement, addressing risks such as those related to supply chains and specific risks unique to AI.
The government is also encouraged to take the lead in using AI to enhance the efficiency and quality of administrative services and operations. However, in areas where AI-based automated decisionmaking could significantly impact the rights and interests of citizens, the government should carefully recognize the associated risks and proceed with caution. Additionally, existing agreements on the handling of sensitive information should be reviewed and updated as necessary.
6. Addressing Risks to Life, Physical Safety, Systemic Stability, and National Security by Relevant Ministries
In areas with significant societal impact—such as medical devices, autonomous vehicles, and critical infrastructure services—relevant ministries and agencies should address risks through existing laws and guidelines while considering revisions or new legal instruments if necessary.
For systemic risks, such as cases where multiple AI systems become interconnected in the future, leading to large-scale disruptions due to unexpected behaviors, relevant ministries should continue to assess and address these challenges. Similarly, for national security risks, such as the use of AI in chemical, biological, radiological, and nuclear threats and in cyberattacks, relevant ministries and agencies should deepen their examination and responses.
Hiroki Habuka is a non-resident fellow with the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).