THE AUTHOR:
Reza Eftekhar, Senior Legal Adviser at the Iran-United States Claims Tribunal (“IUSCT”)
Introduction
When discussing the intersection of arbitration and artificial intelligence (“AI”), a great majority of arbitration practitioners and academics immediately think of the application of AI by the parties or by the arbitral tribunal in the arbitration process (See, e.g., here). Some also envision AI functioning as an arbitrator (See, e.g., here).
However, far less attention has been devoted to scenarios where the AI itself is the factual subject matter of a dispute that is submitted to arbitration. For instance, disputes could arise out of an investment for the provision of AI services by a tech company to a foreign state or a state entity.
In the context of investment treaty arbitrations, given the vast investments made by tech companies in developing AI systems on a cross-border level on the one hand, and states’ increasing regulation of the use of AI in their territories on the other, the materialisation of AI-related investment treaty disputes is a very likely, if not an already existing, scenario.
Since investment treaties were not drafted, or have not been revisited, with the crystallisation of AI technology in mind, the introduction of such investments into the realm of ISDS could raise complex questions of jurisdiction, merits, and damages.
The purpose of the present article is to examine the possibility of the emergence of ISDS disputes pertaining to foreign investments in the AI sector. A subsequent contribution on AI-related ISDS will address the key legal issues that may arise in an investment treaty tribunal’s analysis of its subject-matter jurisdiction in such disputes.
Foreign Investment in the AI Sector
Foreign investment in the AI sector can take many forms: apart from more traditional approaches, such as acquiring shares in a local AI enterprise or forming joint ventures with local tech companies, there are more modern ways to invest across borders in this sector. For instance, there is now an increasing number of foreign investment contracts with states or state entities to provide AI-related services, such as the automation of public administration (e.g., judicial systems, transportation networks, energy grids) or the development of integrated smart city platforms and AI-powered cloud infrastructure (See, e.g., here).
In addition, providers of general-purpose AI models allow anyone around the world with access to their platforms to subscribe to their AI systems. These providers may generate income from various sources, such as AI integration services or subscription fees. In certain jurisdictions, operating such AI platforms may require obtaining a license from the host country's authorities and/or installing local server infrastructure. In such situations, it is arguable that the tech company has made an investment, within the meaning of the applicable investment treaty, in the territory of the target country.
The AI investment operations described above rely on powerful computer systems. Thus, to carry out their projects and business activities, AI investors require data centres. Official UN reports point out that foreign investment in data centres has increased substantially. A recent UNCTAD report indicates that data centres accounted for more than 20% of global greenfield investments in 2025. The report notes that this marked increase in investments in data centres is primarily driven, inter alia, by an increasing demand for AI infrastructure. This trend, in turn, confirms the increase in various forms of foreign investment in the AI sector.
Whether an AI investment, with its various asset components, is a protected investment over which an investment treaty tribunal may assert subject-matter jurisdiction will be discussed in a subsequent contribution.
States’ Growing Regulation of the AI Sector
The current legal framework governing AI-related issues is marked by insufficient rulemaking. In certain jurisdictions, this is a deliberate policy choice: administrations intentionally prioritise unconstrained innovation in AI and retain flexibility to adapt to further AI advancements. In other jurisdictions, significant efforts have been made to regulate the AI sector. (For further analysis and a comparison of the various approaches, see here.)
However, globally, one can observe a prevailing trend towards regulating AI. The EU AI Act is the first comprehensive legislative framework that regulates the use of AI across all sectors and activities within the EU. Certain countries have also recently passed general AI laws. For instance, Italy’s National AI Law came into force on 10 October 2025. South Korea has also followed suit by enacting the Framework Act on the Development of Artificial Intelligence and Establishment of Trust, which entered into force on 22 January 2026. These legislative texts prohibit deploying AI for certain activities and condition its deployment in other activities on compliance with certain statutory obligations (See, e.g., Articles 23-26 of the EU AI Act).
To be sure, in order to protect human rights and the public interest, safeguard national security, preserve traditional jobs, and shield domestic companies from being crowded out by superior foreign competitors, host states may enact general legislation and employ a series of specific regulatory measures. These include: introducing limitations on AI training and data inputs; establishing mandatory data-sharing obligations with the public and the government; imposing transparency and disclosure obligations with respect to algorithms (e.g., the obligation to disclose the source code or the decision-making logic); suspending or blocking AI subscription accounts, platforms, or activities; and revoking AI licenses.
Furthermore, another critical factor is likely to drive significant AI-related legislation and regulatory measures in the future: AI-related technologies can have vast environmental impacts. First, operating AI systems requires powerful servers that consume substantial amounts of electricity, which in turn increases greenhouse gas emissions. Second, AI-related data centres can consume appreciable amounts of water to keep their systems cool, placing a strain on local water resources. Third, continuous advances in hardware technologies generate large amounts of obsolete electronic devices as waste, which may contain toxic substances. Taken together, these factors render AI infrastructure a critical environmental challenge.
In this regard, the Chief of the Mitigation Branch in UNEP's Climate Change Division recently stated that:
[D]ata centres place growing demand on electricity systems and can also affect water resources – depending on their design, cooling technology and location. While energy demand is rising across all AI data centres, water impacts vary greatly, with some relying on water-based cooling, particularly in hot or water stressed regions […] [I]n 2024, global data centres were estimated to have consumed 415 terawatt-hours of electricity – approximately 1.5 per cent of global electricity. This consumption is predicted to double by 2030. And, of course, when this power is generated from fossil fuels, we have a major emissions issue.
Given that many jurisdictions require technology investors to establish local data centres (See, e.g., Article 37 of the Cybersecurity Law of China), AI-related environmental risks may raise legitimate concerns that trigger regulatory intervention by the recipient state.
Confronted with the environmental impacts of AI infrastructure, states may take action to abide by their international and municipal law climate change obligations (For a comprehensive treatment of states' respective international obligations and responsibilities, see ICJ's Advisory Opinion on "Obligations of States in respect of Climate Change"). In particular, states may adopt measures against AI data centres established by foreign investors. These steps could range from closing data centres or banning their activity to less dramatic measures such as imposing fines or increasing taxes based on the extent of emissions. Other measures may include requiring investors to replace their current equipment with more sustainable, energy-efficient, and climate-friendly alternatives.
In other situations, countries may restrict foreign investment in sensitive areas such as AI. High-technology industries feature prominently on the investment agendas of sovereign wealth funds ("SWFs") (Herdegen 434), and developed countries with leadership in AI technologies seek to adopt robust policies that foreclose or limit SWFs' access to these cutting-edge technologies through foreign investment. For example, Executive Order 14083 (2022) in the United States renders the US FDI screening procedure more responsive to possible national security risks related to advanced technologies, such as AI. Although the Order does not preclude FDI in the AI sector altogether, it heightens scrutiny of, and places potential limitations on, proposed foreign investments in AI, a technology "that [is] fundamental to national security". This has particular significance where the investment treaty in question extends the treaty's relative standards of protection, such as most-favoured-nation ("MFN") treatment and national treatment ("NT"), to the pre-establishment phase. In such cases, the putative investor may file a claim against the host state for the alleged breach of the treaty's MFN or NT clause (See, e.g., Articles 14.4 & 14.5 of the USMCA (2018)). The viability of such a claim would depend, inter alia, on public policy exception clauses (See, e.g., Article 14.16 of the USMCA (2018)).
Impact of States’ Regulatory Measures on Foreign Investments in the AI Sector and Materialisation of ISDS Disputes
The magnitude of foreign investments in various AI-related sectors, coupled with states’ growing regulation of AI, heightens the likelihood of the materialisation of investor-state disputes under investment treaties.
While various factual scenarios may engage multiple investment treaty standards, this article confines the analysis to two prominent examples: (i) fair and equitable treatment; and (ii) indirect expropriation.
- Where foreign investors undertake AI-sector investments in jurisdictions where AI is not yet specifically regulated, they structure their investments in reliance on a legal framework that lacks dedicated AI compliance requirements. However, once strict AI laws and regulations are enacted by the host state's legislative and regulatory authorities, the investor's legitimate expectations may be frustrated, and the regulatory stability on which the investment was premised may be undermined. For example, once a state characterises the use of AI in certain economic activities as high risk, the investor may be required to fulfil statutory obligations entailing substantial additional expenditure not accounted for at the time of investing. Such a regulatory approach may erode the enterprise's competitive advantages and impair the profitability of its investments. Other state regulatory measures may adversely impact economic activities in the field by hampering the operational capacity of the AI investor, for instance where the state requires the investor to replace its data centre equipment with more climate-friendly infrastructure. Faced with such state measures, aggrieved AI investors may avail themselves of an applicable investment treaty to lodge an arbitration case against the host state on the international law plane. Since, as certain tribunals have held, "the most important function of the fair and equitable treatment standard is the protection of the investor's [...] legitimate expectations" (Electrabel v. Hungary [7.75]), subsequent regulatory changes negatively affecting the AI sector may form the basis of a claim by the investor alleging a breach of the FET standard of the treaty.
- In other instances, it may be the investor's property rights that are affected by states' regulatory measures. For instance, where the host country requires transparency in algorithms, the investor's alleged intellectual property rights may be exposed to the risk of undue and unauthorised technology transfer. If such exposure leads to a substantial deprivation of the economic value of the investment, the AI investor may assert that the host state has indirectly expropriated its investment. (For the substantial deprivation test for indirect expropriation, see Telenor v. Hungary [64]-[65])
Concluding Remarks and Future Outlook
Foreign investments in the AI field are on the rise. Meanwhile, states are actively seeking to keep pace with AI developments, inter alia, by enacting legislation and adopting regulatory measures. These legislative and regulatory measures may have detrimental effects on foreign investments in the AI industry, which in turn may culminate in the materialisation and proliferation of investment treaty disputes in the AI sector.
The upshot of these developments could be that the traditional focus of ISDS cases on oil, gas, and energy production, telecommunications, mining, and construction will gradually shift towards technology- and AI-related cases.
ABOUT THE AUTHOR
Dr. Reza Eftekhar is a Senior Legal Adviser at the IUSCT. In this capacity, he deals with public international law and contractual interstate arbitrations. He is also a practitioner in international investment and commercial arbitrations. He has written widely on international commercial and investment treaty arbitration and has been a speaker at numerous seminars on these subjects. He is the author of the book "The Role of the Domestic Law of the Host State in Determining the Jurisdiction ratione materiae of Investment Treaty Tribunals: The Partial Revival of the Localisation Theory?" (Brill 2021). He holds a Ph.D. in International Dispute Settlement from Leiden University.
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.




