Data Article

The legal standing of artificial intelligence as a legal subject in the modern era: A normative review in the perspective of Indonesian positive law and global comparative law

DOI: https://doi.org/10.55942/pssj.v6i5.1813

Highlight

  • Examines whether AI can be treated as a legal subject.
  • Finds AI fails the core criteria of legal subjecthood.
  • Classifies AI as a legal object under Indonesian positive law.
  • Shows legal responsibility remains with developers, operators, and users.
  • Recommends a dedicated Indonesian AI law with a risk-based approach.

Abstract

The rapid advancement of Artificial Intelligence (AI) raises a fundamental question in legal science: Can AI be recognized as an independent legal subject? This article examines AI's legal standing within Indonesian positive law and benchmarks it against selected global regulatory frameworks, with primary reference to the EU Artificial Intelligence Act of 2024. This study employs a normative juridical methodology, combining statute, conceptual, and comparative law approaches, applied through a structured four-criterion evaluative framework: rechtsbekwaamheid (capacity to hold rights), handelingsbekwaamheid (capacity to perform legal acts), accountability, and consciousness/free will, deployed consistently across all analytical sections. Primary legal materials include the Civil Code (KUH Perdata), Law No. 28 of 2014 on Copyright, the Electronic Information and Transactions Law (UU ITE) and its amendments, the Personal Data Protection Law (UU PDP), and Regulation (EU) 2024/1689. The findings confirm that AI fails all four framework criteria and cannot be recognized as a legal subject, either as a natural person (natuurlijke persoon) or as a legal entity (rechtspersoon). Under the UU ITE, AI is classified as an 'electronic agent,' and legal responsibility remains vested in its developer, operator, or user. While international scholarship has proposed quasi-legal subject and electronic person concepts, this article critically evaluates these positions rather than merely cautioning against them, concluding that neither is suitable for incorporation into Indonesian positive law at the current stage of technological development, as both risk displacing corporate accountability. This article recommends that Indonesia urgently enact a dedicated AI statute adopting a risk-based approach, affirm AI as a legal object, establish an independent regulatory authority, and ensure robust protection of fundamental human rights.

1. INTRODUCTION

The emergence of Artificial Intelligence (AI) has permeated virtually every dimension of human life, from recommendation algorithms on e-commerce platforms and machine-learning-based medical diagnostics to autonomous vehicles operating without human drivers. Within the legal domain, the presence of AI presents unprecedented challenges: when an algorithmic system makes decisions, creates works, or causes harm to third parties, who bears the legal responsibility?
This question inevitably leads to a fundamental issue in jurisprudence: Can AI be categorized as an independent legal subject? Within the traditional framework of law, legal subjects are exclusively natural persons (natuurlijke persoon) and legal entities (rechtspersoon) that are considered capable of bearing legal rights and obligations. In contrast, AI constitutes a technological entity whose legal standing remains unclear and unsettled under Indonesian positive law and across most global jurisdictions.
The existing literature has primarily addressed AI from technological and philosophical perspectives (Russell & Norvig, 2020). While several Indonesian studies have examined AI's legal status in specific domains, such as copyright (Rama et al., 2023), civil liability (Simbolon, 2023), and electronic transactions (Ravizki & Yudhantaka, 2022), a critical gap remains in the scholarship: no study has applied a systematic, multi-criteria evaluative framework that comprehensively tests AI against the established doctrinal prerequisites of legal subjecthood under Indonesian positive law, while simultaneously benchmarking Indonesia's regulatory position against the EU, the United States, and other leading jurisdictions.
This study addresses this gap. Its core novelty lies in the construction and application of a four-criterion evaluative framework drawn from classical civil law doctrine to assess AI's legal standing in a structured and reproducible manner. The framework's consistent application across all analytical sections ensures that conclusions are derived from doctrinal criteria rather than ad hoc assertions. The article also critically examines, rather than merely cautions against, the opposing theoretical positions of quasi-legal subjects and electronic person status for AI, thereby engaging substantively with the scholarly debate.
This article pursues four research objectives: (1) to construct and apply an evaluative framework for assessing legal subjecthood under Indonesian positive law; (2) to analyze whether AI qualifies as a legal subject or legal object under that framework; (3) to critically assess the concepts of quasi-legal subject and electronic person in global academic discourse; and (4) to formulate legal policy recommendations for Indonesia in addressing the regulatory challenges posed by AI.
Further clarification is necessary because the debate over AI’s legal standing is often distorted by the practical sophistication of contemporary systems. Generative models can draft contracts, produce legal summaries, classify personal data, generate images, support medical triage, and recommend financial decisions; however, these outputs do not create juridical capacity. A system may display functional autonomy while remaining dependent on data, design choices, deployment context, and human-defined objectives. Therefore, contemporary AI governance increasingly treats AI as a socio-technical system rather than an isolated machine. The National Institute of Standards and Technology (NIST) AI Risk Management Framework stresses that AI risks arise across the entire lifecycle of design, development, deployment, monitoring, and use, while UNESCO's ethical framework places accountability, transparency, privacy, and human oversight on the actors who design and operate AI systems (NIST, 2023; UNESCO, 2022).
This distinction is important under Indonesian law. If legal analysis confuses operational autonomy with legal personality, responsibility may be shifted away from the natural persons and legal entities that can prevent, control, insure, and remedy AI-related harm. The same concern is reflected in broader international governance discourse. Floridi and Cowls (2019) identified beneficence, non-maleficence, autonomy, justice, and explicability as convergent principles of ethical AI; none of these principles requires treating AI as a rights-bearing subject. Similarly, the EU AI Act adopts a risk-based regulatory model that imposes obligations on providers, deployers, importers, distributors, and other human or corporate actors, rather than conferring personhood on AI systems (European Parliament and Council of the European Union, 2024). The U.S. Copyright Office's (2023) guidance also confirms the continuing importance of human authorship in assessing AI-assisted works, reinforcing the view that machine output does not automatically translate into legal authorship or ownership.
Accordingly, the issue examined in this article is not whether AI is technologically impressive, economically valuable, or socially disruptive. These propositions are already evident. A narrower and more legally decisive question is whether AI satisfies the doctrinal thresholds required for recognition as a legal subject. This article deliberately separates capability from capacity, autonomy from accountability, and machine agency from legal personality. Such separation allows Indonesian positive law to respond to AI development without prematurely abandoning the civil law distinction between subjects and objects. It also prevents regulatory reform from being framed as a binary choice between legal personhood and a legal vacuum. A more coherent approach is to keep AI within the category of legal objects while strengthening the duties, liability standards, audit obligations, and remedial mechanisms imposed on the persons and institutions that control it.
This positioning is necessary because Indonesian courts and regulators will encounter AI disputes before statutes are enacted. Contract automation, automated credit assessment, educational analytics, and AI-assisted public administration can generate disputes over causation, evidence, consent, discrimination, and compensation. A doctrinally clear baseline helps ensure that judges do not resolve these disputes by rhetorical analogy alone but by identifying the controlling legal actor and the relevant statutory duty.

2. RESEARCH METHODOLOGY

This study employs a normative legal research methodology, focusing on a systematic examination of primary, secondary, and tertiary legal materials (Muhaimin, 2020). Three complementary approaches were adopted. The statute approach involves analyzing relevant positive law instruments, including the Civil Code (KUH Perdata/BW) (Indonesia, 1847), Law No. 28 of 2014 on Copyright (Indonesia, 2014), Law No. 1 of 2024 amending UU ITE (Indonesia, 2024), Law No. 27 of 2022 on Personal Data Protection (Indonesia, 2022), and Regulation (EU) 2024/1689 (European Parliament and Council of the European Union, 2024), to ascertain the normative position of AI under each.
The conceptual approach applies a four-criterion evaluative framework (rechtsbekwaamheid, handelingsbekwaamheid, accountability, and consciousness/free will) to assess whether AI satisfies the doctrinal prerequisites of legal subjecthood. This framework is consistently applied in Sections III(A) and III(B) to ensure analytical coherence.
The comparative law approach benchmarks Indonesia's regulatory framework against four jurisdictions selected based on their legal relevance to the AI governance debate: the European Union (the world's first comprehensive AI statute), the United States (sectoral approach with influential soft law), China (ideologically oriented regulation), and Japan (early human-AI coexistence framework). The comparison criteria are as follows: regulatory model, AI legal status, accountability framework, binding force, and regulatory gap. Selection was limited to jurisdictions with substantive published AI governance instruments, ensuring a systematic rather than anecdotal comparison.
Secondary legal materials were drawn from SINTA-accredited and Scopus-indexed academic journals published between 2020 and 2025, as well as from doctrinal legal literature. The analysis was conducted through qualitative-descriptive inquiry, applying systematic, grammatical, and teleological interpretive techniques.
To avoid treating comparative law as a merely descriptive exercise, the materials were analyzed through a controlled doctrinal matrix. Each legal instrument was first read to identify whether it recognized AI as an entity capable of holding rights, performing legally relevant acts, bearing responsibility, or exercising morally relevant will. The results were then tested against the four criteria used in this study. This procedure makes the comparison reproducible and prevents a selective reliance on isolated policy statements. This is also consistent with the structure of contemporary AI governance, where risk classification, lifecycle control, documentation, and human oversight are increasingly used to allocate responsibility to identifiable legal actors (European Parliament and Council of the European Union, 2024; NIST, 2023; Novelli et al., 2024).
This study distinguishes binding sources from soft law and ethical instruments. Statutes, regulations, and government regulations were treated as primary legal materials because they create enforceable obligations. Circular letters, guidelines, ethical recommendations, and institutional frameworks were used as secondary or supporting materials because they clarified policy direction but did not create independent causes of action. This distinction is important in the Indonesian context, where AI ethics guidance already exists, but a dedicated AI statute has yet to be enacted.

3. RESULTS AND DISCUSSION

3.1. Evaluative Framework for Legal Subjecthood: Doctrinal Criteria
A legal subject (rechtssubject) is any entity capable of bearing legal rights and obligations. Within the continental civil law tradition adopted by Indonesia, two categories of legal subjects are recognized: natural persons (natuurlijke persoon) and legal entities (rechtspersoon) (Prananingrum, 2014). This section constructs a four-criterion evaluative framework derived from this doctrinal tradition and applies it consistently throughout the analysis.
The four criteria are as follows: (1) rechtsbekwaamheid, the capacity to hold rights and obligations, inherent in natural persons from birth (Article 2, KUH Perdata) and conferred on legal entities by statute; (2) handelingsbekwaamheid, the capacity to perform legal acts, acquired by natural persons at majority and exercised by legal entities through authorized organs; (3) accountability, the capacity to bear civil, criminal, and administrative liability; and (4) consciousness and free will (vrije wil), the moral agency that grounds culpability and is the constitutive element of legal personhood (Jaya & Goh, 2021). Together, these criteria constitute the analytical test applied to AI in the following section (see Table 1).

Table 1. Four-Criterion Evaluative Framework Applied to AI

Criterion | Doctrinal content | Does AI satisfy it?
(1) Rechtsbekwaamheid | Capacity to hold rights and obligations | No; no Indonesian statute confers independent rights on AI
(2) Handelingsbekwaamheid | Capacity to perform legal acts | No; all legal acts are attributed to the controlling principal
(3) Accountability | Capacity to bear civil, criminal, and administrative liability | No; responsibility vests in the developer, operator, or user
(4) Consciousness/free will (vrije wil) | Moral agency grounding culpability | No; AI lacks genuine volition

Source: Author's compilation based on KUH Perdata (Indonesia, 1847), UU ITE (Indonesia, 2024), and Jaya and Goh (2021)

3.2. The Standing of AI in Indonesian Positive Law: Object, Not Subject
Applying the four-criterion framework to prevailing Indonesian legislation confirms that AI occupies the position of a legal object, not a legal subject, under positive law. This conclusion is substantiated systematically under each criterion below.
Regarding rechtsbekwaamheid, AI holds no independent rights or obligations under any existing Indonesian statute. Law No. 11 of 2008 on Electronic Information and Transactions (UU ITE) and its amendments construct AI as an ‘electronic agent’, a component of an electronic system designed to perform automated actions, which is positioned as an instrument, not a rights holder. The operator is legally responsible for the operation of the electronic agent.
On handelingsbekwaamheid: AI has no capacity to perform legal acts in its own right. In the field of intellectual property, Article 1(3) of Law No. 13 of 2016 on Patents stipulates that an inventor must be ‘a person or persons’, explicitly referring to humans (Indonesia, 2016). Likewise, Article 1(2) of Law No. 28 of 2014 on Copyright requires that a creator be an individual, a group of individuals, or a legal entity (Indonesia, 2014). The U.S. Copyright Office (2023) has consistently refused to recognize copyright in works generated by AI without meaningful human creative involvement, a position echoed in the Indonesian context by Rama et al. (2023). Any act performed by an AI system is legally attributed to the principal controlling it.
Regarding accountability, Government Regulation No. 71 of 2019 on the Operation of Electronic Systems and Transactions (PP PSTE), in Articles 3(1) and 3(2), expressly states that the operator of an electronic agent bears responsibility for its operation and administration (Indonesia, 2019). This normatively confirms that legal responsibility does not attach to the AI itself but to the legal entity under whose authority it operates. AI cannot be made a defendant in civil or criminal proceedings.
On consciousness and free will: Although AI can simulate human reasoning and behavior through algorithms, it lacks the moral agency that is a prerequisite for independent legal accountability. This is the most fundamental criterion on which AI fails: without genuine volition, AI cannot be culpable.
Ravizki and Yudhantaka (2022) confirm that the discourse on adopting an ‘artificial person’ concept analogous to corporate personhood has not yet materialized in Indonesian law. AI in Indonesia is unequivocally classified as a legal object under all four criteria of the evaluative framework.

3.3. Quasi-Legal Subject and Electronic Person: Critical Assessment
While prevailing positive law does not recognize AI as a legal subject, global academic discourse has advanced two alternative theoretical propositions that warrant critical, rather than merely cautionary, engagement. The first is the quasi-legal subject: the proposition that AI systems with high levels of autonomy may be treated as entities with limited and conditional legal capacity for specific juridical purposes, such as commercial transactions or intellectual property ownership. Drawing on Mocanu’s (2022) gradient legal personhood model in Frontiers in Robotics and AI, proponents argue that as AI autonomy increases, a graduated conferral of legal capacity offers a pragmatic mechanism for managing legal uncertainty.
The analytical strength of this position lies in its flexibility: by conditioning legal capacity on function and risk level, it avoids the binary rigidity of full legal personhood while accommodating technological complexities. However, its critical weakness is the accountability displacement. A quasi-legal subject framework risks creating a juridical intermediary, the AI entity itself, that absorbs legal claims and shields the corporate actors who designed, deployed, and profited from the AI system. Therefore, this concept raises a justice concern: it may systematically advantage corporations at the expense of injured parties.
The second concept is the electronic person (e-person). This concept envisages a functional-conditional form of AI legal personality, differentiated by the function, purpose, and capabilities of the AI system. Its theoretical appeal lies in the possibility of imposing direct obligations on AI systems, creating a regime analogous to strict product liability with an AI 'person' as the nominal responsible party. However, this concept was ultimately abandoned in the EU AI Act of 2024, which expressly mandates human oversight (Article 14(1)) and affirms that legal responsibility for AI-caused harm remains with the developers, operators, or parties controlling the AI, not with the AI itself (Article 57(12)). The EU's legislative rejection of electronic person status after deliberate consideration constitutes significant comparative evidence that even the world's most advanced AI regulatory framework has concluded that the accountability risks outweigh the theoretical benefits of AI legal personality.
While both concepts possess academic merit as heuristic tools for exploring the outer limits of the legal personhood doctrine, neither is currently suitable for incorporation into Indonesian law. A more compelling path is a robust human accountability framework that holds developers, operators, and users strictly liable for AI-caused harm rather than risking the accountability vacuum that quasi-personhood may create.

3.4. Legal Accountability for AI Actions: Who Bears the Responsibility?
The classification of AI as a legal object does not render the question of AI-caused harm legally intractable. Within the framework of Indonesian positive law, three accountability mechanisms apply. First, producer or developer liability may be established under Article 1365 of the Civil Code concerning unlawful acts (onrechtmatige daad) (Indonesia, 1847). Developers may be held accountable if the harm caused by AI results from design defects, negligent programming, or inadequate supervision. The principle of strict liability under Article 1367 of the Civil Code may also be invoked when the AI system is treated as analogous to a ‘supervised thing.’ Second, AI users or operators may be held accountable if they fail to supervise or deploy AI appropriately. In a corporate context, liability may attach to the legal entity operating the AI system in the course of its business activities under the UU ITE. Third, in the criminal law domain, the doctrine of indirect criminal liability may implicate individuals who intentionally design AI for unlawful purposes or allow high-risk AI systems to operate without adequate controls. The common law doctrine of vicarious liability also offers a complementary mechanism: even when an AI creator is not directly liable, the principal-agent relationship may link the AI's conduct to the operator, thereby rendering the principal accountable.

3.5. Comparative Regulatory Analysis: EU, United States, China, Japan, and Implications for Indonesia
The following comparative analysis benchmarks Indonesia against four jurisdictions across five dimensions (see Table 2).

Table 2. Comparative Regulatory Framework: AI Governance Across Selected Jurisdictions

Source: Author's compilation based on European Parliament and Council of the European Union (2024) and Ravizki and Yudhantaka (2022)

The EU AI Act (European Parliament and Council of the European Union, 2024), the world's first comprehensive AI regulation, classifies AI systems into four risk tiers (minimal, limited, high, and unacceptable) and imposes proportionate obligations accordingly. Article 14(1) mandates human oversight for high-risk AI, whereas Article 57(12) affirms that legal responsibility for AI-caused harm remains with the developer, deployer, or user. Notably, the EU expressly rejected AI legal personhood during the legislative process of the Act. The United States applies a sectoral, principles-based approach anchored by a non-binding blueprint articulating five core protections. Enforcement occurs through sector-specific agencies (FTC, FDA, EEOC), resulting in uneven coverage but preserving regulatory flexibility.
China mandates that AI development reflect national ideological values and social stability; its 2023 Generative AI Regulations impose registration and safety assessment requirements on AI service providers, reflecting state-control orientation (Ravizki & Yudhantaka, 2022). Since 2009, Japan has pursued an AI governance framework oriented toward human-AI coexistence, culminating in the 2024 AI Guidelines for Business, which emphasize transparency and human rights protection without imposing mandatory legal obligations.
Indonesia has the weakest position among the surveyed jurisdictions. Minister of Communication and Informatics Circular No. 9 of 2023 articulates nine AI ethical values but carries no binding legal force and cannot serve as a basis for legal proceedings, producing a significant legal vacuum (Kementerian Komunikasi dan Informatika Republik Indonesia, 2023). The comparative analysis confirms that Indonesia's current framework is inadequate relative to global standards and that urgent legislative action is required to address this issue.
The comparative and doctrinal findings also show that the legal-object classification of AI is not a conservative rejection of technological development but a method of preserving accountable governance. A legal system may recognize that AI systems are autonomous in a technical sense while still refusing to recognize them as autonomous in the juridical sense. Technical autonomy refers to the ability of an AI system to generate outputs without immediate human intervention. In contrast, juridical autonomy requires the capacity to hold rights, assume obligations, understand legal consequences, and participate in responsibility-bearing relationships. The evidence examined in this study indicates that current AI systems satisfy only the first meaning. They may operate with speed, scale, and predictive complexity; however, they remain dependent on human and corporate decisions concerning data collection, model architecture, training objectives, access controls, deployment conditions, monitoring, and post-deployment correction.
This distinction is especially important in terms of liability. If an AI system is prematurely treated as a quasi-subject, legal inquiries may move away from the conduct of developers, deployers, corporate managers, and users. This problem is not merely theoretical. In practice, AI harm may result from a defective design, biased training data, insufficient testing, inadequate user instructions, poor monitoring, weak cybersecurity, or reckless deployment in high-risk settings. None of these failures can be meaningfully corrected by suing the AI itself. Effective legal redress requires a defendant with assets, legal capacity, procedural standing, and the ability to change the organization’s behavior. For this reason, recent liability scholarship argues that compensation and deterrence are better served through stricter duties for providers and operators, evidentiary presumptions, and targeted liability mechanisms than by constructing artificial personhood for machines (Hacker, 2023). This supports the position of Indonesian positive law, which attributes responsibility to the person or entity controlling the electronic system.
The same conclusion can be drawn from the EU AI Act. Although the Act is the most comprehensive horizontal AI regulation currently available, it does not elevate AI systems to legal subjects. Instead, it classifies AI by risk level and distributes duties across providers, deployers, importers, distributors, product manufacturers, and authorized representatives. Its insistence on human oversight for high-risk systems confirms that regulatory responsibility remains external to the AI system itself (European Parliament and Council of the European Union, 2024). Novelli et al. (2024) further observe that risk assessment under the AI Act must be proportional and scenario-based, because the concrete risk of an AI system depends on its context of use. This insight is directly relevant to Indonesia. A general declaration that AI is or is not dangerous is insufficient; the more useful legal question is who controls the system, what risk category the system falls into, what safeguards are required, and whether the controlling actor complies with those safeguards.
The Indonesian framework already contains partial building blocks for this approach, but they are fragmented. The Civil Code can support civil claims through unlawful-act and vicarious responsibility doctrines (Indonesia, 1847). UU ITE and Government Regulation No. 71 of 2019 allow responsibility to be attributed to electronic system and electronic agent operators (Indonesia, 2024; Indonesia, 2019). The Personal Data Protection Law regulates AI-related harm involving personal data. Copyright and patent legislation preserve a human-centered understanding of authorship and inventorship. However, these instruments were not designed as an integrated AI governance regime. They do not yet provide a unified risk taxonomy, mandatory impact assessments, audit requirements, incident reporting duties, transparency obligations, or specialized remedies for victims of AI-caused harm. Therefore, the finding that AI is a legal object should not be read as a conclusion that the existing law is already sufficient. Rather, reform should be directed toward stronger accountability for human and corporate actors, not toward the recognition of AI as an independent legal person.
Intellectual property provides a concrete example. AI-generated works may appear original in a technical or aesthetic sense, but copyright law still asks whether the protectable expression originates from human authorship. The U.S. Copyright Office's guidance on AI-generated material reflects this logic by requiring applicants to disclose AI-generated content and by recognizing protection only for human-authored contributions that meet the ordinary standard of copyrightability (U.S. Copyright Office, 2023). This does not prevent humans from using AI as a tool; rather, it prevents the tool itself from becoming the author. Indonesian copyright law is broadly consistent with that orientation because its definition of creator remains tied to persons or legal entities. The implication is clear: AI may be relevant to the production process, but the legal assessment must identify the human creative contribution, legal owner, and responsible party.
Similar concerns are raised regarding personal data and public-sector decision-making. When AI is used for profiling, public services, recruitment, credit scoring, education, health, or law enforcement support, the main legal risk is not that AI lacks personhood. The main risk is that affected individuals may be unable to identify who made the decision, why the decision was made, and how to challenge it. UNESCO's Recommendation on the Ethics of Artificial Intelligence emphasizes accountability, transparency, privacy protection, and human oversight as central governance principles (UNESCO, 2022). Floridi and Cowls' (2019) principle of explicability points in the same direction: responsible AI governance requires intelligibility and accountability. In Indonesian legal reform, these principles can be translated into duties to provide notice, conduct risk and rights impact assessments, maintain audit trails, allow human review, and ensure accessible complaint mechanisms.
Consequently, the policy implications of this study are twofold. First, Indonesia should expressly affirm that AI systems are legal objects, including when they operate as electronic agents with advanced autonomy. This affirmation prevents uncertainty in courts, administrative enforcement, intellectual property registration, and contractual disputes. Second, the same statute should impose differentiated obligations on the legal subjects that develop and deploy AI systems. Low-risk applications may require only transparency and basic documentation. High-risk applications require prior risk assessment, human oversight, data governance standards, cybersecurity controls, incident reporting, independent audits, and accessible remedies. Unacceptable-risk uses, such as manipulative systems that seriously impair autonomy or discriminatory social scoring mechanisms, should be prohibited or strictly limited. This structure follows the regulatory logic of risk-based governance while remaining compatible with Indonesian civil law doctrine.
Therefore, the four-criterion framework developed in this study performs two functions. Doctrinally, it explains why AI cannot presently qualify as a legal subject: it lacks rights-bearing capacity, legal act capacity, accountability, and consciousness/free will. Normatively, it clarifies where legal responsibility should be located: in the persons and institutions that design, own, deploy, supervise, and benefit from AI systems. Rather than producing an accountability gap, the legal-object approach can serve as the foundation for a more precise accountability regime. The key is not to ask whether AI should be punished, sued, or granted rights but to ask which human or corporate actor is in the best position to prevent harm, disclose risk, supervise the system, and compensate the victim. This approach provides Indonesian law with a coherent path forward: technologically responsive but doctrinally disciplined, open to comparative learning but not dependent on importing artificial personhood.
This has evidentiary implications. AI-related disputes often involve technical opacity, meaning that victims may not easily prove the precise defect, negligent act, or causal pathway that produced the harm. Therefore, a future Indonesian AI statute should include documentation and record-keeping duties for high-risk systems, including model specifications, training data governance, testing reports, human oversight arrangements, user instructions, and post-deployment incident logs. Such duties do not make AI a legal subject; they make the conduct of the responsible legal subject visible. NIST (2023) similarly treats mapping, measuring, managing, and governing risk as organizational practices, not as obligations of the AI system.
A further implication concerns contracts. Businesses may attempt to allocate AI-related risks through service terms, platform disclaimers, indemnity clauses, or limitations of liability. Such clauses can be useful between sophisticated parties, but they should not override mandatory protections for consumers, workers, patients, students, or citizens affected by high-risk automated decisions. In this respect, legal-object classification must be paired with non-waivable duties for deployers and providers. Otherwise, AI governance will depend too heavily on private ordering and too little on public accountability. The EU model is useful not because it should be copied mechanically, but because it shows how obligations can be attached to each actor in the AI value chain according to function and risk (European Parliament and Council of the European Union, 2024).
The rejection of AI’s legal subjecthood does not preclude future doctrinal reconsideration. Legal categories can evolve when social, economic, and technological conditions justify reform. However, current AI systems have not reached the threshold that would justify displacing human-centered responsibility: their apparent independence is produced by code, data, infrastructure, and organizational choices. At the present stage, Indonesian law would gain more by clarifying who must govern AI than by speculating about whether AI can govern itself. This is the central practical lesson of the analysis: debates on personhood should not distract from enforceable rules on prevention, traceability, supervision, and compensation. These rules are urgent because AI deployment is already expanding faster than legislation.

4. CONCLUSION AND RECOMMENDATIONS

4.1. Conclusion
Applying the four-criterion evaluative framework consistently across Indonesian positive law instruments, this study draws three principal conclusions. First, AI cannot be categorized as a legal subject under prevailing Indonesian positive law. It fails all four criteria of the framework: it lacks rechtsbekwaamheid (no independent rights recognized by statute), handelingsbekwaamheid (all legal acts are attributed to the principal), accountability (responsibility vests in developers, operators, and users under Articles 1365–1367 of the KUH Perdata and the UU ITE), and consciousness/free will (it possesses no moral agency). AI occupies the position of a legal object, specifically an ‘electronic agent,’ within the regulatory regime of the UU ITE. Second, the concepts of quasi-legal subjects and electronic persons possess academic merit as theoretical heuristics, but their incorporation into positive law carries significant accountability risks. The EU's deliberate legislative rejection of electronic person status provides the most authoritative comparative evidence against its adoption at this stage of technological development. Third, in the absence of comprehensive dedicated AI legislation, legal accountability for AI-caused harm in Indonesia devolves to the natural or legal persons behind the AI system, namely its developers, operators, or users, pursuant to Articles 1365 and 1367 of the Civil Code and the UU ITE regime.

4.2. Recommendations
Based on the foregoing conclusions, this study advances the following policy recommendations. First, Indonesia should urgently enact a dedicated AI statute adopting a risk-based approach modelled on Regulation (EU) 2024/1689 (European Parliament and Council of the European Union, 2024), while unequivocally affirming that AI is a legal object and that legal accountability vests in the legal subjects who control it. The statute should incorporate the four-criterion framework as an explicit definitional test for AI’s legal status. Second, an independent AI regulatory authority should be established in Indonesia, analogous to the European Union's High-Level Expert Group on AI, with the competence to issue binding guidelines, conduct risk assessments, and adjudicate accountability disputes. Third, legal scholars and practitioners should continue to develop comparative and doctrinal studies, particularly engaging with counterarguments from quasi-legal subject and electronic person scholarship, to ensure that Indonesian law develops through rigorous intellectual debate rather than technological default.

References

European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–15. https://doi.org/10.1162/99608f92.8cd550d1

Hacker, P. (2023). The European AI liability directives: Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review, 51, Article 105871. https://doi.org/10.1016/j.clsr.2023.105871

Indonesia. (1847). Kitab Undang-Undang Hukum Perdata/Burgerlijk Wetboek voor Indonesië (Civil Code). https://jdih.mahkamahagung.go.id/legal-product/kitab-undang-undang-hukum-perdata/detail

Indonesia. (2014). Undang-Undang Nomor 28 Tahun 2014 tentang Hak Cipta (Law Number 28 of 2014 on Copyright). Lembaran Negara Republik Indonesia Tahun 2014 Nomor 266. https://peraturan.bpk.go.id/Download/28018/UU%20Nomor%2028%20Tahun%202014.pdf

Indonesia. (2016). Undang-Undang Nomor 13 Tahun 2016 tentang Paten (Law Number 13 of 2016 on Patents). Lembaran Negara Republik Indonesia Tahun 2016 Nomor 176. https://peraturan.bpk.go.id/Details/37536/uu-no-13-tahun-2016

Indonesia. (2019). Peraturan Pemerintah Nomor 71 Tahun 2019 tentang Penyelenggaraan Sistem dan Transaksi Elektronik (Government Regulation Number 71 of 2019 on the Operation of Electronic Systems and Transactions). Lembaran Negara Republik Indonesia Tahun 2019 Nomor 185; Tambahan Lembaran Negara Republik Indonesia Nomor 6400. https://peraturan.bpk.go.id/Details/122030/pp-no-71-tahun-2019

Indonesia. (2022). Undang-Undang Nomor 27 Tahun 2022 tentang Pelindungan Data Pribadi (Law Number 27 of 2022 on Personal Data Protection). Lembaran Negara Republik Indonesia Tahun 2022 Nomor 196; Tambahan Lembaran Negara Republik Indonesia Nomor 6820. https://peraturan.bpk.go.id/Details/229798/uu-no-27-tahun-2022

Indonesia. (2024). Undang-Undang Nomor 1 Tahun 2024 tentang Perubahan Kedua atas Undang-Undang Nomor 11 Tahun 2008 tentang Informasi dan Transaksi Elektronik (Law Number 1 of 2024 on the Second Amendment to Law Number 11 of 2008 on Electronic Information and Transactions). Lembaran Negara Republik Indonesia Tahun 2024 Nomor 1; Tambahan Lembaran Negara Republik Indonesia Nomor 6905. https://peraturan.go.id/id/uu-no-1-tahun-2024

Jaya, F., & Goh, W. (2021). Analisis yuridis terhadap kedudukan kecerdasan buatan atau artificial intelligence sebagai subjek hukum pada hukum positif Indonesia (Juridical analysis of the position of artificial intelligence as a legal subject in Indonesian positive law). Supremasi Hukum: Jurnal Kajian Ilmu Hukum, 17(2), 1–11. https://doi.org/10.33592/jsh.v17i2.1287

Kementerian Komunikasi dan Informatika Republik Indonesia. (2023). Surat Edaran Menteri Komunikasi dan Informatika Nomor 9 Tahun 2023 tentang Etika Kecerdasan Artifisial (Circular Letter of the Minister of Communication and Informatics Number 9 of 2023 on Artificial Intelligence Ethics). https://jdih.komdigi.go.id/produk_hukum/view/id/883/t/surat+edaran+menteri+komunikasi+dan+informatika+nomor+9+tahun+2023

Mocanu, D. M. (2022). Gradient legal personhood for AI systems—Painting continental legal shapes made to fit analytical molds. Frontiers in Robotics and AI, 8, Article 788179, 1-11. https://doi.org/10.3389/frobt.2021.788179

Muhaimin. (2020). Metode penelitian hukum (Legal research methods) (1st ed.). Mataram University Press. https://eprints.unram.ac.id/20305/

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1

Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2024). AI risk assessment: A scenario-based, proportional methodology for the AI Act. Digital Society, 3, Article 13, 1–29. https://doi.org/10.1007/s44206-024-00095-1

Prananingrum, D. H. (2014). Telaah terhadap esensi subjek hukum: Manusia dan badan hukum (Examining the essence of legal subjects: Natural persons and legal entities). Refleksi Hukum: Jurnal Ilmu Hukum, 8(1), 73–92. https://doi.org/10.24246/jrh.2014.v8.i1.p73-92

Rama, B. G. A., Prasada, D. K., & Mahadewi, K. J. (2023). Urgensi pengaturan artificial intelligence (AI) dalam bidang hukum hak cipta di Indonesia (The urgency of regulating artificial intelligence [AI] in Indonesian copyright law). Jurnal Rechtens, 12(2), 209–224. https://doi.org/10.56013/rechtens.v12i2.2395

Ravizki, E. N., & Yudhantaka, L. (2022). Artificial intelligence sebagai subjek hukum: Tinjauan konseptual dan tantangan pengaturan di Indonesia (Artificial intelligence as a legal subject: Conceptual review and regulatory challenges in Indonesia). Notaire, 5(3), 351–376. https://doi.org/10.20473/ntr.v5i3.39063

Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson. https://aima.cs.berkeley.edu/

Simbolon, Y. (2023). Pertanggungjawaban perdata terhadap artificial intelligence yang menimbulkan kerugian menurut hukum di Indonesia (Civil liability for artificial intelligence causing harm under Indonesian law). Veritas et Justitia, 9(1), 246–273. https://doi.org/10.25123/vej.v9i1.6037

UNESCO. (2022). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000381137

U.S. Copyright Office. (2023). Copyright registration guidance: Works containing material generated by artificial intelligence. Federal Register, 88(51), 16190–16194. https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence