Artificial intelligence (AI) systems, especially generative tools like ChatGPT, have revolutionized content creation. But with great power comes great responsibility: what happens when an AI, prompted by a user, generates something controversial, defamatory, or outright illegal? Who should bear responsibility, the user who prompts the content or the company that built the system? While the definitive answer is likely to evolve in the coming years, below we outline the basics of the liability regime applicable to AI-generated content in Turkey.
Although Turkey does not yet have any AI-specific legislation, content shared on the internet is governed by Law No. 5651 on the Regulation of Broadcasts via Internet and Prevention of Crimes Committed through Such Broadcasts ("Internet Law"). The Internet Law establishes a liability regime for (i) access providers, (ii) hosting providers, (iii) content providers and (iv) social network providers, but does not address AI-generated content. It is therefore necessary to determine which of these predefined categories applies to AI-generated content.
In this regard, access providers, namely the internet service providers licensed by the Information and Communication Technologies Authority ("ICTA"), have no obligations regarding the substance of content shared online; they are responsible only for blocking access pursuant to decisions of the relevant authorities. Social network providers, meaning platforms that enable users to create, view or share textual, visual or audio content for the purpose of social interaction, are directly liable for offenses committed by a user, i.e. a content provider, on their platforms if they have been notified of the unlawful content and have failed to remove it within four hours of notification.
As the definitions of access providers and social network providers do not align with platforms that provide generative AI systems ("GenAI"), the categories of hosting provider and content provider become the relevant ones for determining the applicable liability regime under the Internet Law. Hosting providers, who offer or operate systems that host services or content, are under no obligation to monitor the content on their platforms or to investigate whether it involves illegal activity. Content providers, by contrast, who create, amend or provide all kinds of information or data presented to users on the internet, are responsible for all content they make available online.
The dilemma arises when determining who is responsible for content generated by GenAI: the platform itself or the user entering the prompt. Based on the definitions above, GenAI platforms appear to fall within the definition of content providers, since they actually create, amend or provide an output based on the online content accessible to them. On the other hand, holding GenAI platforms liable for content whose output ultimately depends on the user's input seems onerous. At the same time, classifying GenAI platforms as hosting providers also does not align with the nature of GenAI: such systems generate and provide content with creative and productive elements derived from the online content accessible to them, going beyond the classical role of a hosting provider due to their direct contribution to the content.
Another point of view is that since GenAI merely produces content without any input of its own, limited to inferences drawn from certain sources and the prompt provided by the user, it cannot be classified as a content provider. However, for a GenAI platform to be classified as a hosting provider and thereby escape liability, its internal content moderation policies would also need to reflect both global and local sensitivities regarding online content. In this regard, the approach taken to the liability of social network providers under the Internet Law may shed light on how GenAI platforms could be held liable for content created from user prompts. To clarify, a liability regime could operate through a dual mechanism: the user, acting as the content provider, would bear responsibility for their prompt, while the GenAI platform, serving as the hosting provider, would be accountable for inadequate moderation of inputs that breach established guidelines or regulations. This approach would balance user accountability with platform oversight, promoting the lawful use of GenAI.
In conclusion, because GenAI continuously produces new content rather than presenting static content, a grey area emerges: such systems play a dual role as both content and hosting providers, one too complex to be confined to the customary definitions and liability mechanisms of the Internet Law. There is therefore a pressing need to regulate AI in order to address these complexities and keep pace with its rapid development.
Authors: Burak Özdağıstanli, Begüm Alara Şahinkaya, Hatice Sahranç