From Facebook to ChatGPT: Legal Responsibility for Content Generation in the AI Age

“I’m sorry, but I cannot generate content that promotes hate or encourages negative behavior. If you have any other non-harmful requests or questions, I’ll be happy to assist you.” — ChatGPT (May 2023)

AI systems like ChatGPT have undoubtedly revolutionized our interactions with technology, offering seamless natural language interactions. However, as these platforms gain popularity, an important question arises: how are they held accountable for the content they produce, particularly under strict EU laws? Responsible regulation and management of AI-generated content is crucial to prevent generated text, images, and other outputs from promoting harmful activities such as terrorism, hate speech, and racism, or from spreading misleading or false information.

The EU has been at the forefront of tackling digital challenges and ensuring online safety. The Digital Services Act (DSA) is a key regulation that sets content moderation requirements for traditional intermediaries such as social media platforms. However, the emergence of AI services like ChatGPT raises questions about their classification as intermediaries, the extent of their obligations, and whether they are subject to the same rules and responsibilities as their traditional counterparts. Like other legal questions arising from technological advancement in the digital age, the classification of AI services as intermediaries and their content liability under the DSA is a complex and evolving topic.

The DSA, of course, does not explicitly mention AI systems as intermediaries. It does, however, provide a broad definition of intermediary services: providers that store and transmit information provided by recipients of the service. These providers are categorized as “mere conduit,” “caching,” or “hosting” services.

While AI services operate differently from traditional intermediaries, generating content through algorithms and AI models rather than merely transmitting it, they may still fall within the scope of this broad definition if they meet certain criteria.

At the same time, content moderation poses unique challenges for AI services. Unlike traditional platforms where users generate the content, AI services produce content through the interplay of their models and user prompts, which makes real-time moderation considerably more complex. Nevertheless, to fulfill their legal responsibilities, AI services must adopt effective measures to promptly identify and remove illegal or harmful content.

The EU has indeed been preparing far-reaching rules since the spring of 2022 to explicitly regulate generative AI models, even when their providers are based outside the EU. These rules primarily address General-Purpose AI Systems (GPAIS) within the EU AI Act. Furthermore, recent discussions within the European Parliament indicate that AI systems like ChatGPT may be classified as “high risk”. This recognition of the legal and ethical concerns surrounding their use suggests that specific requirements and conformity assessments are likely to be imposed in the future.

In short, AI services like ChatGPT have a crucial role in ensuring that the content they generate does not promote harmful activities. As the legal landscape continues to evolve, it is vital for policymakers, legal experts, and stakeholders to engage in open and constructive dialogue.
