10/01/2025 - The EU wants to use Article 50 of the AI Act to require the labelling of deepfakes in order to protect citizens from manipulation. VAUNET is calling for practical rules: clearly label deepfakes, but do not slow down innovation in journalism.
Executive Summary
Reliable media needs leeway: VAUNET welcomes the objective of Art. 50 AI-Act to protect society against misinformation, manipulation, fraud and identity fraud caused by deepfakes. Meaningful transparency measures can help to achieve this objective. However, the implementation of transparency measures must be proportionate. This is especially important regarding the professional use of AI in journalistic-editorial media.
Efficiency and innovation potential of AI must be protected: In a generally difficult economic environment with inflationary cost increases, it is even more important for media outlets to exploit the potential for efficiency and innovation offered by AI. The realisation of this potential should not be hampered by excessive transparency or labelling requirements, especially when these could call into question the trustworthiness and value of editorial content.
A risk-based interpretation of the deepfake concept is necessary: The risk-based approach of the AI-Act must be applied when interpreting the deepfake concept in Art. 3 No. 60 AI-Act and the transparency obligation in Art. 50(4) AI-Act. Content from regulated journalistic-editorial media companies that is created and distributed using AI does not pose the risks associated with deepfakes. For professionally created journalistic-editorial media content, a narrow interpretation of the regulation is required, one that does not cling to the exact wording alone.
No transparency obligation is needed where editorial control is in place: Transparency as defined in Art. 50(4) AI-Act is not required if risks are excluded by other safeguards. If private media outlets carry out human review of content created by AI, or if editorial control and responsibility exist, there is no need for labelling or transparency obligations. The exception for press publishers under Art. 50(4) AI-Act must be equally applicable to private broadcasters in order to ensure regulatory fairness.
Differentiation between professional and manipulative use of AI needed: When applying the deepfake concept, a distinction must be made between professional media outlets and third parties acting with the intention to cause harm or deceive. The objectives of Art. 50 AI-Act could be undermined if content created with the help of AI tools and published under journalistic and editorial control and responsibility had to be labelled in the same way as content created with the intention to deceive or harm.
AI used as an assistive tool does not create deepfakes: The supporting or assistive use of AI systems without altering the meaning of the content does not trigger a labelling or transparency requirement. An excessive transparency requirement that encompasses assistive AI use could, on the contrary, compromise the credibility of content that has been editorially checked and not altered.
Avoid regulatory conflicts: Audio and audiovisual media providers are subject to a variety of legal transparency and labelling requirements. The interpretation of Art. 50(4) and (5) AI-Act must take this into account. It is essential to avoid both legal uncertainty and an excessive number of transparency notices that overburden users.
Media- and genre-specific implementation: A code of practice should be limited to specifying the principles set out in Art. 50(4) and (5) AI-Act. A media-specific specification of these requirements is sufficient, and indeed necessary, and can be achieved by means of best practice examples and/or indicators that take into account the specific characteristics of the media genres (audio, video, image). This approach would also ensure sufficient flexibility and scalability. It is equally essential to consider the financial implications of implementation, so that regulation facilitates the effective utilisation of AI rather than impeding its growth.
The transparency requirement must be proportionate: It cannot be inferred from the AI-Act that each individual sequence constituting a deepfake must be labelled if that sequence is an integral part of a TV or radio programme. A general reference in the opening and/or closing credits of a programme can be sufficient.
The notion of "direct interaction" must be interpreted narrowly: Recommendation systems, such as those found on streaming platforms or apps, do not meet the definition of "direct interaction" as outlined in Art. 50(1) AI-Act. VAUNET is calling for a corresponding clarification of this point in a code of practice.