Identification and Protection against AI-Generated Content

Artificial Intelligence (AI) can be used for many creative ends, but some of these can be dangerous. For example, deepfake material (synthetic content created by manipulating images or videos using advanced AI) can be used to spread disinformation on social media platforms (Ballesteros, 2024). A deepfake image of a famous person, for instance, could be used to broadcast messages on social media platforms, and these messages may run counter to the wishes of the 'real' person who has been 'copied' (see Figure 1).

Figure 1   AI and Facial Image Recognition

This has led to approaches to identify and counteract such AI-generated content. For example, steps can be taken to detect deepfake images (see Table 1).

Table 1   Steps to Detect Deepfake Images (Ballesteros, 2024)

Analyse Visual Inconsistencies
Scrutinise Audio Clues
Reverse Image Search
Inspect Video Metadata
Use Deepfake Detection Tools (e.g. Deepware AI)
Verify Official Sources
Educate Yourself and Others
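Some of these steps can be partly automated. As a minimal sketch of the 'inspect metadata' idea (applied here to still images rather than video, with a function name and scope of my own devising rather than anything from Ballesteros), the stdlib-only Python below reads the tEXt metadata chunks of a PNG file, where image generators and editing tools often leave identifying entries:

```python
import struct

def read_png_text_chunks(path):
    """Extract tEXt metadata entries from a PNG file.

    Image generators and editors often leave identifying text here
    (e.g. a "Software" entry); missing or suspicious entries are one
    signal when judging whether an image may be synthetic.
    """
    chunks = {}
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC; this sketch does not verify it
            if ctype == b"tEXt":
                # tEXt chunks hold a keyword, a NUL byte, then the value
                key, _, value = data.partition(b"\x00")
                chunks[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return chunks
```

Such a check is only one signal among many: an image with no metadata is not necessarily fake, and metadata itself can be forged or stripped.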

Some companies have committed to identifying AI-generated material. For example, on 6th February 2024, Meta gave a clear public commitment that it would label AI-generated images posted on its social media platforms, such as Facebook and Instagram (RSF, 2024). OpenAI, too, has taken a similar step:

“Taking a major step towards transparency and authenticity in AI-generated visuals, OpenAI announced it will be embedding watermarks directly into images created by ChatGPT on the web and the company’s popular image generation model, DALL-E 3.” (Patel, 2024, p.1)
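Watermarks of this kind are reportedly built on the C2PA provenance standard, whose manifests are embedded in the image file itself. The sketch below is my own illustration, not OpenAI's tooling: it scans a file's raw bytes for the "c2pa" label that embedded manifests carry. This is only a crude presence check; genuine verification requires parsing the manifest and cryptographically validating its signatures with proper C2PA tooling.

```python
def has_c2pa_marker(path):
    """Crude check for an embedded C2PA provenance manifest.

    C2PA manifests are stored in JUMBF boxes labelled "c2pa", so the
    raw bytes of a signed file normally contain that string. This is
    only a presence heuristic: absence may simply mean the metadata
    was stripped, and presence alone proves nothing about validity.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```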

Various charters have also been devised (e.g. the Paris Charter on AI and Journalism) to provide principles for transparency, accountability, integrity and the protection of humans in the use of AI-generated material (RSF, 2023).

AI-generated material can add to the creative enterprise of organisations and individuals. However, it can also be manipulated in dangerous ways for dangerous purposes. Ethical principles and transparent authentication of how material has been generated are therefore vital. Otherwise, technology that could be used to make the world a safer and better place could work in the opposite direction.

Dr Peter Sharp   20th March 2024

References

Ballesteros, E. (2024) How to Spot a Deepfake and Unmask Fake Media Content in 2024 (accessed from https://eddyballe.com/how-to-spot-a-deepfake/ on 19th March 2024)

Patel, V. (2024) OpenAI Announces Watermark for Authenticating DALL-E 3 Images, International Business Times (accessed from IBTimes UK on 20th March 2024)

RSF (2023) Reporters Without Borders – Paris Charter on AI and Journalism, Paris, 10th November 2023 (accessed from https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20Charter%20on%20AI%20and%20Journalism.pdf on 20th March 2024)

RSF (2024) Reporters Without Borders – Tech Giants Commit to Identifying AI-Generated Content – A Step Forward for the Right to Online Information at Last! (accessed from https://rsf.org/en/tech-giants-commit-identifying-ai-generated-content-step-forward-right-online-information-last on 19th March 2024)
