New Feature Aims to Identify AI-Created Content
In a move to improve transparency and safety, social media service Snap has announced plans to add watermarks to AI-generated images created with its tools. The watermark, a translucent version of the Snap logo paired with a sparkle emoji, will be applied to any AI-generated image exported from the app or saved to the camera roll.
What is the Purpose of the Watermark?
The watermark, which features Snap’s logo with a sparkle, identifies AI-generated images created with the company’s tools. According to Snap, its primary goal is to give users clear context and transparency about the content they are viewing or creating.
How Will Snap Detect and Prevent Removal of Watermarks?
Snap has stated that removing watermarks from images will violate its terms of use. However, it remains unclear how the company plans to detect and prevent the removal of these watermarks. TechCrunch has reached out to Snap for more information on this matter and will update the story accordingly.
Other Tech Giants Follow Suit
Microsoft, Meta, and Google have also taken steps to label or identify images created with AI-powered tools. These efforts are part of a broader industry-wide trend to increase transparency and accountability around AI-generated content.
Snap’s Approach to AI Safety and Moderation
In a blog post outlining its safety and transparency practices around AI, Snap explained that it marks AI-powered features, such as Lenses, with a visual indicator resembling a sparkle emoji. The company also lists indicators for features powered by generative AI.
Context Cards Added to AI-Generated Images
To give users additional context, Snap has added context cards to AI-generated images created with tools like Dream. These cards aim to better inform users about the content they are viewing and interacting with.
Partnership with HackerOne
In February, Snap partnered with HackerOne to launch a bug bounty program aimed at stress-testing its AI image-generation tools. The company’s stated goal is to minimize potentially biased AI results and ensure users have equitable access and consistent expectations when using its app.
Controversy Surrounding ‘My AI’ Chatbot
Snapchat’s efforts to improve AI safety and moderation come after its ‘My AI’ chatbot sparked controversy upon launch in March 2023, when some users managed to get the bot to respond to potentially unsafe subjects. In response, the company rolled out controls in the Family Center that let parents and guardians monitor and restrict their children’s interactions with AI.
Conclusion
Snap’s decision to add watermarks to AI-generated images is a significant step toward increasing transparency and accountability around AI-created content. As the use of AI-powered tools continues to grow, it is essential that companies prioritize user safety and moderation. By providing clear context and indicators for AI-generated content, Snap aims to offer a safer and more equitable experience for its users.
Future Developments
As the industry continues to evolve, we can expect to see more companies implement similar measures to improve transparency and accountability around AI-generated content. Stay tuned for updates on this topic as it unfolds.
Additional Resources
- Snap’s Support Page: Learn more about Snap’s approach to AI safety and moderation.
- HackerOne Blog: Read about the partnership between Snap and HackerOne to improve AI image-generation tools.
About the Author
Ivan Mehta is a global consumer tech journalist covering developments in the industry. He has previously worked at publications including Huffington Post and The Next Web, and can be reached at im.