Deepfakes are video forgeries that make people appear to be saying things they never did.
Twitter is working on a new policy to tackle deepfake videos on its platform that will address content which could threaten someone’s physical safety or lead to offline harm. The micro-blogging platform has asked its users how best to address synthetic and manipulated videos.
Prominent examples include the forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral recently.
“We’re always updating our rules based on how online behaviours change. We’re working on a new policy to address synthetic and manipulated media on Twitter, but first we want to hear from you,” Twitter said on Monday.
“We need to consider how synthetic media is shared on Twitter in potentially damaging contexts; we want to listen and consider your perspectives in our policy development process and we want to be transparent about our approach and values,” Twitter Safety posted on its platform.
“Deepfake” techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.
In the coming weeks, Twitter will announce a feedback period so that users can help it refine this policy before it goes live.
At an event in California on Monday, Vijaya Gadde, Legal, Public Policy & Trust and Safety Lead at Twitter, also said the company is going to make policy changes around manipulated videos.