WITNESS LEADS CONVENING ON PROACTIVE SOLUTIONS TO MAL-USES OF DEEPFAKES AND OTHER AI-GENERATED SYNTHETIC MEDIA

Read the detailed summary of discussions and recommendations on next steps here

On June 11, 2018, WITNESS, in collaboration with First Draft, a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, brought together 30 leading independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists. Under the Chatham House Rule, the discussion focused on pragmatic and proactive ways to mitigate the threats that widespread use and commercialization of new tools for AI-generated synthetic media, such as deepfakes and facial reenactment, potentially pose to public trust, reliable journalism and trustworthy human rights documentation.

For twenty-five years, WITNESS has enabled human rights defenders, and now increasingly anyone, anywhere, to use video and technology to protect and defend human rights. Our experience has shown the value of images in driving more diverse personal storytelling and civic journalism, powering movements around pervasive human rights violations such as police violence, and serving as critical evidence in war crimes trials. We have also seen the ease with which videos and audio, often crudely edited or even simply recycled and re-contextualized, can perpetuate and renew cycles of violence.

WITNESS’ Tech + Advocacy work has frequently included engaging with key social media and video-sharing platforms to develop innovative policy and product responses to the challenges facing high-risk users and high-public-interest content. As the potential threat of more sophisticated, more personalized audio and video manipulation emerges, we see a critical need to bring key actors together before we are in the eye of the storm, to ensure we prepare in a more coordinated way, and to challenge technopocalyptic narratives that in and of themselves damage public trust in video and audio.

The convening goals included:

  • Broaden the understanding of these new technologies among journalists, technologists and human rights researchers, where needed;
  • While recognizing positive potential usages, begin building a common understanding of the threats created by, and potential responses to, mal-uses of AI-generated imagery, video and audio against public discourse, reliable news and human rights documentation, and map the landscape of innovation in this area;
  • Build a shared understanding of existing approaches in human rights, journalism and technology for dealing with mal-uses of faked, simulated and recycled images, audio and video, and their relationship to other forms of mis/dis/mal-information;
  • Based on case studies (real and hypothetical), facilitate discussion of potential pragmatic tactical, normative and technical responses to risk models of fabricated audio and video by companies, independent activists, journalists, academic researchers, open-source technologists and commercial platforms;
  • Identify priorities for continued discussion among stakeholders.

Recommendations emerging from the convening included:

  1. Baseline research and a focused sprint on the optimal ways to track the authenticity, integrity, provenance and digital edits of images, audio and video from capture to sharing to ongoing use. Research should focus on a rights-protecting approach that a) maximizes how many people can access these tools, b) minimizes barriers to entry and the potential suppression of free speech without compromising the right to privacy and freedom from surveillance, c) minimizes risk to vulnerable creators and custody-holders, and d) balances these against the feasibility of integrating these approaches into the broader context of platforms, social media and search engines. This research needs to reflect platform, independent commercial and open-source activist efforts; consider the use of blockchain and similar technologies; review precedents (e.g. spam and current anti-disinformation efforts); and identify the pros and cons of different approaches, as well as their unanticipated risks. WITNESS will lead on supporting this research and sprint.
  2. Detailed threat modelling around synthetic media mal-uses for particular key stakeholders (journalists, human rights defenders and others). Create models based on actors, motivations and attack vectors, resulting in the identification of tailored approaches relevant to specific stakeholders or to the issues/values at stake.
  3. Public and private dialogue on how platforms, social media sites and search engines can design a shared approach and better coordinate around mal-uses of synthetic media. Much like the public discussions around data use and content moderation, there is a role for third parties in civil society to serve as a public voice on the pros and cons of various approaches, as well as to facilitate public discussion and serve as a neutral space for consensus-building. WITNESS will support this type of outcomes-oriented discussion.
  4. Platforms, search and social media companies should prioritize the development of key tools already identified as critical in the open-source intelligence (OSINT), human rights and journalism communities, particularly reverse video search. This is because many of the problems of synthetic media relate to existing challenges around verification and trust in visual media.
  5. More shared learning on how to detect synthetic media, bringing together existing practices from manual and automated forensic analysis with those of human rights, OSINT and journalistic practitioners, potentially via a workshop where they test and learn each other’s methods and work out what to adopt and how to make techniques accessible. WITNESS and First Draft will engage on this.
  6. Prepare for the emergence of synthetic media in real-world situations by working with journalists and human rights defenders to build playbooks for upcoming risk scenarios, so that no one can claim ‘we didn’t see this coming’ and to facilitate greater understanding of the technologies at stake. WITNESS and First Draft will collaborate on this.
  7. Include additional stakeholders who were under-represented in the June 11 convening and who are critical voices, either in an additional meeting or in upcoming activities:
    • “Global South” voices, as well as marginalized communities in the US and Europe;
    • Policy and legal voices at the national and international levels;
    • Artists and provocateurs.
  8. Build additional understanding of relevant research questions and lead research to inform other strategies. First Draft will lead on this additional research.

For blog posts providing further details on next steps, see:




Visit our new GEN-AI microsite to keep up to date with our work on deepfakes and AI.
