Google Implements Measures to Combat Nonconsensual Explicit Deepfakes

In response to the alarming increase in nonconsensual sexually explicit deepfakes, Google has announced new measures to mitigate the spread of these harmful images and videos. This initiative is a significant step toward protecting victims and reducing the visibility of explicit deepfakes in search results.

Google’s recent actions have led to a noticeable decrease in the prevalence of explicit deepfakes. According to Emma Higham, a Google product manager, recent adjustments to Google’s ranking algorithms have already cut exposure to these images by more than 70%. The search engine now prioritizes news articles and other nonexplicit content over potentially harmful results when users search for terms related to deepfakes. For instance, a search for “deepfake nudes Jennifer Aniston,” which previously returned explicit content, now surfaces articles discussing the broader impact of deepfakes on society.
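Google has not disclosed how these ranking adjustments work internally. Purely as an illustrative sketch, the toy re-ranker below shows the general idea: demoting explicit pages for queries that appear to seek a deepfake of a named person, so that news and other nonexplicit results surface first. Every field, heuristic, and score here is an assumption for illustration, not Google’s actual system.

```python
# Toy illustration only: Google's actual ranking changes are not public.
# Shows the general idea of demoting explicit results for queries that
# appear to seek an explicit deepfake, so nonexplicit pages rank first.
from dataclasses import dataclass


@dataclass
class Result:
    url: str
    relevance: float   # base relevance score (assumed)
    is_explicit: bool  # whether the page is classified as explicit (assumed)


def looks_like_deepfake_query(query: str) -> bool:
    """Very rough heuristic for queries seeking explicit deepfakes of a person."""
    q = query.lower()
    return "deepfake" in q and any(w in q for w in ("nude", "nudes", "explicit"))


def rank(results: list[Result], query: str) -> list[Result]:
    """Sort results by relevance, pushing explicit pages down for deepfake queries."""
    demote = looks_like_deepfake_query(query)

    def key(r: Result) -> float:
        # Apply an assumed fixed penalty to explicit pages for matching queries.
        penalty = 0.5 if (demote and r.is_explicit) else 0.0
        return r.relevance - penalty

    return sorted(results, key=key, reverse=True)
```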

One of the critical aspects of Google’s new measures is the focus on aiding victims. The company has simplified the process for requesting the removal of nonconsensual explicit imagery (NCEI) through an online form. Once a takedown request is honored, Google will also scan for duplicates of the imagery and filter them out, preventing the same content from resurfacing in search results. Additionally, websites that accumulate a high volume of successful takedown requests will be demoted in search rankings, further diminishing the reach of such harmful material.
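Google has not said how it identifies duplicates of removed imagery. As a rough sketch of the general idea only, the example below uses a perceptual hash to flag near-identical copies of an image that has already been taken down; the library choice, threshold, and filenames are assumptions for illustration, not Google’s method.

```python
# Illustrative sketch only: Google has not disclosed how it detects duplicates
# of removed imagery. This uses a perceptual hash (pHash) to flag near-identical
# copies of an image that was already taken down. The threshold and file paths
# are assumed values for illustration.
from PIL import Image
import imagehash

# Hamming-distance threshold below which two images count as duplicates (assumed).
DUPLICATE_THRESHOLD = 8


def is_duplicate(removed_path: str, candidate_path: str) -> bool:
    """Return True if the candidate image is a near-identical copy of the removed one."""
    removed_hash = imagehash.phash(Image.open(removed_path))
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return removed_hash - candidate_hash <= DUPLICATE_THRESHOLD


if __name__ == "__main__":
    # Screen a batch of candidate results against one image removed via takedown.
    removed = "removed_image.jpg"
    candidates = ["result_a.jpg", "result_b.jpg"]
    flagged = [c for c in candidates if is_duplicate(removed, c)]
    print("Candidates to filter from results:", flagged)
```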

These efforts are particularly crucial as the availability of AI image generators has surged, making it easier for individuals to create fake explicit images of anyone, from celebrities to minors. The rise in deepfake technology has disproportionately affected women and girls, with public figures and even students in middle and high schools becoming frequent targets. In 2023, more nonconsensual sexually explicit deepfakes were posted online than in all previous years combined, highlighting the urgent need for action.

Despite these advancements, Google acknowledges that its measures are not foolproof. The company has stated that it will not proactively scan for new deepfakes; it will act only on content that has been flagged. While the ranking and demotion changes may limit the reach of harmful material, a reactive, report-driven approach leaves gaps that could allow new deepfakes to slip through before anyone flags them.

Google’s announcement comes amid growing pressure from lawmakers to address the issue. Senate Judiciary Chair Dick Durbin has been vocal about the need for more stringent measures, recently introducing federal legislation that would allow victims to sue perpetrators of nonconsensual sexually explicit deepfakes. The proposed law has already passed the Senate and awaits a vote in the House.

Overall, Google’s new policies represent a crucial step in combating the proliferation of nonconsensual explicit deepfakes. By prioritizing nonexplicit content in search results and helping victims remove harmful images, the company aims to provide a safer online environment. However, as the technology continues to evolve, ongoing effort and vigilance will be necessary to protect individuals from the damaging effects of deepfake content.