Our AI Election Project Methodology and Other Notes:
- The instances listed must be found “in the wild.” This means we only include instances where generative AI has actually been used or discovered out in the world.
- This list is almost certainly an undercount. However many instances of generative AI use journalists, researchers, experts, or anyone else manage to find, there are likely more.
- In some instances it may be hard to tell whether a piece of media has been manipulated the old-fashioned way (a cheapfake) or whether it is an actual use of generative AI. We will include only instances where we or other outlets have a high degree of certainty that generative AI was involved.
- This might include context clues (e.g., a dead politician endorsing a current candidate); expert analysis that concludes something was likely created with AI; self-admission (a person or company confirms they used generative AI); or experts themselves submitting instances that they have verified.
- We recognize it is possible to fall prey to the “liar’s dividend,” where someone claims something is fake in order to discredit real information. To the best of our ability, we will look for other sources of confirmation, and note when we can’t find them.
- Some of the examples on our list are from 2023. This is because several countries had elections in the early months of 2024 or have campaign seasons that began in 2023, and we wanted to be able to account for these instances.
- We are including instances that involve a political figure or a country where there is an election this year, even if the message is not political (e.g., a deepfake of a politician being used to promote a scam). Even when the message itself is not political, it can harm a candidate or otherwise draw on their popularity, or on the timeliness of an election, to gain traction.
- We are including instances that involve political messaging or elections, even if they do not involve a political figure (e.g., a deepfake of a celebrity appearing to endorse certain ideas, parties, or candidates).
- If an instance of generative AI spreads across borders, it will be counted in the countries affected, and that will be reflected on the map. For instance, if a group uses AI-generated ads to target multiple countries during the EU elections, it will be counted as an instance in each country.
- We are not limiting ourselves to one type of generative AI use. We will include any visual, audio, or text-based instances that we or other researchers, news outlets, or fact-checkers can confirm. We also acknowledge that this may skew our data toward instances that are more readily identifiable. For instance, it may be easier to tell whether a video has been manipulated with AI than whether a piece of audio or text has been.
- Generative AI use may be more prevalent or more readily identifiable in some places than in others, and this too may skew our data. English, for instance, is widely spoken and used on the internet, meaning it may be easier to use generative AI in English than in a less widely spoken language, like Khmer or Thai.
- Some instances will be fully synthesized, meaning they are entirely generated by AI, while others will be manipulated, meaning that authentic content has been altered using AI.
- For the purposes of this project, any manipulated media that shows up in a video (even if it’s just the audio that’s been manipulated) will be counted as a manipulated video. Manipulated audio will mean something like a voice note or a robocall, where there is no visual component.
- In addition to our own work surfacing and documenting these instances, we will also accept submissions. This, too, may skew our data toward instances in places where our readers or sources are based.
If you’re a journalist, researcher, or expert who would like to work with us on a particular story, or on the data as a whole, we’d love to hear from you.