
Information Operations

NOT FOR RELEASE: Please remember that videos and material are reserved for participants in the 2021 360/Digital Sherlocks program and are not meant to be shared outside the cohort, downloaded, recreated, or disseminated in any way. Having been selected as a participant in this program, you will be expected to respect the confidential nature of DFRLab’s proprietary material. 


If you have any questions, please email us at DFRLab@AtlanticCouncil.org


The rise in disinformation we see today is a product of our rapid shift toward digital information consumption. The speed and volume of information have skyrocketed, making it far easier to produce and peddle manipulated content: the returns are high and the investment is low. Our goal at the DFRLab is to identify, expose, and explain this phenomenon.  

GLOSSARY

At the DFRLab, we do not use the term “fake news”: it is a catch-all used to describe mis- and disinformation, usually comes with political connotations, and is therefore unhelpful during crises or fast-moving information events.   

 

The following is the vocabulary the DFRLab uses to describe in-person and online information environments; it helps us communicate clearly and credibly. Further frameworks and definitions can be found in the DFRLab’s Dichotomies of Disinformation.

  

Disinformation: False information spread with the intent to deceive. For information to qualify as “disinformation”, it is necessary to prove both that the information was false and that the intent was to deceive. The latter is often the harder to prove. 

 

Misinformation: The spread of false information without an intent to deceive (e.g., sharing a false headline without reading the article). 

 

Propaganda: The use of information to influence an audience for the political or ideological benefit of the source, not the audience’s benefit. Propaganda can be true or false; some of the most effective propaganda in history has been true. Usually executed by state actors.  

 

State-backed media outlet: A media outlet that is partly owned by the government or is owned by a state-owned company. 

 

State-owned media outlet: A media outlet that is overtly or covertly owned by the government.  

 

Fringe media: A media outlet that, unlike a mainstream outlet, may not have wide distribution or a large following and does not adhere to common journalistic ethics, making it an unreliable source of information. Among other things, these outlets sometimes share conspiracy theories.  

 

Inauthentic Accounts: Accounts undertaking deceptive behavior online to mislead users, such as hiding or obscuring their identity or working to inflate a hashtag’s popularity. They often engage in Coordinated Inauthentic Behavior (CIB), which is defined below.   

 

Coordinated behavior: When a group of accounts works together to influence the online conversation, for example through organized use of a hashtag to make it trend.  

 

Automated Accounts/Bots: A social media account that has been automated to post content at a rate and volume beyond human capability. Bots are frequently used to amplify specific information (often hashtags) to cause it to trend. (From the DFRLab: #BotSpot: Twelve Ways To Spot A Bot)

  • https://makeadverbsgreatagain.org/allegedly/ shows the timing of an account’s tweets and whether it is tweeting from an external app or service; this information can help determine whether an account is a bot 

  • Be conservative when labeling an account a bot; the strongest indicator is that its tweets come from an external platform that automates posting (a minimal scripted check of this is sketched below)  
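
As an illustration of that indicator, the sketch below flags an account whose posts come mostly from clients outside the standard Twitter apps or are spaced at suspiciously regular intervals. It assumes you have already exported the account’s tweets as a list of dictionaries with "source" (posting client) and "created_at" (datetime) fields, for example from the Twitter API or a listening tool; the field names and the client allow-list are illustrative assumptions, and the output is an indicator to weigh, not proof of automation.

    # Minimal sketch (assumed data layout): score two bot indicators for one account.
    # "tweets" is a list of dicts with "source" (posting client) and
    # "created_at" (datetime) keys -- these field names are assumptions about your export.
    from statistics import pstdev

    # Hypothetical allow-list of standard, human-operated Twitter clients.
    KNOWN_CLIENTS = {"Twitter for iPhone", "Twitter for Android", "Twitter Web App"}

    def bot_signals(tweets):
        sources = [t["source"] for t in tweets]
        external = sum(1 for s in sources if s not in KNOWN_CLIENTS)
        times = sorted(t["created_at"] for t in tweets)
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        return {
            # Share of posts made from clients outside the allow-list.
            "external_client_share": external / len(sources) if sources else 0.0,
            # Very low spread in posting intervals can indicate scheduled automation.
            "interval_stddev_seconds": pstdev(gaps) if len(gaps) > 1 else None,
        }
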

 

Sock Puppet: An online account run by a person masquerading as someone else, often with the intent of misleading other users. These accounts are used to manipulate discussion.  

 

Troll: A human being who systematically posts inflammatory, divisive, abusive, or hyper-partisan content, often under a cover of anonymity. The word “troll” describes the behavior of the user rather than any particular attribution. 

 

Coordinated Inauthentic Behavior (CIB): A Facebook-specific term for when a network of assets (i.e., accounts, pages, groups, or events) works to influence or mislead users on Facebook. Facebook uses a CIB designation as a policy enforcement mechanism, removing the assets for violating its community standards. The concept has been adopted beyond Facebook and is now applied on other platforms as well. (From the DFRLab, an example of a CIB takedown: Facebook: Coordinated Inauthentic Behavior Explained)  

 

Spam and Platform Manipulation: A Twitter-focused term for activity in which many bot accounts flood the platform to drive traffic toward a certain topic, service, or initiative. It can also include inauthentic engagement that attempts to make accounts or conversations appear more popular than they are. These activities are often coordinated and sometimes automated. 

 

  

INFORMATION OPERATIONS  

 

Information operations are a subset of interference operations: interference operations are malign, clandestine actions carried out with the intent to harm. Information operations are the aspects of interference operations that deal with information environments.   

 

Information operations can be summarized by the “ABC Framework” from Camille François at Graphika. These three vectors create a spectrum of information operations:

[Figure: the ABC Framework]
  • Actors involved in spreading/amplifying disinformation  

  • Deceptive behavior  

  • Manipulated, potentially harmful disinformation content

Identification is the first step in countering malicious online activity, such as disinformation, foreign interference campaigns, or any form of inauthentic digital activity. Effective identification allows researchers to flag and respond to threats in a timely manner, limiting the impact of malicious activity online. Methods of identification:   

  • Conducting narrative analysis 

    • This cannot be left to external tools; it depends on human analysis and pattern recognition   

  • Mapping out dissemination networks 

  • Using social media listening tools that surface every use of a keyword, along with reach, engagement, and other helpful metrics 

    • Meltwater  

    • Brandwatch  

    • Buzzsumo 

    • If you do not have access to these tools, you can also simply search keywords on a platform and review the kinds of posts you are seeing, their levels of engagement, and the narratives they are pushing (a minimal scripted version of this approach is sketched after this list)  

  • Monitoring known sources of disinformation 

    • Identifying “repeat offenders” (accounts and communities that are regularly used to spread mis- and disinformation) 

    • Identifying behavioral similarities across malign actors 
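
If you lack access to commercial listening tools, a rough version of the keyword sweep described above can be scripted over posts you have already collected. The sketch below is a minimal example, assuming each post is a dictionary with "text", "likes", and "shares" fields; those field names, and the sample keywords, are illustrative assumptions about how the data was exported.

    # Minimal sketch: count mentions and total engagement for each keyword
    # across a set of already-collected posts. The "text", "likes", and
    # "shares" field names are assumptions about your export format.
    from collections import defaultdict

    def keyword_summary(posts, keywords):
        summary = defaultdict(lambda: {"mentions": 0, "engagement": 0})
        for post in posts:
            text = post.get("text", "").lower()
            for kw in keywords:
                if kw.lower() in text:
                    summary[kw]["mentions"] += 1
                    summary[kw]["engagement"] += post.get("likes", 0) + post.get("shares", 0)
        return dict(summary)

    # Example with hypothetical keywords:
    # keyword_summary(posts, ["#samplehashtag", "stolen election"])

Raw counts like these only show where a narrative is circulating; judging whether the activity is coordinated or inauthentic still requires human analysis and pattern recognition, as noted above.
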

  

The impact of a disinformation or hostile narrative campaign is impossible to measure completely but can be generally assessed by answering two key questions: Did the campaign change the perception of its target audience? Did the campaign change the behavior of its target audience? Methods of investigating impact:  

  • Repeated opinion polling 

  • Factors affecting public behavior (penetration of disinformation sources in public debate, change in public sentiment towards a certain topic, change in policy, boycott of a product or service as promoted by the disinformation campaign, etc.)   

  • Social media engagement metrics (be careful: shares, reads, and reactions on social media posts do not necessarily mean that users believed or agreed with the substance of the post, and they do not account for artificial amplification) 

  

Attribution – the ability to say who is responsible for the planning, coordination, or execution of a given disinformation event – can be the most challenging part of open-source research. Typically, open source investigations identify behavior, not intent. It is also vital to note that attribution cannot always be achieved with high confidence. It is always better to acknowledge and disclose the limitations of open source research and identify shortcomings in attribution. We want to find the boogeyman, not create him.  

 

Those best placed to attribute inauthentic behavior are the platforms themselves, but they are constrained by legal, commercial, and technical limitations. Attribution often involves a combination of technical evidence and knowledge of the capabilities and incentives of known actors.  

  

Overall, attribution indicators include:  

  • Linguistic signs 

  • Topic of the campaign (e.g., recurring narratives on geopolitical events) 

  • Website forensics (e.g., IP addresses, location of the original source); a minimal sketch follows this list 

  • The money trail (i.e., the financial links between the source of the campaign and a foreign government or other entity) 
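
For the website forensics indicator above, one starting point is to compare the hosting and registration footprints of suspected sites. The sketch below is a minimal, assumption-laden example: it resolves a domain’s IP address with the Python standard library and shells out to the Unix `whois` command (a package such as python-whois could be substituted); the example domains are placeholders, not real cases.

    # Minimal sketch: collect a basic hosting/registration footprint for a domain
    # so overlaps (shared IPs, registrars, name servers) across suspect sites
    # can be compared by hand. Assumes a Unix-like system with `whois` installed.
    import socket
    import subprocess

    def domain_footprint(domain):
        ip = socket.gethostbyname(domain)  # shared IPs can hint at shared hosting
        whois_raw = subprocess.run(["whois", domain],
                                   capture_output=True, text=True).stdout
        return {"domain": domain, "ip": ip, "whois": whois_raw}

    # Hypothetical usage across a suspected network:
    # footprints = [domain_footprint(d) for d in ("example-news1.com", "example-news2.com")]
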

[Figure: attribution indicators]

There are also different levels of attribution, where complete operational attribution identifies the individual accounts engaging in an information operation, while strategic attribution is broad and categorical:

[Figure: levels of attribution, from operational to strategic]

Confidence assessments are a useful way to validate attributions, and the language you use should reflect your confidence level. The following describes mechanisms for making confidence assessments:

[Figure: confidence assessment mechanisms]

Another helpful tool for confidence assessments of attribution is the DFRLab’s Foreign Interference Attribution Tracker (FIAT), which gives known cases an Attribution Score based on credibility, objectivity, evidence, and transparency.

 

REPORTING ON INFLUENCE OPERATIONS

 

When reporting, we not only report what we find but also how we found it, so our readers gain a thorough understanding of where our data comes from. It is important to be clear about your findings and not to exaggerate or draw false conclusions, because doing so can significantly undermine your efforts. Hedge your writing with verbs that accurately communicate your degree of confidence in your findings. This is part of storytelling and defensive writing, for which there is a Digital Sherlocks training on March 24th.

 

WEAPONIZATION OF INFORMATION  

 

The 4 D’s of Disinformation refer to four tactics frequently employed by hostile actors: dismiss, distort, distract, and dismay. This model was originally coined by Ben Nimmo to categorize Russian government messaging in the wake of the Crimean annexation, but it can also be applied to other hostile actors:

  • Dismiss: If you don’t like what your critics say, insult them

  • Distort: If you don’t like the facts, twist them to your advantage

  • Distract: If you’re accused of something, accuse someone else of the same thing or run as many conflicting narratives about the incident as possible to obscure the facts of the event

  • Dismay: If you don’t like what someone else is planning, try to scare their audience off with exaggerated predictions

[Figure: the 4 D’s of disinformation]

SOURCE VERIFICATION  

 

The main techniques for conducting source verification include the following. Each will be covered in depth in future Digital Sherlocks trainings; this is just an introduction:

 

  1. To determine the origin of a photo or image, examine: 
     

    • Pixel data

    • Metadata

    • Exchangeable image file (EXIF) data (e.g., digital camera model, shutter speed, focal length); a minimal sketch for reading EXIF data appears at the end of this section

    • Reverse image search results
       

  2. To corroborate the location of a given event depicted in an image or video, geolocate it using contextual evidence, including: 
     

    • Landmarks such as apartment blocks, churches, schools, hospitals, parking lots, etc.

    • Languages visible on signs, stores, roads, billboards, etc.

    • Topography such as hills, mountains, waterfalls, forests, lakes, etc. 

    • Street features and other apparent objects such as benches, streetlights, traffic lights, etc. 
       

  3. To investigate who is behind a website, conduct forensic analysis through open source techniques to discover: 
     

    • Who manages the website

    • Who is publishing content on the website

    • Whether the website is part of a wider network
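
As referenced in item 1 above, the sketch below shows one way to read EXIF metadata from an image file, using the third-party Pillow library (installed with pip install Pillow). The file path is a placeholder, and keep in mind that most social platforms strip EXIF data on upload, so its absence proves nothing by itself.

    # Minimal sketch: dump EXIF metadata (camera model, capture time, focal length, etc.)
    # from a local image file using Pillow. "suspect_photo.jpg" is a placeholder path.
    from PIL import Image, ExifTags

    def read_exif(path):
        with Image.open(path) as img:
            exif = img.getexif()
            # Map numeric EXIF tag IDs to human-readable names where known.
            return {ExifTags.TAGS.get(tag_id, tag_id): value
                    for tag_id, value in exif.items()}

    # Example:
    # for name, value in read_exif("suspect_photo.jpg").items():
    #     print(name, value)
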