Disinformation Strategies and Tactics

From TCU Wiki

Date: May 28, 2020

Who: Gabrielle Lim, Researcher at the Technology and Social Change Research Project at Harvard's Shorenstein Center
Gabrielle Lim is a researcher at the Technology and Social Change Research Project (TaSC) at Harvard's Shorenstein Center, as well as a fellow with Citizen Lab at the Munk School, University of Toronto. Her research focuses on information controls and security, particularly disinformation and media manipulation.

What: Across the world, more and more communities are being manipulated by fake news and misinformation. However, most people don't really understand how this is done. Join Gabrielle Lim, Researcher at Harvard's Shorenstein Center, who will share with us the common tactics and strategies used. In addition, learn:

  • Case studies from the United States, Iran, and China
  • How the life cycle of a media manipulation campaign works
  • Research methods for detection, tracking, and attribution
  • Factors to consider when proposing countermeasures to combat disinformation and media manipulation

Slide Deck: https://cryptpad.fr/file/#/2/file/e+onWiI-MjVlGMH7vffcV9-d/ (file on CryptPad)


Notes

  • Many words are used to describe this phenomenon: misinformation, fake news, false information, etc. These words are vague enough to cause problems. These are good definitions to use:

Misinformation is false information that is shared without the intent to deceive.

Disinformation is false information that is shared with malicious intent.

Media manipulation is the broad umbrella term covering everything under the sun.

  • Information Disorder: The Essential Glossary

https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf?x30563

  • This article explains the difference between the terms:

https://medium.com/@mikekujawski/misinformation-vs-disinformation-vs-mal-information-a2b741410736

  • Researchers need to keep in mind that evidence of activity is not evidence of impact. For example, if bots are tweeting at each other, does this matter? We should focus on the impact and harms of misinformation, not just the fact that it exists.
  • We also have to be nuanced when we label information as a threat. This can go bad very quickly and be used to censor people.
  • Just because false information exists doesn't mean the audience will accept or believe it. We are not blank robots.
  • Citizens should look at reporting on disinfo with a skeptical eye. What is the motive? What is the agenda?


Life Cycle of Media Manipulation

  • It's a cyclical process, and how it unfolds depends on how the actors respond. It usually plays out across multiple platforms, with different things happening on different platforms. You have to figure out the best way to combat these disinformation campaigns, and sometimes doing nothing and not bringing media attention to them is the best approach.


Trading up the Chain or Chain Reaction

This is when information is broken on very small platforms, like a blog or forum, with the explicit aim of getting it picked up by larger, more official media. The "It's Okay to Be White" campaign is a good example of this. It began on 4chan, and posters were put up across campuses. As a result, mainstream media and online influencers took the bait, and this had impact. For example, Australia's parliament even debated a resolution to condemn anti-white racism.

  • Another example is "Endless Mayfly," a suspected Iran-linked operation. It spoofed the websites of real news outlets to bait activists and journalists with fake information, so they would report on it as if it were real. This resulted in a couple of cases of widespread dissemination of politically sensitive false information and official responses from news outlets. For example, Reuters reported on it.


Hidden Virality

  • This is when disinformation campaigns use content that goes viral, but researchers, authorities, and others can't see it because it's in closed or encrypted apps like WhatsApp, or the actors are able to circumvent detection altogether. There is also usually no API, making it difficult to gather data at scale.

There are many ethical considerations around this type of research, because researchers need consent to enter these private spaces. They can't just do research in these spaces without consent; doing so raises ethical issues. However, there seems to be good research being done on hidden virality in public WhatsApp groups.

  • An example of this is India and the BJP's WhatsApp strategy. In fact, much of what they are distributing is straight-up hate speech. They are directly disseminating political messaging on WhatsApp while also promoting Islamophobia. For example, almost a quarter of 60,000 messages they distributed were anti-Muslim or Islamophobic, and the volume of messaging was extremely heavy. This is a good article that explains how insidious WhatsApp (dis)information campaigns during elections in India get: https://the-ken.com/story/karnataka-whatsapp/
  • This is a report on the recirculation of fake images in India:

https://propaganda.qcri.org/bias-misinformation-workshop-socinfo19/Containing_the_spread_of_Fake_Images_using_Computer_Vision_and_Image_Processing.pdf


  • Another example is COVID-19 copypasta on WhatsApp: giant blocks of text are copied and pasted or forwarded.
  • Other tactics include: misinfographics, leaked forgeries (falsified or doctored documents leaked to the public), evidence collages, and cheap fakes vs. deep fakes. For example, creating a fake video of a politician having sex. It doesn't even have to be perfect; even grainy footage has caused damage.
  • Image detection is much harder for researchers and monitors, and researchers have a lot of fear around deep fakes. (See the sketch below for one way recirculated images can be spotted.)
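The QCRI report above describes computer-vision approaches to containing recirculated fake images. As a minimal sketch of one common technique (not the report's method), the Python snippet below uses average ("perceptual") hashing with Pillow to flag near-duplicate images even after re-compression or resizing. The folder name, file pattern, and 5-bit threshold are illustrative assumptions:

    # Minimal sketch: average ("perceptual") hashing to spot recirculated
    # images. Folder name, file pattern, and the 5-bit threshold are
    # illustrative assumptions, not details from the talk or the report.
    from pathlib import Path
    from PIL import Image  # pip install Pillow

    def average_hash(path, hash_size=8):
        # Shrink to an 8x8 grayscale grid, then set one bit per pixel
        # depending on whether it is brighter than the grid's mean.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (px > avg)
        return bits

    def hamming(a, b):
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    paths = sorted(Path("images").glob("*.jpg"))
    hashes = {p.name: average_hash(p) for p in paths}
    names = list(hashes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Near-identical images (crops, screenshots, re-compressions)
            # usually land within a few bits of each other.
            if hamming(hashes[a], hashes[b]) <= 5:
                print(f"possible recirculation: {a} ~ {b}")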


Detection and Attribution

  • Investigative ethnography doesn't require advanced technical skills. It helps you understand why people are more open to believing certain information, and it provides context.
  • Cross-referencing with other academics and researchers is especially important right now, for example by sharing datasets. Twitter also regularly puts out datasets of suspended accounts.
  • Looking for patterns of anomalous behavior is commonly used for bot detection. However, what is a bot anyway? This is where it gets tricky to distinguish a bot from a human. (A rough sketch of both ideas follows below.)
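As a rough illustration of the last two points, the sketch below flags accounts with anomalous posting patterns and cross-checks them against a platform-released list of suspended account IDs. The file names, column names, and thresholds are assumptions for the sake of the example, and, as the notes caution, anomalous activity alone does not prove an account is a bot:

    # Rough sketch: flag anomalous posting patterns and cross-reference
    # against a released dataset of suspended account IDs. File names,
    # column names, and thresholds are illustrative assumptions.
    import csv
    from collections import defaultdict
    from datetime import datetime
    from statistics import mean, pstdev

    posts = defaultdict(list)  # user_id -> timestamps of their posts
    with open("tweets.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects user_id, created_at columns
            posts[row["user_id"]].append(datetime.fromisoformat(row["created_at"]))

    with open("suspended_ids.txt", encoding="utf-8") as f:
        suspended = {line.strip() for line in f if line.strip()}

    for user, times in posts.items():
        if len(times) < 10:
            continue  # too little activity to judge either way
        times.sort()
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        hours = max((times[-1] - times[0]).total_seconds() / 3600, 1)
        rate = len(times) / hours
        # Two simple anomaly signals: a very high posting rate, or
        # suspiciously regular intervals between posts.
        too_regular = pstdev(gaps) < 0.1 * mean(gaps)
        if rate > 30 or too_regular:
            tag = "in takedown dataset" if user in suspended else "unconfirmed"
            print(f"{user}: {len(times)} posts, {rate:.1f}/hr ({tag})")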

Countermeasures

  • We must look at both the social and technical layers, and be wary of repressive as well as regressive policies. Why do people want this fake information? Why is it going viral? These are questions we really should answer!
  • The threat of fake news is being used as a justification for censorship. We should focus on the harms, not just the existence, of false information. Measures also need to be localized and appropriate for the sociopolitical context, and they must be understood in tandem with censorship and other forms of information control.
  • We have to question people's appetite for misinformation. For example, COVID-19 misinformation is very popular among Black Americans, largely because of distrust of a medical establishment that has historically mistreated them. That mistrust needs to be addressed. Another example is anti-vaxxers, whose biggest motivation is a mistrust of doctors and of the institutions that are supposed to be looking out for them.
  • We haven't seen widespread distribution of deep fakes because the technology is not yet available to everyone, and other tactics are still easier.
  • Digital Tribalism – The Real Story About Fake News

http://www.ctrl-verlust.net/digital-tribalism-the-real-story-about-fake-news/

  • First Draft's essential guides to information disorder, and other resources:

https://firstdraftnews.org/long-form-article/first-drafts-essential-guide-to/

Additional Notes