Collaborative Conversation on Free Speech, Extremist Organizing & The Paradox of Intolerance

From TCU Wiki



Who: Tamara Grigoryeva, Creative Associates

Date: Tuesday, March 16th

Join us for a collaborative community conversation led by former IFF Fellow Tamara Grigoryeva, helping us draw the line between free speech and civic organizing on the one hand, and hate speech and extremist organizing on the other. In this discussion we will explore:

  • Where do the social media platforms stand on this, and what pressure mechanisms, if any, can be brought to bear on them?
  • What are governments doing to regulate these areas and issues without violating freedoms? Do governments use the regulation argument to crack down on civic actors?

// We will be hosting a 25-minute post-workshop networking exercise to allow folks to meet others who share their interests and strengthen collaborations across various lines. If you are interested in joining, make sure to schedule an extra 25 minutes on your calendar. //

Tamara Grigoryeva is a journalist from Azerbaijan with a passion for internet freedom, data science, and social media trends; a WHRD and LGBTQIA advocate based in Washington, D.C., currently working at Creative Associates as the Media Lead in their Creative Development Lab.

>> Check out notes from other sessions here


| Slide Deck

Network metastasizing is when groups meet on social media (e.g., Facebook, Twitter), move to WhatsApp/Telegram, cultivate their own coded language, memes, etc., and then move from private groups to offline (IRL) meetings. This is currently happening in extremist circles.

Twitter and Tear Gas, by Zeynep Tufekci, is a great book to read about this phenomenon.

This has made both detecting and analyzing these groups more difficult. For example, social media companies use certain keywords to find hate groups, but hate groups have started using their own coded language, so the keywords no longer work.
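To illustrate why keyword-based detection breaks down once groups adopt coded language, here is a minimal sketch. The keyword list, the example posts, and the "coded" substitution are all invented for illustration; real platform systems are far more sophisticated, but the failure mode is the same.

```python
# Minimal sketch of keyword-based content flagging, and why coded
# language defeats it. Keywords and posts are invented examples.

FLAGGED_KEYWORDS = {"attack", "raid", "target"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged keyword."""
    words = set(text.lower().split())
    return bool(words & FLAGGED_KEYWORDS)

posts = [
    "plan the attack for friday",  # plain language: caught
    "plan the picnic for friday",  # same intent, coded word: missed
]

for post in posts:
    print(post, "->", flag_post(post))
```

Once a group agrees on a substitution (here, a hypothetical "picnic" for "attack"), the exact-match filter sees nothing suspicious, which is why researchers have to keep re-learning each community's vocabulary.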

When groups go private, it's really hard to monitor them. They can't be detected, researched, or reported on from the outside; only members can report on their group. It's also very hard to web scrape private groups or do cluster research on them.

A lot of tools, like Who Posted What?, now work ONLY with public groups and friends lists; you can't see all groups anymore. So the question is: how do you monitor them?

Where does free speech end and hate speech start? Where does civic organizing end and extremist organizing start?

What Civil Society has been doing:

  • SBCC (social and behavior change communications), i.e., coordinated communications campaigns
  • Proactive content observation and moderation. But is this enough? Do CSOs have contacts at the social media companies? Do the companies care, and are they fast enough?
  • Disinfo/hate speech detection. But then what happens when you detect it? This requires a lot of effort and resources, i.e., it is expensive.
  • Counter- and alternative content. Also expensive, and is anyone reading the content?
  • Early warning systems

All these efforts cost a ton of money AND individuals engaged in those efforts can be in danger and/or suffer psycho-socially.

What do the current mechanisms of collaboration with social media platforms look like? We need much more real, active collaboration, which isn't really happening.

Where do the social media platforms stand? I.e., what are they doing?

  • Community Standards
  • Transparency Reports
  • Social Media Ads Libraries
  • Collaboration with CSOs

Where do governments stand? It's quite tricky:

  • Lack of comprehensive, up-to-date legislation
  • Lack of digital security and transparency to protect programs
  • Lack of proper monitoring of, and partnerships with, social media platforms
  • Use of privacy and anti-disinfo laws to crack down on activists

What can we as civil society do better? What should we be calling on governments to do?

CrowdTangle was acquired by Facebook but is fairly popular among CSOs doing research.

If social media platforms create early warning systems or get better at disallowing hate speech through their policies, will bad actors create and use their own dark corners, i.e., alternative social media platforms? Recently, Koo became wildly popular in India, and it is where a lot of right-wing nationalists are moving.

How RW extremists in India are using Telegram to organise, broadcast and coordinate

Also Telegram:

Much of the tooling/information for CSOs doing this work is in English OR doesn't work in places with low internet connectivity. How can we fix this? Can outlets like Bellingcat and OCCRP do better at not being English-centered?

An org doing work on this:

Who participated in this report:

Social media companies are not confident in their abilities to deal with non-Western languages

Misinfocon/Credibility Coalition community

Microsoft has a track record of self-censorship based on its marketing analysis; an example is forcing medium safe mode in search results for the MENA region.

We need consistency: can we get all social media companies to clarify their policies on when they take things down? Certain content seems to get taken down in English but stays up in French, even though it poses the same risk.

Social media companies also need to pump more resources and money into this issue internally, rather than expecting CSOs to do free labor.

Shadow banning is also important and, to give one example, tends to really harm creators talking about LGBTQI+ topics on YouTube


People suggest Ranking Digital Rights as a clearinghouse for understanding which companies will turn over what info, but some folks would like to see even more fine-grained info than that

EUDL also cosigned a letter that could interest folks here: