When we want to advocate for political causes – for example, against corporations that couldn’t care less about the environment, against an increasingly fascist world, against war and despots – we often do so via social media. Platforms like TikTok now shape our public sphere: they determine which issues gain visibility, who is heard, and how political debates unfold. For activists in particular, they are often the most important channel for generating attention, mobilising supporters and creating counter-public spheres. This is especially true for marginalised groups, such as the Palestinian diaspora, which uses digital spaces to make experiences of war, displacement and everyday life visible.
Yet this promise of visibility is ambivalent. Whilst platforms were long celebrated as a driving force for political participation, a far more critical picture is emerging today. The very infrastructures that enable mobilisation also facilitate surveillance, depoliticisation and repression. Platforms are not neutral spaces, but follow commercial logic, political interests and specific moderation regimes. And a look at their ownership structures quickly reveals which political agendas they tend to align with.
Against this background, Sarah Häusermann and I examine in our latest paper how pro-Palestinian content creators on TikTok deal with these conditions. To this end, we interviewed 13 activists who regularly post political content. Our aim was to understand their experiences with content moderation, and in particular with less visible forms of regulation.
The results reveal a clear pattern: moderation increasingly takes place behind the scenes. The technical term for this is ‘algorithmic visibility moderation’: content is not removed outright, but its reach is quietly restricted, for example by being excluded from the ‘For You’ page. The practice is commonly known as ‘shadowbanning’ and remains opaque and difficult to prove.
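To make the distinction between deletion and visibility moderation concrete, here is a deliberately simplified sketch in Python. It is not TikTok’s actual system, which is proprietary and opaque; the `Post` class, the `sensitive_topic` flag and the demotion factor are all hypothetical. The point is only the mechanism: a flagged post is never deleted, it simply stops winning feed slots.

```python
# Toy illustration of visibility moderation (not TikTok's real system,
# which is proprietary): flagged content is downranked, not removed, so
# a post stays live on the creator's profile but rarely surfaces in feeds.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    engagement: float                                 # baseline ranking score
    flags: set[str] = field(default_factory=set)      # opaque internal labels


def feed_ranking(posts: list[Post], demotion: float = 0.01) -> list[Post]:
    """Rank posts for a 'For You'-style feed.

    Flagged posts are not deleted -- their score is silently multiplied
    by a demotion factor, so they almost never make the cut. The author
    sees no removal notice and receives no explanation.
    """
    def score(p: Post) -> float:
        return p.engagement * (demotion if p.flags else 1.0)

    return sorted(posts, key=score, reverse=True)


posts = [
    Post("creator_a", "make-up tutorial", engagement=0.8),
    Post("creator_b", "political commentary", engagement=0.9,
         flags={"sensitive_topic"}),   # hypothetical internal label
]

for post in feed_ranking(posts)[:1]:   # only the top slot reaches the feed
    print(post.author, post.text)      # -> creator_a make-up tutorial
```

Note that from the creator’s side nothing observable changes: the post exists, it simply never appears, which is precisely why the practice is so difficult to prove.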
A key issue here is the perceived lack of transparency. Many interviewees report that content is removed or restricted without clear justification, and that the Community Guidelines offer little guidance. Several perceive a pattern in which TikTok suppresses content that does not fit its political agenda, particularly Muslim, anti-American and anti-colonial perspectives. The appeal process is largely ineffective: even when content is restored, it often no longer reaches its original audience, having missed its window to go viral. This creates a sense of powerlessness and loss of control. Platforms can thus shape political visibility without users fully understanding what is happening or being able to defend themselves effectively.
This dynamic can be described using the concept of ‘platform gaslighting’. Users are systematically kept in the dark and begin to doubt their own judgement: Was the post really problematic? Did I do something wrong? Do I need to adjust my language? Responsibility is thus individualised instead of structural problems being recognised, and users are subtly steered in the directions the platform prefers.
At the same time, our data shows that activists are far from passive. They develop a range of strategies to maintain their visibility despite these conditions. These include ‘Algospeak’ – the deliberate alteration of terms or use of emojis to evade moderation systems (e.g. the watermelon emoji standing in for the Palestinian flag) – and the embedding of political content within entertaining, trend-based formats (e.g. make-up tutorials). Collective practices also play an important role: creators exchange knowledge, support one another and coordinate their activities to make targeted use of algorithmic logic.
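As a rough illustration of the algospeak idea, the following toy sketch swaps filter-prone terms for coded alternatives before posting. The substitution table is hypothetical and purely illustrative; in practice these codes emerge informally within creator communities and shift constantly as moderation systems catch up.

```python
# Toy sketch of 'algospeak': creators replace terms that keyword-based
# moderation might catch with coded alternatives before posting.
# This table is illustrative only; real usage is ad hoc and evolving.
ALGOSPEAK = {
    "palestine": "🍉",       # the watermelon emoji as a stand-in flag
    "killed": "unalived",    # a widely reported algospeak coinage
}


def encode(text: str) -> str:
    """Replace filter-prone terms with coded equivalents."""
    for term, coded in ALGOSPEAK.items():
        text = text.replace(term, coded)
    return text


print(encode("news from palestine"))   # -> "news from 🍉"
```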
However, these strategies are not accessible to everyone. Our sample comprises comparatively established creators with experience, a wide reach and networks; less visible or less well-connected voices may already be missing from our data. This is a classic survivorship bias in research: we primarily see those who, despite everything, still manage to remain visible.
The case of pro-Palestinian activists thus highlights a broader problem. When platforms decide which content gains reach and which does not, political visibility itself becomes a contested resource. The fact that this control is increasingly taking place behind the scenes, without labelling and without effective avenues for appeal, should alarm us.