Decisions Without Reasons: The Achilles’ Heel of Social Media Moderation

by Fernanda Dos Santos Rodrigues Silva, Luiza Correa de Magalhães Dutra, Paulo Rená da Silva Santarém

Have you ever had a post removed or flagged as inappropriate on a social network and didn’t understand why? Well, you’re not alone. Our lack of information goes beyond how this process works: we also lack knowledge of the history of content moderation decisions, i.e., the reasons behind digital platforms’ interventions in posts made by users on social media. This gap prevents us from understanding the main problems users face with the procedures of this moderation.

Companies publish transparency reports periodically, but most of these documents only present aggregate figures, such as the amount of moderated content, the number of user appeals, and compliance with court decisions. The actual situations and reasons behind the decisions that guide platform actions are not made public.

What do social media users complain about?

To shed light on this data, we at the Institute for Research on Internet and Society (IRIS) conducted a study of the main complaints made by users of Facebook, Instagram, Twitter, TikTok and YouTube on the Reclame Aqui platform, which is nationally recognized as a mediator between customers and companies for resolving issues in Brazil. We collected over a thousand complaints in total and thoroughly analyzed 449 that we identified as being of interest to our research.

Among the complaints we analyzed, a staggering 54.34% were concerned with the moderation procedure in cases of removal of social media posts and account suspension/blocking. This figure highlights how the opacity of this process contributes to user distrust and dissatisfaction, negatively impacting the public perception of the platforms.

More than that, this opacity harms both users and platforms: users are uncertain about how moderation works, while platforms face public distrust and speculation about their practices. Although full disclosure could be exploited by malicious actors, a basic level of transparency and adherence to due process could help balance the interests of both sides.

Automatic responses that don’t explain why

But what moves a user to file a formal complaint? Inadequate justification was the main reason, accounting for 128 cases – that is, 52.46% of the complaints about content moderation. In other words, a whole series of user complaints arises from the lack of transparency and clarity in the reasons digital platforms present when moderating content. This category includes, for example, cases of automatic responses that did not properly engage with the arguments of the person who filed an appeal:

“However, I believe that all the videos posted on the channel and all my actions within the platform comply with the [platform name hidden] Terms of Service and Community Guidelines.

Therefore, I would like the platform to explain what these violations were and when they occurred, and if possible, I request a review of the decision and the reactivation of my channel.”

We know digital platforms depend heavily on automation due to the massive volume of content uploaded every minute. However, complaints suggest that the absence of human oversight can undermine users’ right to appeal. When users challenge a moderation decision, they expect their arguments to be considered, but automated responses often fail to address these points or provide adequate justification. This can lead to frustration and render the appeal process ineffective, ultimately compromising users’ right to due process.

Figure 1 – Complaints about content moderation procedures regarding post removals and account suspensions/blocks

Source: the authors.

As shown in the graph above, the second most frequent issue, accounting for 22.54% of the complaints about moderation, involves cases in which the platform did not respond to the appeal against a moderation decision. This silence may fuel the perception that platforms act arbitrarily, which is why companies should adopt a more rigorous and transparent approach to content moderation.

Sometimes, you may not even know

At a time when many people use social networks as the main channel of communication for their work, the lack of response to appeals against moderation decisions can leave them completely helpless and affect their earnings. The same can be said of cases in which the user noticed that content had been removed but claims not to have received any notification about it (9.02%).

(…) Recently, I discovered that several of my videos, which accumulated more than 2 million likes, were deleted without any prior notice or adequate explanation. These videos represented hours of creative work, dedication and interaction with my followers, and their abrupt removal has negatively impacted my presence on the platform.

Furthermore, it is extremely disturbing to see that these deletions were carried out without my consent and, until now, I have not received any convincing justification from the support team of [platform name hidden]. The lack of transparency and effective communication from the platform is unacceptable and disrespectful to content creators who invest their time and effort in contributing to the [platform name hidden] community. (…)

The percentages of complaints in the categories of inaccessible platform design (4.51%) and lack of appeal/contest/review tools (7.38%) are relatively low, but they point to significant areas of concern. Constantly reviewing moderation processes and optimizing platform interfaces and functionalities is essential not only to keep these aspects effective and accessible but also to make platform actions increasingly transparent and aligned with users’ needs.

The problem is not just removing posts without giving reasons

Finally, our study also found that 45.66% of the complaints analyzed did not deal with the removal of social media posts or account suspension/blocking. These complaints instead address other aspects of content moderation. The most frequent issues are general dissatisfaction with the moderation process (36.1%) and requests for moderation of third-party content (30.24%).

Figure 2 – Other complaints about content moderation on social media platforms not related to post removals and account suspensions/blocks

Source: the authors.

In general complaints, users expressed frustration with how moderation operates (or fails to operate), including dissatisfaction with content recommendations made by the platform’s algorithm. Complaints about the lack of clarity in these recommendations, especially when users don’t recall engaging with similar content, highlight concerns about the opacity of recommendation algorithms. While platforms often justify data collection on the grounds of personalizing the user experience, these complaints reveal a failure to deliver clear, effective personalization. Additionally, many users complained about moderation actions applied to third-party content, such as influencer channels or accounts they follow.

The other categories include problems with monetization (10.24%), problems with the recommendation or reach of content (10.24%), restriction of functionalities on the platform due to a moderation decision (10.24%), problems generated by the platform’s age limitations (1.95%), and moderation requests (1.46%), each of which offers ample ground for further research and exploration.

To learn more about our research, visit the IRIS website and read our final report. Also, stay updated on all our discussions about online content moderation.

Authors

Fernanda Dos Santos Rodrigues Silva

Paulo Rená da Silva Santarém

Luiza Correa de Magalhães Dutra