Social Media Platforms Fail to Curb Disinformation and Hate Speech During India-Pakistan Conflict

The recent escalation of tensions between India and Pakistan has witnessed a surge in disinformation, hate speech, and censorship across digital platforms. The Association for Progressive Communications (APC) has expressed grave concern over the inadequate response of major social media companies to this alarming trend. These platforms' business models, which prioritize engagement and profit maximization over safety and human rights, have exacerbated the spread of harmful content, particularly during this sensitive period. The lack of effective moderation and the amplification of polarizing content have created a dangerous online environment, fueling animosity and undermining peacebuilding efforts.

The surge in online hate speech and misinformation has targeted vulnerable communities, particularly religious and gender minorities. Anti-Muslim rhetoric, fueled by right-wing accounts, has proliferated in India, contributing to a global rise in Islamophobia. This rhetoric poses direct threats to the safety and security of marginalized communities like Kashmiris and Indian Muslims. Simultaneously, gender-based abuse, employing dehumanizing language and slurs, has escalated on both sides of the border, further deepening societal divisions and perpetuating harmful stereotypes. This toxic online environment not only harms individuals directly targeted but also undermines social cohesion and trust.

Social media platforms, including X (formerly Twitter), Facebook, YouTube, and TikTok, have failed to implement effective measures to counter the spread of harmful content. X’s recent statement condemning the Indian government’s account blocking orders, while welcome, has not been followed by concrete action to curb misinformation. The platform’s community notes feature, intended to combat misinformation, has proven ineffective and easily manipulated. Similarly, other major platforms have not taken meaningful action against hate speech and disinformation, demonstrating a lack of commitment to user safety, particularly during times of conflict.

The inadequate response of social media companies is rooted in their business models, which prioritize engagement and profit. Opaque algorithms and monetization policies amplify divisive and inflammatory content, creating a self-perpetuating cycle of outrage and hostility. The architecture of these platforms, designed to maximize user engagement, inherently rewards sensationalism and polarization, making harmful content a feature, not a bug. This crisis is not an isolated incident but rather a stark illustration of how platform logics consistently undermine safety and accountability. The failure to address these systemic issues necessitates urgent structural reforms.

The current situation mirrors previous instances where social media platforms have played a detrimental role in conflicts, such as the spread of anti-Rohingya propaganda in Myanmar. In the India-Pakistan context, platforms have amplified unverified and sensationalist content, exacerbating nationalist sentiments and deepening divisions. Furthermore, they have enabled state-sponsored censorship and suppression of dissent, particularly through the Indian government's directive to block thousands of accounts on X, including those of news organizations, journalists, and fact-checkers. This crackdown on online expression significantly restricts the free flow of information, which is crucial during a volatile situation. Pakistan's recent lifting of its ban on X, ostensibly to participate in the "narrative war," further complicates the situation, as does its blocking of Indian YouTube channels and websites.

To address these critical issues, the APC calls for urgent action from social media platforms. These platforms must establish crisis protocols and safeguards against extremism, including rapid moderation escalation and independent oversight. They must protect free expression by resisting state censorship orders and publicly documenting takedown actions. Mandatory human rights due diligence, including regular impact assessments and mitigation measures, is essential. Transparency in algorithmic practices and equitable content moderation, including investments in multilingual and context-aware moderation, are crucial. Finally, platforms must proactively demonetize harmful content by disabling ad revenue and algorithmic boosting for accounts repeatedly flagged for violations. These measures are necessary to ensure that social media platforms contribute to a safer and more informed online environment, especially during times of conflict and crisis.
