Bangladesh’s Digital Battlefield: Gendered Disinformation Threatens Democratic Participation

The 2024 general elections in Bangladesh witnessed a disturbing trend: the weaponization of Facebook for disseminating gendered disinformation and abuse, disproportionately targeting women and marginalized communities. A comprehensive study by the Tech Global Institute, analyzing nearly 25,000 Facebook posts, has exposed the mechanics and impact of this digital assault on democratic participation. The study challenges traditional disinformation frameworks developed in the Global North, highlighting the need for context-specific approaches when analyzing online abuse in countries like Bangladesh. With digital platforms becoming increasingly central to political discourse, the online spread of gendered abuse translates into real-world consequences, silencing voices and undermining democratic processes.

The study reveals a calculated campaign of online harassment, ranging from body-shaming and homophobic slurs to sexualized disinformation and threats. A staggering 70% of the 1,400 posts flagged as gendered attacks contained sexual insinuations, while others employed discriminatory remarks based on religion, ethnicity, or sexual orientation. This targeted abuse primarily focused on female political figures, particularly those from opposition parties, attempting to discredit them and deter women’s participation in politics. This echoes findings by ActionAid, indicating that two-thirds of Bangladeshi women experience online harassment.

The research underscores the inadequacy of existing methodologies for identifying abusive content. These methods, often reliant on English-language lexicons or US-centric case law, fail to capture the nuances of abuse in Bangladesh’s digital landscape. For example, otherwise benign Bengali terms are weaponized to stigmatize and feminize political actors.

Furthermore, the research highlights the technical challenges in detecting and mitigating this abuse. Automated keyword filters proved ineffective due to the linguistic complexity of Bengali and the prevalence of "Banglish," a hybrid of Bengali and English. The research team therefore adopted a "human-in-the-loop" methodology, developing a continuously evolving corpus of harmful terms validated by expert review and iterative testing. This approach proved more effective at identifying context-specific abusive content.

The study also uncovered a complex web of opaque affiliations and deceptive practices operating within the Bangladeshi Facebook ecosystem. Hundreds of pages masquerading as legitimate news outlets or community organizations were identified as hubs for coordinated disinformation campaigns. This network employed tactics such as disseminating identical content – often manipulated images or deepfakes – across multiple pages and groups within seconds, rapidly amplifying malicious narratives and reinforcing harmful stereotypes.
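The study does not publish its detection tooling, but the amplification pattern it describes (identical content surfacing on many distinct pages within seconds) can in principle be surfaced by fingerprinting post text and counting how many pages shared the same fingerprint inside a short window. The record format, thresholds, and function names below are illustrative assumptions, not the study's actual method:

```python
import hashlib
from collections import defaultdict

def coordinated_clusters(posts, window_seconds=30, min_pages=3):
    """Group posts by an exact-content fingerprint, then keep groups where
    several distinct pages posted the same content within a short time
    window -- a rough signal of coordinated amplification."""
    by_hash = defaultdict(list)
    for page, ts, text in posts:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        by_hash[digest].append((page, ts))
    clusters = []
    for digest, hits in by_hash.items():
        hits.sort(key=lambda h: h[1])            # order by timestamp
        span = hits[-1][1] - hits[0][1]          # seconds from first to last post
        pages = {page for page, _ in hits}
        if len(pages) >= min_pages and span <= window_seconds:
            clusters.append((digest, sorted(pages)))
    return clusters

# Illustrative records: (page_name, unix_timestamp, post_text)
posts = [
    ("PageA", 0, "same manipulated image caption"),
    ("PageB", 5, "same manipulated image caption"),
    ("PageC", 12, "same manipulated image caption"),
    ("PageD", 9999, "unrelated post"),
]
clusters = coordinated_clusters(posts)
# PageA/PageB/PageC posted identical text within 12 seconds -> one cluster.
```

Exact hashing only catches verbatim copies; real pipelines would also need near-duplicate matching to handle lightly edited reposts.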
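The "human-in-the-loop" corpus the researchers describe can be sketched as a simple triage loop: a seed lexicon flags what it can, everything else that needs scrutiny goes to human reviewers, and confirmed variants (misspellings, "Banglish" transliterations) are fed back into the corpus for the next pass. The terms and function names here are hypothetical placeholders, not the study's actual lexicon or code:

```python
import re

def build_matcher(lexicon):
    """Compile one case-insensitive pattern over the current term corpus."""
    escaped = (re.escape(t) for t in sorted(lexicon, key=len, reverse=True))
    return re.compile("|".join(escaped), re.IGNORECASE)

def triage(posts, lexicon):
    """Split posts into machine-flagged hits and a queue for human review."""
    matcher = build_matcher(lexicon)
    flagged, review_queue = [], []
    for post in posts:
        (flagged if matcher.search(post) else review_queue).append(post)
    return flagged, review_queue

# Placeholder seed term; a real corpus holds Bengali and "Banglish" variants.
lexicon = {"slur_a"}
posts = ["ei slur_a ta dekho", "sl*r_a spelled to evade filters"]
flagged, review_queue = triage(posts, lexicon)
# The obfuscated variant lands in the review queue; once a reviewer
# confirms it as abusive, it joins the corpus for the next pass:
lexicon.add("sl*r_a")
```

The key design point is that the machine step never has final say over borderline content: unmatched posts are routed to reviewers, and the lexicon grows only through validated additions, which is what makes the corpus "continuously evolving."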

The analysis of targeted attacks revealed a clear political bias: 93% were directed at individuals unaffiliated with the ruling Awami League. This suggests a strategic deployment of gendered abuse to suppress dissent and consolidate power. While men were also targeted, often with slurs questioning their masculinity, women bore the brunt of the attacks. The top ten most targeted individuals were primarily female opposition politicians, subjected to relentless personal attacks and fabricated allegations. Gender-diverse communities were not spared either. Despite legal recognition of hijras as a third gender in 2013, derogatory terms remain prevalent in online political discourse. Even prominent figures like Prime Minister Sheikh Hasina, while receiving supportive coverage, were not immune to this toxic online environment. The study raises concerns about the long-term impact of this gendered online abuse, fearing it may discourage future generations of women and gender-diverse individuals from participating in political life or engaging in civic discourse.

While Bangladesh has legal frameworks in place to address online abuse, including the Cyber Security Act 2023, enforcement remains inconsistent. Opposition figures often face dismissal of their complaints, and victims are burdened with disproving defamatory claims, creating significant barriers to justice. Meta’s policies against hate speech and harassment, while offering theoretical protection, are criticized for inconsistent enforcement and an inability to detect contextually coded abuse in non-English languages. Automated systems struggle to identify subtle or misspelled slurs, and self-reporting mechanisms place an undue burden on victims.

The study calls for a multi-pronged approach to address this growing threat to democratic participation. This includes improving platform accountability, developing more sophisticated detection mechanisms that account for linguistic and cultural nuances, strengthening legal frameworks and enforcement, and empowering individuals and communities to combat online abuse. It also emphasizes the need for further research to fully understand the complex dynamics of online abuse and its impact on democratic processes in diverse cultural contexts.

The study’s findings paint a concerning picture of the state of online discourse in Bangladesh and the urgent need for action. The weaponization of gendered disinformation and abuse poses a serious threat to democratic participation, particularly for women and marginalized communities. Addressing this challenge requires collective action from platforms, policymakers, civil society organizations, and individuals to create a safer and more inclusive digital environment where all voices can be heard without fear of harassment or intimidation. Only then can the promise of democratic participation be fully realized in the digital age.
