Bangladesh’s Digital Battlefield: Gendered Disinformation Plagues 2024 Election
The 2024 general elections in Bangladesh witnessed a disturbing trend: the weaponization of social media, particularly Facebook, to spread gender-based disinformation. A comprehensive study by the Tech Global Institute reveals the extent of this online abuse, highlighting the inadequacy of existing frameworks designed primarily for Western contexts when applied to the complex socio-political landscape of Bangladesh. The research, based on an analysis of almost 25,000 Facebook posts, exposes how coordinated networks exploited the platform to disseminate targeted attacks, disproportionately affecting women and marginalized communities. This digital onslaught had tangible real-world consequences, reinforcing existing societal biases and potentially deterring future female and gender-diverse participation in politics.
The study unveils a disturbing pattern of gendered attacks directed primarily at political figures, especially women from opposition parties. Over 70% of the 1,400 posts flagged as gendered attacks contained sexual insinuations, while nearly 20% included discriminatory remarks based on religion, ethnicity, or sexual orientation. The remaining 10% focused on behavioral shaming, often without factual basis. This targeted abuse ranged from body-shaming and homophobic slurs to fabricated sexualized disinformation and explicit threats. Unlike more overt forms of harassment, the campaign often relied on coded language and culturally specific slurs, which slip past automated systems built on English-language lexicons and fall outside moderation policies grounded in Western legal frameworks.
The Tech Global Institute’s research underscores the limitations of current methodologies for identifying abusive content. Conventional keyword filters, built largely around English-language terms, fail to capture the nuances of Bengali and the prevalent use of “Banglish,” a hybrid of Bengali and English. The fluid, relational nature of abuse within the Bangladeshi digital ecosystem poses a further challenge: seemingly neutral Bengali terms can be weaponized to feminize and stigmatize political actors. “Shamakami” (homosexual), for instance, is routinely deployed as a derogatory label, while comparisons of public figures to pornographic icons serve as a form of character assassination. Recognizing these gaps, the research team adopted a “human-in-the-loop” methodology, developing an evolving corpus of harmful terms, validated by expert review and iterative testing, to identify and categorize abuse more reliably.
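The report does not publish its annotation tooling, but the loop it describes, a seed lexicon of harmful terms, automated matching, expert validation, and iterative expansion of the corpus, can be sketched roughly as follows. This is a minimal illustration under assumptions of our own: the class, function, and placeholder names are hypothetical, and only “shamakami” comes from the report itself.

```python
import re
from dataclasses import dataclass, field


@dataclass
class Lexicon:
    """Evolving corpus of harmful terms; entries are added only after expert review."""
    terms: set[str] = field(default_factory=set)

    def add_validated(self, term: str) -> None:
        self.terms.add(term.lower())


def normalize(text: str) -> str:
    # Lowercase and collapse whitespace; a production pipeline would also need
    # Bengali script normalization and handling of common Banglish spelling variants.
    return re.sub(r"\s+", " ", text.lower()).strip()


def flag_post(post: str, lexicon: Lexicon) -> list[str]:
    """Return lexicon terms found in the post (candidate gendered-abuse markers)."""
    text = normalize(post)
    return [term for term in lexicon.terms if term in text]


# Seed lexicon: "shamakami" is cited in the report; the other entries are placeholders.
lexicon = Lexicon()
for seed in ["shamakami", "<coded-term-1>", "<coded-term-2>"]:
    lexicon.add_validated(seed)

# Human-in-the-loop pass: flagged posts go to expert reviewers, who confirm or
# reject each flag and propose newly observed coded terms for the next iteration.
posts = [
    "Sample post using shamakami as a slur against a candidate",
    "আরেকটি উদাহরণ পোস্ট",  # another sample post, no lexicon match
]
review_queue = [(p, hits) for p in posts if (hits := flag_post(p, lexicon))]
for post, hits in review_queue:
    print(hits, "->", post[:40])
```

Terms confirmed by reviewers would be fed back through add_validated before the next pass, which is the iterative refinement the study describes.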
Beyond individual acts of harassment, the study revealed a sophisticated network of disinformation operations. Approximately 700 Facebook pages masquerading as legitimate news outlets or community organizations served as hubs for coordinated campaigns. Identical content, including doctored images and deepfake videos, was disseminated across multiple pages and groups within seconds, amplifying malign narratives and reinforcing harmful gender stereotypes. This coordinated strategy, coupled with the coded nature of the abuse, points to a deliberate attempt to manipulate public perception and suppress dissent. The fact that 93% of these attacks targeted individuals unaffiliated with the ruling Awami League suggests a strategic use of gendered disinformation to consolidate power.
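The study’s network analysis is likewise not released as code, but the core signal it describes, identical content surfacing on several distinct pages within seconds, can be approximated with a simple grouping heuristic. The record format, thresholds, and page names below are assumptions for illustration, not the study’s actual method.

```python
import hashlib
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (page_name, timestamp, text content).
posts = [
    ("page_a", datetime(2024, 1, 5, 10, 0, 0), "identical caption for a doctored image"),
    ("page_b", datetime(2024, 1, 5, 10, 0, 4), "identical caption for a doctored image"),
    ("page_c", datetime(2024, 1, 5, 10, 0, 7), "identical caption for a doctored image"),
    ("page_d", datetime(2024, 1, 6, 9, 0, 0), "an unrelated post"),
]


def content_hash(text: str) -> str:
    # Fingerprint normalized text; doctored images and deepfake videos would
    # require perceptual hashing, which plain text hashing cannot capture.
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()


def find_coordinated_clusters(posts, min_pages=3, window=timedelta(seconds=30)):
    """Flag content reposted by at least min_pages distinct pages within a short window."""
    by_hash = defaultdict(list)
    for page, timestamp, text in posts:
        by_hash[content_hash(text)].append((timestamp, page))

    clusters = []
    for digest, items in by_hash.items():
        items.sort()
        pages = {page for _, page in items}
        span = items[-1][0] - items[0][0]
        if len(pages) >= min_pages and span <= window:
            clusters.append((digest, sorted(pages), span))
    return clusters


for digest, pages, span in find_coordinated_clusters(posts):
    print(f"possible coordination: {len(pages)} pages within {span.total_seconds()}s")
```

On the sample records this flags the first three posts as one cluster; a real investigation would combine such timing signals with page-ownership and audience-overlap analysis.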
While men were also targeted, often with slurs questioning their masculinity, women bore the brunt of the online abuse. The ten most frequently attacked individuals were predominantly female opposition politicians, subjected to relentless personal attacks and fabricated allegations of sexual misconduct. This targeted harassment creates a hostile environment for women in politics, potentially discouraging future participation and perpetuating existing gender imbalances in leadership. Gender-diverse communities were also targeted, highlighting the pervasive nature of discrimination within Bangladesh’s digital sphere. Even figures like Prime Minister Sheikh Hasina, while also receiving supportive coverage, were not immune to online attacks, illustrating the widespread and often indiscriminate nature of this digital warfare.
Existing legal frameworks in Bangladesh, including the Cyber Security Act 2023, criminalize online abuse, yet enforcement remains inconsistent, and victims, particularly those affiliated with the opposition, face significant hurdles in seeking redress. Complaints are frequently dismissed, and the burden of disproving defamatory claims rests with the victims, discouraging them from pursuing legal action at all.

Meta’s policies likewise prohibit hate speech and harassment, but their enforcement falls short for non-English languages and culturally specific forms of abuse. Automated systems struggle to detect contextually coded slurs, and self-reporting mechanisms place the burden on victims already facing sustained harassment, a burden compounded by fear of further reprisal or victim-blaming. The Tech Global Institute’s findings highlight the urgent need for stronger platform accountability and better enforcement to curb the insidious spread of gendered disinformation and protect vulnerable communities online. The study also underscores the chilling effect of such attacks, which risk deterring future generations of women and gender-diverse individuals from engaging in political life.