Tech Giants Wage War on Misinformation in Australia, Removing Thousands of Videos and Ads
Australia’s digital landscape has become a battleground against misinformation, with major technology companies stepping up their efforts to combat the spread of false and misleading content. Thousands of videos, deceptive advertisements, and fake profiles originating in Australia have been purged from online platforms in the past year, a testament to the growing concern surrounding the proliferation of misinformation. This concerted action comes as part of a voluntary industry commitment to the Australian Code of Practice on Disinformation and Misinformation, which requires participating companies to submit transparency reports detailing their efforts.
Leading the charge against harmful content were video-sharing giants TikTok and YouTube, which collectively removed over 25,000 videos containing false and misleading information. These platforms, known for their vast reach and potential for viral spread, have become prime targets for misinformation campaigns. Meanwhile, Meta and Google, dominant forces in online advertising, focused their efforts on tackling unverified and deceptive election ads, identifying and removing thousands of such ads ahead of crucial elections. These revelations underscore the significant role these platforms play in shaping public discourse and the potential for manipulation.
The transparency reports, published Thursday, provide a glimpse into the scale of the problem and the measures being taken to address it. Eight major tech companies, including Google, Meta, Twitch, Apple, and Microsoft, participated in this initiative, outlining their strategies for identifying and removing misleading content, implementing safeguards for users, and collaborating with fact-checking organizations. However, notably absent were detailed reports from social media platforms X (formerly Twitter) and Snapchat, raising concerns about transparency and accountability within the industry.
TikTok’s transparency report revealed the removal of a staggering 8.4 million videos from its Australian platform in 2024, including over 148,000 videos flagged as inauthentic. Of particular concern were the nearly 21,000 videos that violated the company’s "harmful misinformation policies," highlighting the platform’s vulnerability to the spread of potentially damaging content. Notably, TikTok reported that 80% of these harmful videos were removed proactively, before users could view them, demonstrating the effectiveness of automated detection and early removal strategies.
Google’s efforts focused heavily on YouTube, where over 5,100 misleading videos originating in Australia were removed, a small fraction of the more than 748,000 misleading videos removed globally. This highlights the international scope of the misinformation challenge and the need for coordinated global action. Election integrity also emerged as a key concern, with Google rejecting over 42,000 political ads from unverified advertisers in Australia. Meta echoed these concerns, reporting the removal of over 95,000 ads for non-compliance with its policies on social issues, elections, and politics.
Meta’s comprehensive approach to combating misinformation included the removal of over 14,000 ads in Australia that violated its misinformation rules, as well as the takedown of 350 Facebook and Instagram posts for spreading false information. The company also applied warning labels to 6.9 million posts based on fact-checks from partner organizations, providing users with context and encouraging critical evaluation of online content. However, Meta’s recent announcement that it is ending fact-checking in the US raises questions about the future of these practices in Australia, potentially leaving a gap in efforts to combat misinformation.
The fight against misinformation presents a complex balancing act for tech companies, requiring them to weigh the importance of free expression against the need to protect users from harmful content. Shaun Davies, the Digital Industry Group’s code reviewer, acknowledged this challenge, emphasizing the difficulty of moderating online content without stifling legitimate discourse. He also highlighted the growing role of artificial intelligence (AI) in both the creation and detection of misinformation, noting that some companies are leveraging AI tools to identify and flag potential violations. The rise of AI-generated content and its potential for misuse, particularly in convincing deepfakes and fabricated political ads, represents a new frontier in this fight, one that will demand innovative solutions and proactive measures.