AI-Powered Disinformation Campaign Targets Canadian Climate Action at the Local Level
A sophisticated disinformation campaign leveraging artificial intelligence is sweeping across Canada, targeting municipal governments and aiming to dismantle local climate action plans. This campaign, orchestrated by a group called KICLEI (Kicking International Council out of Local Environmental Initiatives), utilizes a custom-built AI chatbot to generate polished emails, reports, and even speeches designed to persuade councillors to abandon climate initiatives. The group’s primary target is the Partners for Climate Protection (PCP) program, a voluntary net-zero initiative adopted by numerous municipalities across the country. KICLEI opposes the PCP program due to its association with ICLEI Canada, which has ties to the United Nations, fueling the group’s narrative against global agendas.
The campaign’s impact is already being felt. At least 14 municipalities have hosted KICLEI presentations, and some, like Thorold, Ontario, have voted to withdraw from the PCP program altogether. Lethbridge, Alberta, recently voted to significantly reduce its emissions reduction targets. These decisions followed a barrage of KICLEI-generated materials sent to council members, some of which contain misinformation, according to climate scientists. The campaign’s effectiveness lies in exploiting the trust local officials place in constituent correspondence: personalized messages appear to come from concerned residents, masking the centralized nature of the operation.
KICLEI, founded by Freedom Convoy activist Maggie Hope Braun, has developed the "Canadian Civic Advisor" chatbot using OpenAI’s ChatGPT technology. This chatbot is programmed to produce tailored communications that downplay the urgency of climate change, focusing instead on "real pollution" and framing arguments in a moderate, civic tone to maximize persuasiveness. The chatbot’s instructions specifically direct it to “de-emphasize the climate catastrophe narrative” and focus on “practical environmental protection measures” that do not prioritize CO2 reduction. The chatbot also draws from KICLEI’s central repository of reports and articles, some of which contain scientifically inaccurate information.
This AI-driven campaign raises serious concerns about the weaponization of artificial intelligence to spread misinformation. Experts warn that the technology dramatically lowers the cost of disseminating false information, making it easier to influence public opinion and policy decisions. While many advocacy groups use AI, KICLEI’s campaign stands out for its targeted approach and for the ease with which its tactics could be replicated and scaled to other policy areas. A chatbot that can generate personalized, persuasive messages on demand is a powerful tool for manipulating public discourse and undermining evidence-based policymaking.
KICLEI’s campaign extends beyond targeted emails and presentations. The group publishes regular articles on its Substack platform, many of which contain misleading claims about climate science. These articles, disseminated widely to councillors, frequently misrepresent the work of prominent climate scientists. For instance, KICLEI has cited research to falsely claim that there is only a 0.3% consensus among scientists regarding human-caused climate change. In reality, the consensus is closer to 99.9%. The group has also distorted the findings of studies on the role of CO2 and water vapor in climate change, misrepresenting scientific data to downplay the urgency of addressing greenhouse gas emissions.
The sheer volume of KICLEI’s outreach is overwhelming local officials, who describe being inundated with emails and reports. Some councillors have tried flagging the emails as spam, with limited success. The deluge makes it difficult for councillors to discern credible sources and make informed decisions, creating an environment where misinformation can thrive. The personalized framing of the messages compounds the problem, making it hard to distinguish genuine constituent concerns from orchestrated disinformation. This situation underscores the urgent need for strategies to counter AI-powered disinformation campaigns and protect the integrity of democratic processes.