Opinion

Technology and Misinformation in Pakistan

Exploring how technology fuels misinformation in Pakistan and the urgent need for digital literacy and fact-checking.


Manahil Khan



Communication has been revolutionized around the globe, but in Pakistan technology has also fueled a growing ecosystem of misinformation. Algorithms, social networks, and viral apps such as Instagram and Facebook allow fake news to reach millions of people within hours or even seconds, while also lowering the threshold for what counts as believable. At the same time, low digital literacy and weak media accountability mean that many Pakistani citizens lack the resources or the motivation to question the endless stream of dubious claims. These complexities, compounded by policy gaps and the deliberate efforts of politicians, will not be solved by technological cures alone.

The answer lies in a comprehensive response: better platform safeguards through algorithmic transparency and content policies attuned to local conditions; investment in education as the first step toward training critical thinking; and the even-handed application of media regulations that protect the truth without stifling freedom of expression. Only by combining these social and institutional measures with technological ones can Pakistan hope to contain the amplification of falsehoods and protect its national discourse.

Beyond open platforms, private messaging apps form a central node of Pakistan's misinformation cycle. WhatsApp in particular is used widely and intensively; studies show it serves as a major means of communication in Pakistan during times of crisis. Crucially, the app's encrypted, closed-group architecture seriously hinders any attempt to moderate content or verify facts. The report "Countering Disinformation in Pakistan on WhatsApp: Lessons and Recommendations for Digital Journalism," by Waqas Naeem and Adnan Rehmat, analyzed usage on the platform during the COVID-19 surge of 2020, drawing on a set of 341 public WhatsApp groups and 7,000 messages, and found that about 14 per cent of messages contained verifiably false or misleading information. Notably, misinformation was found to have a longer lifespan than true information; posts containing false claims persisted the longest. Once a rumor takes hold, it becomes even harder to correct or fact-check. Given this pervasive distrust, the report observes, Pakistani journalists consistently rank WhatsApp as the least reliable source of news.

Pakistani fact-checkers have identified a pattern in which fake news circulates first in WhatsApp groups and only then spreads to public platforms. Naeem and Rehmat note that fake news typically originates in closed WhatsApp chats, transfers to Twitter, and only afterwards spreads onto Facebook, by which time it has already achieved mass reach. By the time mainstream platforms encounter a rumor, it is spreading like wildfire and is too late to root out; any countermeasures through official channels or platform labeling take effect only after a substantial number of people have already fallen prey to the fake news.

This phenomenon has an observable effect on Pakistan's sociopolitical space. Misinformation relayed through WhatsApp has sparked social unrest and, in some instances, violent clashes. Organized misinformation has been found to incite mob violence against people accused of blasphemy. An event report, "Big Tech and the Misinformation Crisis in South Asia," indicates that Pakistani digital platforms have been used to orchestrate collective aggression, with fabricated content created online amplifying religious enmity. An article published by Accountability Lab Pakistan, "The Devastating Consequences of Misinformation, Disinformation, and Fake News on Society," cites the 2017 lynching of university student Mashal Khan, in which fake Facebook profiles and fabricated claims that he had committed blasphemy led a mob of fellow students to drag him from his residence and attack him. As Zohra Khatoon writes, fake news cost Mashal Khan his life in seconds. These episodes reinforce the point that, in Pakistan as elsewhere, misinformation is not idle talk but can trigger violence very fast. Rumors tied to sectarian incidents or matters of honor have likewise turned fatal once allowed to spiral through unmonitored online attention.

These concerns became acute during Pakistan's 2024 general election. According to a Voice of America report, "Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say," by Neelofer Mughal, AI-generated fake news and images spread on social media in the weeks before the February 8 polling date. These included highly realistic edited video and audio files purporting to show public officials making inflammatory statements or calling on the masses to mobilize. One viral alteration, in which Imran Khan appeared to urge supporters to boycott the election, was immediately denounced by his party as false. Other AI-generated clips showed candidates proclaiming boycotts or making provocative remarks, forcing the politicians involved to issue public denials. The report also warned that high-profile leaders are particularly vulnerable to these manipulations: media researcher Sadaf Khan found that once a prominent figure is featured in a deepfake, the likelihood of the public being deceived grows significantly. The risk is that voters may unknowingly act on false communications and lose confidence in the electoral process. Field reports suggest this happened. Islamabad-based journalist Asad Toor observed that the AI videos appeared to energize Khan's base: a massive PTI voter turnout, prompted by an AI-generated appeal attributed to Imran Khan, shifted the electoral balance, he said. On this basis, viral AI material may have had a considerable impact on voter behavior.

According to the same report by Neelofer Mughal, Pakistan's institutional defenses were also sternly tested by this wave of disinformation. The day before the election, the caretaker government imposed a temporary internet blockade in troubled districts, allegedly to counter terrorism threats. Yet the shutdown simultaneously prevented media houses and fact-checking organizations from debunking fabricated accounts efficiently. While the Election Commission of Pakistan ordered broadcasters and news channels to comply with its code of conduct, minimal regulation was enforced on social media and messaging applications. Observers warned that such blackouts risk making matters worse, giving more ground to rumors and impairing corrective mechanisms.

These recent developments in Pakistan reflect a larger trend: artificial intelligence tools are now advanced enough to drive down the cost of generating convincing fake news. Analysts note that AI-produced deepfakes, whether visual, audio, or textual, already have a noticeable effect on the information ecosystem. According to the analysis "Gauging the AI Threat to Free and Fair Elections," by Shanza Hassan and Abdiaziz, election campaigns in several democracies have fallen victim to AI-mediated robocalls, synthetic voice impersonations of politicians, or fabricated videos of real people delivering speeches they never gave. Pakistan's experience shows that this threat is not abstract; news coverage from neighboring India and from Western democracies records AI scripts and deepfake videos that swayed opinion and deepened sociopolitical divides, such as fake videos of Bollywood stars endorsing a party that went viral during India's 2024 election. Because generative models for voice, face, image, and text are becoming more affordable, actors capable of crafting fake content at scale can spread false narratives around the globe. Pakistan must therefore prepare to face not only textual and visual disinformation but also audio deepfakes and AI-enhanced video, which pose new challenges for fact-checkers and experts.

According to Naeem and Rehmat, undertaking this challenge requires a multi-pronged response. Technology companies need to improve their platform safeguards. One example is the recent public-awareness campaigns WhatsApp has run in countries like Pakistan, including full-page newspaper advertisements teaching users how to spot fraudulent messages. Facebook and TikTok claim to use AI tools to filter and remove misinformation and hate speech, yet because most of the conversation is private, such tools are inherently limited. Civil society and journalists themselves occupy the point of intersection: Pakistani NGOs have run media-literacy classes and published locally tailored guides for debunking fake news. Reporters polled in Pakistan strongly favor more fact-checking resources; one study found that 68 per cent of digital reporters agreed that better fact-checking training was their single most urgent need to counter disinformation. In short, even those working at the forefront of news production feel ill-equipped and are demanding more institutional support.

Policy measures have lagged behind the technological shift. Pakistan's main cyber law, the Prevention of Electronic Crimes Act, contains clauses that can be used against harmful online activity, but it is applied irregularly. Naeem and Rehmat recommend stricter regulatory requirements, such as disclosure of paid political content and greater accountability for social media platforms. Media literacy emerges repeatedly as the weakest element. A large number of Pakistani internet users, especially those in rural areas, lack the skills to distinguish real information from fake. Efforts to promote responsible media consumption should be expanded, for instance by incorporating lessons on spotting fake news into colleges and universities, or by running workshops that teach students and professionals how to identify it. Coordination among government bodies, civil society, and technology companies is also crucial; the Ministry of Information and related non-governmental organizations have already organized several conferences on media literacy, and these public-private collaborations should continue to evolve.

In brief, the convergence of technology and politics in Pakistan has sharpened the challenge of misinformation. Rapid growth in internet access and mass mobile use have made much of the younger population dependent on online news outlets; platform algorithms prioritize the most engaging, and often most sensational, content; encrypted messaging apps like WhatsApp allow rumors to go unchecked; and AI-generated messages are already rapidly reaching the Pakistani population. The result has been disastrous: eroded trust in institutions, a polarized public discourse, and, in some cases, violence traceable to online lies.

Looking ahead, misinformation will demand constant attention in Pakistan. Governments and platforms across the globe are still learning how to manage the risks of generative AI. Meanwhile, civil society activists and media stakeholders in Pakistan can launch initiatives that promote better digital literacy and stricter fact-checking practices. AI-generated content is already changing how political information flows, and Pakistan's experience demonstrates that such changes can happen fast and at scale. Coordinated action to raise awareness, enhance transparency, and hold those who propagate falsehoods accountable will make the difference in sustaining healthy public debate.

Tags

#Misinformation #Pakistan #Technology #FactCheck #DigitalAwareness
