3Ds of Artificial Intelligence and Social Peace

Artificial intelligence (AI) has revolutionised various sectors of life, including defence, the economy, and research and development. Yet despite its immense advantages, the technology has a darker side that compromises the integrity of data. Nowhere is this more obvious than in Pakistan, where the development of AI technology has sparked a tsunami of deepfakes, defamation and disinformation (3Ds) that poses serious challenges for society.

AI software can generate false information tailored to maximise user engagement, with no concern for its veracity, thereby accelerating its spread. The AI algorithms behind search engines such as Google and social networking platforms (Twitter, Facebook, WhatsApp) rank content by user engagement rather than by vetting its accuracy. This flawed approach feeds a vicious circle of deception and ignorance. For instance, during the COVID-19 pandemic, claims about the efficacy of the senna makki herb as a remedy for the virus spread rapidly across Pakistan's social media because of these algorithms. Likewise, deepfakes, produced with freely accessible AI software, make fabricated content appear realistic in order to intentionally spread false information and mislead the target audience. One instance of AI-generated disinformation was the circulation of fabricated images of Imran Khan on social media after his arrest; a fact-finding report showed that the images carried the watermark of an AI application called Midjourney. AI-generated defamation campaigns are also deployed for the political victimisation of individuals: by training AI software on a victim's prior speeches, perpetrators can produce fake audio clips designed to defame the person and undermine his or her credibility.

Artificial intelligence-based 3Ds in Pakistan are driven by both external and internal players. Externally, foreign state-sponsored organisations and non-state actors exploit Pakistan's failure to take concrete protective measures (firewalls, network security, encryption, etc.) in the online environment to disseminate false information and sway public opinion, jeopardising political stability and fomenting unrest. The EU DisinfoLab report (2020) is a case in point: it documented how the Indian government-backed Srivastava Group generated manipulative content through fake media outlets, think tanks and NGOs in order to defame Pakistan at international fora. AI-powered algorithms and tools played a significant role in that 3Ds campaign, as information lifted from the European Parliament website was turned into articles through automated content generation, meaning no human had written them. Internally, various actors, including rival political parties and extremist groups, use AI-based deepfakes to delegitimise opponents and spread false information, influencing public opinion in their favour and weakening confidence in democracy as a whole. Additionally, certain Pakistani media organisations fall prey to AI-based 3Ds by prioritising sensationalised content, fuelling polarisation in society at the cost of social cohesion.

AI-based 3Ds hold severe implications for Pakistan, as they induce polarisation, intolerance and de-democratisation. Divisive narratives that target issues such as religion or ethnicity fuel hostility and mistrust among different groups. This further breeds intolerance and prejudice, eroding the principles of inclusivity and tolerance essential for a diverse democracy. These tactics also threaten democracy by undermining trust in democratic institutions through distorted information and the discrediting of political figures. Consequently, citizens may disengage from the political process, weakening the democratic fabric of the country. National integration is hampered as well, since 3Ds exploit existing fault lines, deepening divisions and hindering efforts to foster a shared national identity.

The government's initiatives to limit the negative effects of AI have not kept pace with the spread of 3Ds inside Pakistan. Even the National Cyber Security Policy (2021) does not cover all aspects of AI-generated threats: it refers to AI as a cybersecurity threat but fails to explain how AI can undermine cyber security through data manipulation rather than data breaches. In the absence of a specific law dealing with the AI-generated 3Ds threat, policymakers have produced the first draft of the National Artificial Intelligence Policy (2023), which speaks in general terms of promoting the ethical use of AI for socio-economic development in every sector of life but does not define the measures that could be employed to prohibit its unethical use. Similarly, the Prevention of Electronic Crimes Act (2016) was enacted with the aim of punishing culprits who use 3Ds against politicians and government institutions. In practice, however, the relevant authorities apply the act mainly to AI-based 3Ds campaigns that target government institutions and politicians, which reflects their inability to implement it in its true spirit, especially against campaigns that target the general public.

Apart from government measures, two non-governmental initiatives, AFP Fact Check Pakistan and Soch Fact Check (a member of the International Fact-Checking Network, IFCN), offer fact-checking services in Pakistan through their accounts on social media platforms. The Centre for Excellence in Journalism and the Global Neighbourhood for Media Innovation also organise seminars and workshops to help local journalists develop their fact-checking skills. However, no organisation runs awareness programmes to keep the vulnerable segments of society from becoming easy targets of 3Ds campaigns. These segments have limited exposure to alternative viewpoints yet easy access to social media platforms such as Twitter, WhatsApp and Facebook, which makes it difficult for them to distinguish true news from false and turns social media into a new venue for the spread of 3Ds.

To mitigate the negative effects of AI-based 3Ds, the architects of the National Artificial Intelligence Policy need to define the unethical use of AI and spell out safeguards against AI-generated threats, such as filtering mechanisms that separate false information from genuine content. The Prevention of Electronic Crimes Act needs to address 3Ds in all domains, including social and economic issues, not only the political sphere. The government may also establish fact-checking departments and promote digital literacy and awareness campaigns to foster social peace through inclusive narratives.

The writer is pursuing her MPhil at National Defence University (NDU), Islamabad, and is currently associated with the Islamabad Policy Research Institute (IPRI).
