
REI Insights

Misinformation has far-reaching effects: Will ChatGPT widen the citizen trust gap in government?
February 27, 2023
Reading Time: 4 minutes

Misinformation has become a significant issue in recent years: propaganda fueling the war in Ukraine, vaccine misinformation undermining public health efforts during the COVID-19 pandemic, and hoaxes about rigged election results threatening democracy worldwide. Despite ongoing criticism, technology and social media companies like TikTok, Bing, Facebook, and others continue to influence behavior and change how people perceive the world around them, prompting industry and academic professionals to sound the alarm against the manipulation of algorithms. A study by the Pearson Institute and the Associated Press-NORC Center for Public Affairs Research found that: 

  • 95% of Americans identified misinformation as a problem when trying to access important information.
  • 6 in 10 are at least somewhat concerned that their friends or family members have been part of the problem.
  • 75% blame social media users and technology companies.

Misinformation challenges citizens of all ages, education levels, and political perspectives, with damaging effects on various societal sectors, including government, the media, national security, and public health. The emergence of generative AI technologies like ChatGPT could deepen existing concerns while creating new ones, leaving many to wonder what comes next. 

The far-reaching effects of ChatGPT on citizen trust
The use of ChatGPT could have severe negative impacts on citizen trust. While the technology has the potential to streamline communication and improve efficiencies, it also poses a risk to privacy and personal information. Government agencies must be transparent about how they use generative AI technologies and protect citizen data from misuse or abuse by malicious actors. Failure to do so could erode the trust they’ve spent years sustaining and, for some federal agencies, rebuilding.  

Even when data is publicly available, its use can breach contextual integrity, a fundamental principle in legal privacy discussions that requires individuals’ information not be revealed outside the context in which it was originally produced. ChatGPT relies on large amounts of data to generate responses, and with that comes a risk of biases and inaccuracies that could lead to discriminatory or harmful outcomes, especially for non-native English speakers. Suppose, for example, that the data used to train ChatGPT is biased against certain groups or contains inaccurate information. In that case, those individuals could face unfair treatment when interacting with government agencies or other chatbots. Bad training data could also give hate groups room to use ChatGPT to spread misinformation, sowing division across vulnerable populations and communities.  
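To make that idea concrete, a data team might inspect a training corpus before any model is trained on it. The Python sketch below is a minimal, hypothetical illustration: the group terms, the `representation_counts` helper, and the simple token counting are assumptions for demonstration only, not a production bias audit.

```python
# Hypothetical sketch: a crude check of how often different groups are
# mentioned in a training corpus. Real bias audits are far more involved;
# this only illustrates the practice of inspecting data before training.
from collections import Counter

# Illustrative group terms; a real audit would use vetted lexicons.
GROUP_TERMS = {
    "group_a": {"veteran", "veterans"},
    "group_b": {"immigrant", "immigrants"},
    "group_c": {"senior", "seniors", "elderly"},
}

def representation_counts(documents):
    """Count mentions of each group across a list of text documents."""
    counts = Counter()
    for doc in documents:
        tokens = doc.lower().split()
        for group, terms in GROUP_TERMS.items():
            counts[group] += sum(tokens.count(t) for t in terms)
    return counts

corpus = [
    "Services for veterans expanded this year.",
    "Seniors and veterans can apply online.",
]
print(representation_counts(corpus))
# Counter({'group_a': 2, 'group_c': 1, 'group_b': 0})
```

A lopsided count like this would not prove bias on its own, but it can flag where a corpus underrepresents the communities an agency serves.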

Data scientists and engineers must prioritize diversity and inclusion when training ChatGPT models while implementing robust safeguards to promote fairness and accuracy as the system formulates responsible and ethical data. Government must treat ChatGPT data with the same level of sensitivity as traditional data while also factoring in data protection protocols when handling private conversations. Citizens need to feel confident that their personal information is being handled responsibly and that their voices are heard without fear of surveillance or censorship. 
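As one illustration of such a data protection protocol, a hypothetical sketch like the one below could scrub obvious identifiers from a chat transcript before it is stored or reused. The regex patterns, labels, and `redact` helper are assumptions for demonstration; real deployments would rely on vetted PII-detection tooling rather than a handful of patterns.

```python
# Hypothetical sketch: redacting obvious PII from a chat transcript
# before storage. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.gov or 555-867-5309."))
# Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```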

With great power comes greater responsibility
Malicious actors are using ChatGPT to rapidly write attack code, placing government at risk of faster domestic and international cyberattacks. Leveraging security experts, federal agencies will need to identify where their IT infrastructures are vulnerable and mitigate the potentially devastating impacts of threat actors. Perhaps the most critical of these vulnerabilities is the pervasiveness of antiquated IT systems. Not only are these systems more prone to bugs and outages, but the more legacy IT and code an agency has, the harder it becomes to secure a rapidly growing number of interconnected devices and applications.  

The emergence of generative AI brings not only a host of innovations but also new avenues for abuse. With the ability to generate human-like responses and mimic natural language, ChatGPT can create convincing phishing emails, social scams, and other forms of cyberattack. Malicious actors may use it to craft messages that trick individuals into divulging sensitive information or downloading malware onto their devices. Individuals and organizations need to be aware of the potential risks of AI-powered tools like ChatGPT and take steps to safeguard themselves against threats. Government agencies, in particular, must understand and curate the data used to train any generative model implemented internally or provided in support of citizen services. 

While ChatGPT is an innovative technology with the potential to revolutionize citizen services across government, caution must be used to protect citizen trust and mitigate biases that may arise from the data put into the system. Federal agencies can take steps to protect against malicious attacks on their IT systems and harness the power of ChatGPT for better citizen outcomes:  

  • Strengthen policies around implementing robust encryption protocols that malicious actors cannot easily compromise. 
  • Update existing technologies to remain at the forefront of cybersecurity while responding quickly to potential threats or vulnerabilities.  
  • Ensure robust identity management systems are in place to verify connected entities.
  • Document and use appropriate Agency-curated datasets to pretrain models (a minimal sketch of this practice follows below).
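
As a minimal sketch of that last item, the hypothetical Python below records which curated files feed a pretraining run, along with SHA-256 checksums, so the dataset can be audited later. The directory layout, file types, and manifest format are assumptions for illustration only.

```python
# Hypothetical sketch: building an auditable manifest of curated
# dataset files before they are used to pretrain a model.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, out_file: str = "manifest.json") -> None:
    """Write a JSON manifest listing each file, its size, and its SHA-256 hash."""
    entries = []
    for path in sorted(Path(data_dir).glob("*.txt")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({"file": path.name, "bytes": path.stat().st_size, "sha256": digest})
    Path(out_file).write_text(json.dumps(entries, indent=2))

# Usage: build_manifest("curated_corpus/") produces a record that
# reviewers can check before any training begins.
```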

Andy Zeswitz is CTO at REI Systems. He drives REI’s overall vision and strategy for technology and innovation.

Connect with Andrew on LinkedIn