Government agencies that want to incorporate generative artificial intelligence (GenAI) into their workflows must strike a delicate balance.
GenAI has the potential to usher in a new era of governmental efficiency and effectiveness. To realize those benefits, however, agencies must address security concerns, protect private citizen data, and carefully weigh the other risks associated with GenAI technology. If agencies fail to anticipate these risks and build the necessary safeguards, they could jeopardize their integrity and further erode the public's trust in government institutions, which is already at an all-time low.
Concerns about GenAI
Security and privacy concerns top the list of risks associated with GenAI, but there are many other areas of concern. Among them:
Infrastructure: The government’s legacy IT systems are outdated and may not have the power to run GenAI systems or adequately protect them.
Misinformation/disinformation: GenAI’s propensity to fabricate facts and generate false answers is well documented, creating opportunities for bad actors to use the technology to spread disinformation and propaganda.
Confidentiality: When formulating answers, GenAI models trained on confidential data could inadvertently reveal sensitive information, effectively creating a data breach.
Bias: GenAI systems have been shown to reflect racial and cultural biases embedded in their training data, so GenAI models need to be continuously monitored to make sure their output is fair and impartial.
Intellectual property: When training GenAI models, the government could mistakenly use writing, photos, videos, artwork, or scientific data that is protected by intellectual property laws—and be held liable for it.
Guidelines and best practices
To improve the efficiency and security of government operations with GenAI, agencies must navigate these and other issues with a cautious, clear-eyed view of the stakes involved. They should prioritize developing guidelines for the safe, ethical use of GenAI and establish universally observed best practices.
Fortunately, some government agencies have been issuing ethical guidance for the use of artificial intelligence (AI) systems for years and are now extending those guidelines to include GenAI.
For example, the National Institute of Standards and Technology (NIST) has issued a risk management framework for GenAI that includes 33 pages of potential GenAI risks and suggested actions. The document is intended to help businesses and organizations of all kinds evaluate and manage the risks associated with GenAI. As a cautionary note, the framework warns that AI can intensify traditional software risks and that GenAI in particular can exacerbate existing AI risks and create new risks.
Standards are emerging
NIST’s framework, along with guidance from other government agencies—notably the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS)—is part of an emerging set of standards and guidelines for the use of GenAI in government.
These guidelines tend to focus on:
- Aligning GenAI deployment with agency priorities and needs
- Prioritizing ethics, safeguards, and responsible use of GenAI
- Risk mitigation: data security, privacy, bias, system vulnerabilities
- Transparency and accountability
- Regulatory compliance
- Misinformation and inappropriate or illegal content
- Training and awareness programs for government employees
These are general guidelines. Each agency must decide how to apply them to the circumstances and needs of its specific organization.
Transparency equals trust
It should be mandatory for government-backed GenAI to be managed in a way that ensures security and maintains citizens’ trust in the system overall. Transparency and accountability are the cornerstones of trust, so GenAI systems should not operate in the shadows, protected from public or institutional scrutiny. Rather, governments should be transparent about how they are using GenAI and for what purposes.
To further bolster this trust, developers incorporating GenAI technology into existing systems should ensure that users can distinguish GenAI-generated content from other content. Additionally, GenAI-generated content should clearly indicate the source material used to formulate its answers. By prioritizing transparency at both the governmental and technological levels, agencies can safeguard the integrity of their GenAI systems.
Building trust into new features and tools
Building transparency in up front is key. Many government agencies use Thomson Reuters CLEAR investigative software to vet businesses and similar entities. To make CLEAR searches more efficient and user-friendly, Thomson Reuters recently added a GenAI-powered tool to the software called the “Risk Analysis Summary,” or RAS.
The RAS was designed with transparency in mind. It provides users with a concise written summary of a given search result and identifies any issues or activity that may merit further investigation. The RAS doesn’t just provide this information, though—it also cites all original sources, with links, and labels all GenAI-generated content as such.
Thomson Reuters wants users to trust the results of CLEAR searches, so it built the mechanisms of trust—transparency and accountability—into the GenAI enhancements for this popular tool. CLEAR has consistently prioritized transparency, setting it apart from competitors. These enhancements further strengthen Thomson Reuters’ established reputation for delivering reliable and transparent solutions.
Toward an ethos of honesty
Government entities planning to introduce GenAI capabilities into their operations must approach their projects with openness and honesty. While GenAI can enhance the efficiency of many government functions, public trust and support for these initiatives depend on governments demonstrating that they have considered and addressed concerns about safety, security, privacy, and accuracy.
Going forward, as GenAI improves, constant monitoring and evaluation will be necessary to ensure that GenAI-powered software programs and features are doing what they are intended to do. These guardrails have yet to be built, but they are essential if governments want to leverage the benefits of GenAI without compromising what is left of the public’s trust in government itself.
To learn more about the use of GenAI in government, download the whitepaper, “GenAI in Government: Maximizing Benefits While Mitigating Risks.”

Disclaimer
Thomson Reuters is not a consumer reporting agency, and none of its services or the data contained therein constitute a ‘consumer report’ as such term is defined in the Federal Fair Credit Reporting Act (FCRA), 15 U.S.C. sec. 1681 et seq. The data provided to you may not be used as a factor in consumer debt collection decisioning, establishing a consumer’s eligibility for credit, insurance, employment, government benefits, or housing, or for any other purpose authorized under the FCRA. By accessing one of our services, you agree not to use the service or data for any purpose authorized under the FCRA or in relation to taking an adverse action relating to a consumer application.