As the private sector rushes to find practical (and profitable) applications for generative artificial intelligence (GenAI), government agencies in the public sector have been more careful about the risks and rewards of this emerging technology.
Officials recognize that GenAI, if deployed responsibly, has the potential to improve the quality, responsiveness, and efficiency of various government services. However, concerns about data privacy, security, accuracy, information bias, and accountability set off alarm bells for the gatekeepers of government procurement, who are often risk averse.
Are we at risk of falling behind?
As the private sector continues to push the envelope on GenAI innovation, decision-makers in the public sector who are carefully weighing the good, bad, and ugly of GenAI are in danger of inviting another kind of risk: the risk of falling behind.
Public sector leaders must understand and address the full spectrum of risks associated with GenAI to unlock its potential for transforming government services. If they don’t, the gap between technology adoption in the private and public sectors will continue to widen, challenging public faith in government institutions’ effectiveness and relevance.
Addressing the GenAI concerns among government
Admittedly, GenAI presents a rocky risk landscape for decision-makers in the public sector. Among the most significant risk factors for governments are:
Data privacy and security: Government systems rely on huge databases full of sensitive personal data. Any misuse, abuse, distortion, or compromise of this data could threaten the integrity of the entire system and prevent citizens from receiving the services they need, further eroding public trust in government. Vulnerability to attacks from foreign adversaries is also a concern.
Ethical considerations: Studies have shown that GenAI systems sometimes reflect biases present in the data they were trained on, potentially producing inaccurate or unfair decisions. Few people truly understand how GenAI works, so if a problem arises, explaining and fixing it could be difficult. GenAI’s massive energy consumption is also an underlying ethical issue.
Legal and compliance risks: The regulatory framework around GenAI is fluid at best, with legislators introducing new and sometimes conflicting laws to keep up with technological advancements. Governments risk liability if decisions by attorneys, legislators, or other officials rely on GenAI-generated content that is incorrect, plagiarized, or fabricated.
Untrained workforce: Few agencies have personnel who are trained to use GenAI responsibly and ethically, which increases the risk of misuse or abuse.
Bridging the GenAI gap in government services
If governments are going to modernize their technology, they must do so with GenAI in mind. This means developing a framework of safeguards to ensure GenAI is used safely, ethically, and securely. It also means prioritizing when and where GenAI should be deployed and establishing guidelines for migrating legacy systems to newer GenAI-capable systems without interrupting ongoing government services.
A viable framework: To develop an operational GenAI framework that prioritizes safety, ethics, and security, governments must clearly define how the technology will be used, who will use it, and for what purpose. For example, the Department of Homeland Security (DHS) created a “Playbook for Public Sector GenAI Deployment,” which outlines actionable steps for GenAI deployment, beginning with aligning GenAI projects with an agency’s mission and priorities. Other agencies can model their efforts on the DHS playbook, but each must evaluate its own risks, needs, and priorities.
Oversight mechanisms: To consolidate AI regulation at the federal level, legislators are considering a 10-year ban that would prevent state and local governments from enacting AI laws. Regardless of the outcome, governments must develop oversight mechanisms with strict guardrails for GenAI use and hold agencies accountable for compliance. Human involvement is critical, as AI cannot be trusted to regulate itself.
Technical safeguards: Data security and privacy protection must be a top priority for any GenAI system, especially those that involve citizen records. Standard security protocols such as end-to-end encryption, “zero trust” architecture, and continuous monitoring should be implemented. Additionally, GenAI may require more customized and targeted safeguards.
Evaluation and testing: To ensure compliance with government regulations, risk-management frameworks must be updated to include guidelines for the assessment and mitigation of GenAI risks. Rigorous testing protocols are crucial as well. GenAI tools should always be tested offline before going live, and performance evaluations should involve legal, cybersecurity, and ethics experts.
A matter of urgent concern
There are several measures to consider before introducing GenAI into government systems. However, GenAI’s potential to improve government services is so profound that learning how to manage the technology’s risks now, before it’s too late, is a matter of urgent national concern.
Government leaders must better understand GenAI to regulate it responsibly. They should pair sensible safeguards and rigorous deployment standards with expert human oversight to harness GenAI’s potential while minimizing risks to government systems.
The government needs GenAI, but it must implement it correctly. Trust in government institutions is at an all-time low, and how the government manages GenAI could significantly impact whether trust levels rise or fall.
To learn more about the benefits of GenAI, read the whitepaper, “GenAI in Government: Maximizing Benefits While Mitigating Risks.”