

Artificial Intelligence

Why AI still needs regulation despite its promise

· 6 minute read


Generative AI is sparking both excitement and concern. As businesses grapple with AI's impact, many are looking to regulation for clarity and risk mitigation.

Jump to:

 The big picture

 Why this issue matters

 Understanding the impact of AI

 

The big picture

In November 2022, a chatbot called ChatGPT was released into the world. In just over a year, it has exploded in use.  

To call ChatGPT simply a chatbot is to vastly understate its capabilities. In fact, it’s the best-known example of what’s called generative artificial intelligence (AI). When prompted by the user, generative AI can produce content including text, images, and analysis of complex data, often in just seconds. It can also create synthetic data: information that appears to be “real” but is actually made up. It’s a technology that’s generating plenty of excitement—and fear.

The other type of AI, called predictive AI, uncovers patterns and trends that allow users to make predictions and devise strategic actions. Both types of AI will certainly benefit, and even transform, many industries and professions in a variety of ways. But for many businesses, the potential impact of AI could look more like disruption. Aware of the potential dangers, governments are pursuing AI regulation. The goal: to keep the impact of AI from causing the kind of disruption that results in destruction—of businesses, of data privacy, or of government programs. Every industry will need to stay aware of these technological advances, even if that means using AI to safeguard its own business.

Why this issue matters

It’s a smart move for just about any business to understand what AI can do, what challenges it might create, and how AI regulation might address those challenges.

To many AI developers, the risks of AI aren’t significant enough to require rigorous oversight. They argue that AI regulation could stifle innovation and result in vague or overly complex rules that wouldn’t do what they’re intended to do, especially given the high-speed pace of change.  

Those arguing for AI regulation counter that, if unmanaged, the impact of AI could be profoundly damaging. One risk is that AI could spread misinformation throughout the online world by creating fake but seemingly real images and videos, as in the recent AI-generated images of Taylor Swift. Business and individual data might also be more easily stolen through “believable” phishing emails and phone messages that exploit human error and oversight. (Some content creators have taken legal action against AI developers, claiming that generative AI uses their words and images without credit or compensation.) Because of these risks, many businesses that might embrace the technology in their operations have been slow to fully adopt it. These businesses are looking for clarity from AI regulation so that they’re not liable for any misuse.

Efforts are underway to manage the social impact of AI through regulation. In early December 2023, the European Union reached agreement on the AI Act, which seeks to provide governmental management of the risks of AI. Two months earlier, President Biden issued an executive order intended to promote AI development while establishing guidelines for federal agencies to follow when designing, acquiring, deploying, and overseeing AI systems. Among other objectives, the executive order seeks to establish testing standards to minimize the risks of AI to infrastructure and cybersecurity.

The White House isn’t the only federal entity exploring AI regulation. In July 2023, the U.S. Securities and Exchange Commission (SEC) proposed rules regarding data analytics by investment advisers, including the use of AI. These rules would require investment firms to be sure that AI tools don’t put the firm’s interests above those of a client. SEC chair Gary Gensler has also expressed worries that potential industry overreliance on a small number of AI providers might negatively disrupt the U.S. financial system.  

At the more local level, New York and other states have established or are considering AI regulation. All this said, AI regulation efforts have really just begun. And that makes sense, given how uncertain AI's effects will be. Government regulators are seeking to protect the public and the economy, and to build trust in AI, without choking off innovative applications of an emerging technology.

 


Understanding the impact of AI

Of course, the investment advisory industry is by no means the only profession that will be affected by AI and how it is (or is not) regulated. To get a sense of how this technology could benefit and change businesses of all kinds, here are a couple of notable examples.  

The impact of AI on the tax profession

One of the most crucial roles tax professionals play is providing high-quality advice to their clients, including guidance on protecting them from cybercrime during tax season. AI can automate tedious tasks such as data gathering and analytics, freeing these professionals to focus on higher-level advisory work. This would also allow them to add new skills and deliver more value to their clients, whether internal or external. AI could transform many other aspects of the profession, including talent development and service capabilities, and could create opportunities for gains in productivity and internal efficiency, such as faster and more effective client communications and service.

The impact of AI on the legal profession

For legal professionals, AI could facilitate new opportunities for growth and service. Case in point: the time and capability to identify new markets. Corporate departments could be liberated from many rote tasks and dedicate more of their time and effort to supporting their company’s growth strategy. Indeed, many firms and departments are already putting AI to work.

President Biden’s October 2023 executive order may also create additional opportunities for legal professionals, who will need to advise and represent clients on AI-related legal issues, including compliance, liability, and intellectual property. Legal professionals will need to become knowledgeable about AI not only as a legal issue but also as a tool for drafting documents and conducting research. And they’ll need to learn how to use AI ethically as well as effectively.

The source of these insights is Future of Professionals, a report released in August 2023 by Thomson Reuters, which surveyed more than 1,200 individuals across the globe who are employed in the legal, tax, global trade, risk management, and compliance fields in professional firms, corporate in-house departments, and government agencies. The survey showed that 67% of respondents believe AI will have a significant impact on their profession over the next five years, and 66% think that AI will create new professional career paths.

All the more reason, then, that this powerful technology be properly regulated—so that its numerous benefits outweigh its risks.
