US lawmakers say that without proper legislative oversight, AI could become a national security risk or a threat to educational integrity. Little legislation currently exists to regulate AI, a significant concern for US policymakers. As with other transformative technologies, AI's dangers and pitfalls tend to outpace the laws meant to govern them.

Following the widespread adoption of ChatGPT, lawmakers are trying to stay on top of the technology's growth and to anticipate its dangers. The US Chamber of Commerce expects AI to contribute $13 trillion to global economic growth and for every business and government agency to use AI by the end of the decade.

According to the Chamber's report, AI can expand economic opportunities and income, lower the costs of medical and scientific trials, and simplify consumer interactions. But the Chamber recognizes the risk that AI will displace jobs and insists the government has a responsibility to ensure citizens are protected.

The report also identifies China as the US's biggest competitor in AI development. The US hopes to work closely with lawmakers in the EU to tackle "key legal challenges that currently impede industry growth." However, the Chamber acknowledges that tackling those challenges could either deepen collaboration or ignite fierce international competition.

The US Chamber of Commerce aims to push the government to address how AI can negatively affect people's lives and work, harm the economy, and heighten international tensions. To properly regulate AI's rapid growth, the Chamber says, the government must address five pillars: efficiency, neutrality, proportionality, collegiality, and flexibility.

To be efficient, lawmakers should apply existing laws, regulations, and guidance to AI and focus on filling the gaps current legislation leaves open. To remain neutral, laws regarding AI should concentrate only on fostering public trust and protecting citizens' rights. To be proportional, lawmakers should regulate AI from a risk-based perspective. To be collegial, federal agencies must collaborate, as AI is "cross-cutting, complex, and rapidly changing," according to the Chamber. To remain flexible, the federal government should be open to a nonbinding, self-regulatory approach to monitoring AI's risks.