Longterm Wiki

Senate testimony, 2023

government

This 2023 Senate testimony presents a prominent industry perspective opposing broad AI regulation in favor of sector-specific, application-level rules; it is useful for understanding the policy debate around compute-based regulatory thresholds and the White House executive order on AI.

Metadata

Importance: 55/100 · opinion piece · primary source

Summary

Andrew Ng argues before the U.S. Senate that AI should be regulated at the application level rather than the technology level, treating AI as a general-purpose technology analogous to electricity. He contends that regulating AI by model size or compute is as misguided as regulating motors by horsepower, and that sector-specific regulations targeting concrete harms in healthcare, underwriting, and autonomous vehicles are the appropriate policy approach.

Key Points

  • AI is a general-purpose technology like electricity; its risks and benefits depend on specific applications, not the underlying technology itself.
  • Application-level regulation (medical devices, underwriting fairness, autonomous vehicle safety) is preferred over technology-level regulation.
  • Using model size or compute thresholds to define regulatory risk is flawed, as both large and small models can cause harm depending on use.
  • Ng explicitly criticizes the White House executive order's use of computational scale as a risk metric.
  • Government's role should be to enable beneficial AI applications while targeting regulations at concrete, identifiable harms in specific sectors.

Cited by 1 page

Page | Type | Quality
The Case Against AI Existential Risk | Argument | 58.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 21 KB
Written Statement of Andrew Ng
Before the U.S. Senate AI Insight Forum
AI is a General Purpose Technology with numerous
beneficial uses and vastly overhyped fears of catastrophe
AI is a General Purpose Technology with numerous use cases
AI refers to a large class of software that helps to make decisions, often by using mathematical equations to
process data. When thinking about risks of AI, it is important to recognize that it is a General Purpose
Technology, similar to electricity and the internet. If someone were to ask what electricity is good for, it is
almost hard to answer because there are so many use cases. Today's AI is useful for answering questions,
tutoring students, acting as a brainstorming partner, copyediting, customer support, generating art, self-driving
cars, deciding whether to underwrite a loan, detecting disease, and many more applications.
The generality of AI means that -- like electricity -- some can use it for nefarious purposes. But General
Purpose Technologies such as electricity, the internet, and the internal combustion engine have been
responsible for some of the greatest increases in people's wellbeing. As in those cases, government has an
important role to play to enable AI to lift up citizens while minimizing harmful applications.
However, it is important to differentiate regulating applications (which we need) vs. regulating the technology
(which is ill-advised).
We need good regulations for AI applications
AI technology is used in applications in healthcare, underwriting, self-driving, social media, and other sectors.
With some applications, there are risks of significant harm. We want:
● Medical devices to be safe
● Underwriting software to be fair, and not discriminate based on protected characteristics
● Self-driving cars to be safe
● Social media to be governed in a way that respects freedom of speech but also does not subject us to
foreign actors’ disinformation campaigns
When we think about specific AI applications, we can figure out what outcomes we do want (such as improved
healthcare) and do not want (such as medical products that make false claims) and regulate accordingly.
A fundamental distinction in decisions about how to regulate AI is between applications vs. technology.
Nikola Tesla's invention of the AC (alternating current) electric motor was a technology. When this technology
is incorporated into either a blender or an electric car, the blender or car is an application. Electric motors are
useful for so many things it is hard to effectively regulate them separately from thinking about concrete use
cases. But when we look at blenders and electric cars, we can systematically identify benefits and risks and
work to enable the benefits while limiting risks.
Whereas motors help us with physical work, AI helps us with intellectual work.
In the case of AI, engineers and scientists will typically write software and have it learn from a lot of data. This
AI system may live in a company's datacenter an

... (truncated, 21 KB total)