My predictions for common sense AI regulation.
AI regulation is an increasingly confusing and contentious issue. Elon Musk, himself an AI pioneer, has signed a petition calling for a six-month pause in artificial intelligence training; Tyler Cowen, a libertarian economist, has approvingly quoted a call for specific regulation on his blog; and the most prominent voice on AI regulation is Eliezer Yudkowsky, an extremist who would rather airstrike rogue data centers than see AI trained at all.
All this to say, there is little in the way of common sense regulation to stand behind when it comes to controlling an obviously powerful and largely unexplored technology. What I aim to do, then, is predict what common sense regulation will look like when all is said and done. Additionally, I speculate on why these common sense proposals aren't more common.
1. All companies training new artificial intelligence will have to adopt state-of-the-art cybersecurity and alignment procedures
Like I said, common sense. Any company running an artificial intelligence that is missing crucial alignment features will be forced to shut it down, as such a system would pose a danger to the public. Additionally, companies would need to follow standard cybersecurity practice to prevent copies of their model from leaking to bad actors and to prevent their API from being used to train smaller models.
2. Governments will keep track of and license large amounts of compute
For example, the powerful graphics chips required to train large models. Regulating artificial intelligence depends on the government knowing who is developing it and who has access to the necessary hardware. Accusations of a surveillance state would fall on deaf ears, as no one who isn't mining bitcoin or training large models has any need for the kind of compute that AI organizations use.
3. Treat alignment failures like a case of neglect; take away access privileges until the issue is fixed
If an AI organization fails to align its artificial intelligence, leading to a catastrophic alignment failure, the model should be run in an isolated instance on a government server for further experimentation and alignment. Additionally, as an incentive toward further alignment research, the company should lose its license to train large models until it verifiably fixes the issue.
As promised, the reason common sense proposals like these aren't seen elsewhere is that they're based on the future's common sense, e.g. the sense that companies of researchers shouldn't be directly liable for the choices of an essentially lab-grown intelligence. In fact, one might read this essay's supporting argument essentially in reverse: we must accept that these models are independent and impossible to sufficiently punish before we can make progress on AI regulation.