The U.S. government is introducing a proposal to prevent foreign entities, particularly from China, from using U.S. cloud computing to train AI models, U.S. Commerce Secretary Gina Raimondo announced this week, reports Reuters. The Biden administration frames the move as an effort to safeguard national security and U.S. technological superiority. Meanwhile, Chinese entities can still access cloud services deployed in Europe and the Middle East.
“We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models,” Raimondo told Reuters. “We use export controls on chips,” she noted. “Those chips are in American cloud data centers so we also have to think about closing down that avenue for potential malicious activity.”
The centerpiece of the initiative is a 'Know Your Customer' requirement, which obliges U.S. cloud companies to rigorously identify their foreign users. The regulation would prevent entities from countries like China from accessing U.S. cloud resources for artificial intelligence development. Raimondo compares it to the existing export controls on high-performance AI processors (such as Nvidia's H100), highlighting the need to close potential avenues for malicious activity using U.S. technology on American soil. The initiative stems from an October proposal to block Chinese entities' access to U.S.-based cloud technology.
The proposed regulation imposes significant responsibilities on cloud computing firms. These companies must verify foreign customers' identities, maintain user identification standards, and certify their compliance annually. The rule is part of a larger strategy to ensure that U.S. cloud platforms are not exploited for potentially hostile AI development. China, for its part, views such measures as an attempt to curb the development of its economy.
To some extent, U.S. restrictions on certain entities' access to American cloud infrastructure are already in place. In October, President Joe Biden signed an executive order mandating that developers of AI systems that could pose risks to U.S. national security, the economy, public health, or safety disclose their safety test results to the U.S. government before making those systems available to the public.
Industry responses to these measures have not been positive, at least in public. Carl Szabo, general counsel at NetChoice, a tech industry trade group, criticized the executive order's implementation as potentially illegal, arguing that it could deter international collaboration in AI.