The U.S. Commerce Department today issued a report in support of “open-weight” generative AI models like Meta’s Llama 3.1, but recommended the government develop “new capabilities” to monitor these models for potential risks.
The report, authored by the Commerce Department’s National Telecommunications and Information Administration (NTIA), finds that open-weight models broaden generative AI’s availability to small companies, researchers, nonprofits and individual developers. For these reasons, the government shouldn’t place restrictions on access to open models, the report suggests — at least not before investigating whether restrictions might harm the market.
The sentiment echoes recent comments from Federal Trade Commission Chair Lina Khan, who believes that open models can let more small players bring their ideas to market and, in doing so, promote healthy competition.
“The openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools,” Alan Davidson, assistant secretary of Commerce for Communications and Information and NTIA administrator, said in a statement. “NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models. Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”
The report comes as regulators at home and abroad weigh rules that could restrict, or impose new requirements on, companies that wish to release open-weight models.
California is close to passing SB 1047, which would mandate that any company training a model using more than 10²⁶ floating-point operations (FLOPs) of compute beef up its cybersecurity and develop a way to “shut down” copies of the model within its control. Overseas, the EU recently finalized compliance deadlines for companies under its AI Act, which imposes new rules around copyright, transparency and AI applications.
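For a sense of scale, here is a rough Python sketch of how that compute threshold compares to large training runs, assuming the common rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformers; the parameter and token counts are illustrative assumptions, not figures from the bill or from any particular model.

```python
# Back-of-the-envelope comparison against SB 1047's 10^26 FLOP training-compute threshold.
# Assumes the widely cited ~6 * parameters * tokens approximation for dense-transformer
# training compute; model sizes and token counts below are illustrative, not official.

THRESHOLD_FLOP = 1e26

def estimated_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6 * num_parameters * num_tokens

examples = {
    "7B params, 2T tokens": (7e9, 2e12),
    "70B params, 15T tokens": (70e9, 15e12),
    "405B params, 15T tokens": (405e9, 15e12),
}

for name, (params, tokens) in examples.items():
    flop = estimated_training_flop(params, tokens)
    status = "over" if flop > THRESHOLD_FLOP else "under"
    print(f"{name}: ~{flop:.2e} FLOP ({status} the 1e26 threshold)")
```

Under those assumptions, even a 405-billion-parameter run on 15 trillion tokens lands at roughly 3.6 × 10²⁵ FLOPs, below the bill’s trigger, which is why the threshold is generally read as targeting only the very largest frontier-scale training runs.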
Meta has said that the EU’s AI policies will prevent it from releasing some open models in the future. And a number of startups and big tech companies have come out against California’s law, which they claim is too onerous.
The NTIA’s model governance philosophy isn’t completely laissez-faire.
In its report, the NTIA calls for the government to develop an ongoing program to collect evidence of the risks and benefits of open models, evaluate that evidence and act on those evaluations, including imposing certain restrictions on model availability if warranted. Specifically, the report proposes that the government research the safety of various AI models, support research into risk mitigations and develop thresholds of “risk-specific” indicators to signal that a change in policy might be needed.
These and other steps would align with President Joe Biden’s executive order on AI, which called for government agencies and companies to set new standards around the creation, deployment and use of AI, noted U.S. Secretary of Commerce Gina Raimondo.
“The Biden-Harris Administration is pulling every lever to maximize the promise of AI while minimizing its risks,” Raimondo said in a press release. “Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead.”
Source: TechCrunch