# Meta Joins AI Safety Collective to Ensure Responsible Development

As it continues to develop more advanced AI models, and works towards artificial general intelligence (AGI), Meta is also keen to establish best practice guardrails and safety standards to ensure that AI doesn’t… well, enslave the human race.
Among other things.
That’s why today, Meta has announced that it’s joining the Frontier Model Forum (FMF), a non-profit AI safety collective that’s working to establish industry standards and regulations around AI development.
As explained by FMF:
“As a non-profit organization and the only industry-supported body dedicated to advancing the safety of frontier AI models, the FMF is uniquely suited to make real progress on identifying shared challenges and actionable solutions. Our members share a desire to get it right on safety – both because it’s the right thing to do, and because the safer frontier AI is, the more useful and beneficial it will be to society.”
Meta, along with Amazon, will join Anthropic, Google, Microsoft, and OpenAI as members of the FMF project, which will ideally lead to the establishment of best-in-class AI safety regulations. Which could help to save us from relying on John Connor to lead the human resistance.
As per Meta’s President of Global Affairs Nick Clegg:
“Meta has long been committed to the continued growth and development of a safer and open AI ecosystem that prioritizes transparency and accountability. The Frontier Model Forum allows us to continue that work alongside industry partners, with a focus on identifying and sharing best practices to help keep our products and models safe.”
The FMF is currently working to establish an advisory board, as well as various institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these efforts.
And while a robot-dominated future may seem far-fetched, there are many other concerns that the FMF will be covering, including the generation of illegal content, misuse of AI (and how to avoid it), copyright, and more (note: Meta also recently joined the “Safety by Design” initiative to prevent the misuse of generative AI tools to perpetrate child exploitation).
Though for Meta specifically, the dangers of AGI are indeed pertinent.
Meta’s Fundamental AI Research team (FAIR) is already working towards the development of human-level intelligence, and digitally simulating the neurons of the brain, in what would equate to “thinking” in a simulated environment.
To be clear, we’re not anywhere close to this as yet, because while the latest AI tools are impressive in what they’re able to produce, they are, in reality, highly complex mathematical systems, which match queries with responses based on the data that they can access. They’re not “thinking”; it’s just an estimation of what logically comes next, based on the parameters of a given question.
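To give a rough sense of what “estimating what comes next” means (a deliberately tiny sketch with a made-up corpus; real models are vastly larger and more sophisticated, and this is in no way how Meta or anyone else actually builds them):

```python
# Toy illustration of "prediction from data, not thinking":
# a bigram model that guesses the next word purely from counts
# of what followed each word in the text it was shown.
from collections import Counter, defaultdict

corpus = "the robots are coming the robots are friendly the humans are safe".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the data."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("robots"))  # -> "are", the only continuation in the data
print(predict_next("are"))     # -> whichever continuation appeared most often
```

The point of the toy: the output is entirely a statistical echo of the input data. Scale that basic idea up by billions of parameters and you get today’s generative AI, which is still estimation, not independent thought.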
AGI will be able to do all of this on its own, and actually formulate ideas without human prompts.
Which is a bit scary, and could, of course, lead to more problems.
Hence the need for groups like the FMF to oversee AI development, and ensure that those in charge of such experiments don’t accidentally guide us towards the end times.
Andrew Hutchinson