Meta may suspend AI projects over security concerns

As artificial intelligence technology continues to advance rapidly, Meta has announced that it may halt the development of AI systems it deems excessively risky, a Kazinform News Agency correspondent reports.


The company stated that it is prepared to suspend work on advanced AI systems if they pose a critically high level of risk. This approach is part of a new threat assessment framework outlined in the Frontier AI Framework document.

The framework focuses on several key areas where advanced AI could lead to catastrophic consequences, including cybersecurity and chemical and biological threats. Decisions on whether to proceed with or halt development are made after analyzing potential threats, testing the system, and modeling possible risks.

Meta employs a three-tier risk assessment system. If a model reaches a critical threat level, its development is halted, and mitigation measures are implemented to reduce the risk to a moderate level, if possible.

For systems that pose a high risk but fall short of the critical threshold, access is restricted to a limited group of specialists, with additional risk-reduction measures in place. AI classified as moderate-risk may be deployed, but with enhanced security measures.

Earlier, Kazinform News Agency reported that SoftBank and OpenAI had agreed to establish a joint venture to promote AI services for corporations.
