Meta says it may stop development of AI systems it deems too risky

February 3, 2025: Meta Weighs Risks in AI Development - Meta, led by CEO Mark Zuckerberg, has announced its Frontier AI Framework for assessing, and potentially halting, AI systems it deems too risky. The policy distinguishes high-risk from critical-risk AI, with the latter posing catastrophic threats such as aiding severe cyber or biological attacks. Because definitive evaluation metrics do not yet exist, risk assessments rely on expert judgment.

In a departure from its typically open approach to releasing AI, Meta may restrict access to these systems, in contrast to China's DeepSeek, which applies fewer protections. The framework underscores Meta's cautious effort to balance AI advancement with societal safety.
