April 2, 2025:
DeepMind's AGI Safety Paper Fuels Debate - DeepMind's new 145-page paper on AGI safety discusses potential risks and benefits, predicting that AGI could arrive by 2030. It compares DeepMind's risk-mitigation strategies with those of Anthropic and OpenAI, while expressing skepticism about superintelligent AI. The paper advocates techniques to block malicious actors from accessing AGI and to improve understanding of AI systems' behavior.
Critics argue that the concept of AGI is too vague for rigorous scientific evaluation, and some question the feasibility of recursive AI self-improvement. The paper acknowledges unresolved research challenges, leaving debates over AGI timelines and AI safety priorities open.