The Practical Challenges of an AI Moratorium
From Rational Animations.
Rational Animations is a fully remote international team of about 40 artists, writers, and tech dorks. We are fans of dogs, stories, animation, learning, and doing good.
With our YouTube channel, we aim to promote good thinking and altruistic causes, and to help ensure humanity’s future goes well. Among these topics, we are particularly focused on AI Safety and AI Alignment: making sure present and future AI systems are aligned with human values and don’t cause our extinction.
We also offer animation production and writing services. Don’t hesitate to contact us!
If superintelligent AI could cause human extinction, why don’t we simply stop building ever more advanced AI? This proposal is widely debated. In this video, we outline the main arguments, practical difficulties, and proposed responses.

SOURCES & READINGS
PauseAI proposal: https://pauseai.info/proposal
OpenAI protest: https://www.bloomberg.com/news/newsletters/2024-02-13/ai-protest-at-openai-hq-in-san-francisco-focuses-on-military-work
Anthropic CEO on AI taking over coding: https://www.entrepreneur.com/business-news/anthropic-ceo-predicts-ai-will-take-over-coding-in-12-months/488533…
This time, we talk about AI risks beyond our usual focus on rogue AIs: malicious use and accidents. In particular, we look at how AI could undermine democracy, enable automated cyberattacks on critical infrastructure, lower barriers for biological and chemical misuse, concentrate power in a few governments or corporations, and cause large-scale…
How do we rigorously measure AI’s intelligence? We don’t really know. What we do know is that measuring intelligence is tricky, and if we’re not careful, our tests might not measure what we intend. We explore this topic by starting with the story of Clever Hans, a horse that could seemingly do arithmetic.…
In this video, we explain how Anthropic trained "sleeper agent" AIs to study deception. A "sleeper agent" is an AI model that behaves normally until it encounters a specific trigger in the prompt, at which point it awakens and executes a harmful behavior. Anthropic found that they couldn’t undo the sleeper agent…