AI is changing FAST, so rather than doing a full Crash Course series of 12+ episodes, we’ve prepared a mini-series of just the basics. Crash Course will never tell you what to think, and we’re not the type of organization that responds to breaking news in real time. Instead, we’re here to offer a zoomed-out foundation upon which to base your own opinions as you continue to learn from other outlets about the world that’s changing around us.
In 1965, Gordon Moore made the prediction we now call Moore’s Law: that computers would keep becoming smaller, faster, and more powerful. But here in 2025, we’re on the brink of an even bigger revolution. In this episode, we explore where AI's been, what it can do, and where it might be going. AI benchmarks and scaling laws help us understand what AI is, and could be, capable of.
AI is taking over…or is it? And what would that look like anyway? The smarter it gets, the better AI will become at doing…basically everything. That could lead us to a bright future, with booming economies and tons of free time for us human beings. But it could also lead us to something a lot darker. To prepare for that transition, we have to try and understand what it would really look like.
Way back in the 1930s, Alan Turing gave us a glimpse of the power of computers with a hypothetical machine that, he said, could solve any computable problem. But that was nonsense…right? Well thanks to recursive progress, maybe not. In this episode, we’ll explore how humans (and computers) make progress, and try to find out just how powerful AI could become.
Could a robot dedicated to a good cause end up destroying the world? Well, maybe. In this episode, we explore how powerful AI could end up causing us harm, regardless of what it’s programmed for. Between misuse by humans, alignment problems, and instrumental goals, even AI built with good intentions could end up breaking bad…unless we do something about it.
The future of AI is maybe beautiful, maybe scary, and definitely uncertain. But we do have a say in how it rolls out. From lab policies to international treaties, people all over the world are trying to figure out how to build and use AI in safe, responsible ways. But when the stakes are so high, can humanity really come together and keep AI under control?