There is a quiet panic creeping through classrooms and coding labs. Artificial intelligence now writes code faster than students can learn it. It solves equations without showing its work. It answers exam-style questions with unsettling confidence. For a generation raised on speed and shortcuts, the conclusion feels obvious: perhaps computer science has peaked; perhaps maths is finally optional.

That conclusion is wrong, and the people who know it best are the ones who helped build the systems causing the panic. Over the past year, an unlikely consensus has emerged among global tech leaders. They are not urging students to chase the newest programming language or master the trendiest AI tool. Instead, they are pointing insistently backwards, to subjects many students are eager to escape: mathematics, computer science fundamentals, physics, and theoretical thinking.

This is neither nostalgia nor academic romanticism. It is a warning about where value is draining out of the system. AI is not eliminating intelligence; it is commodifying the mechanical parts of it. The first skills to flatten are those based on repetition: syntax memorisation, framework fluency, surface-level coding competence. What survives, and in fact becomes rarer, is the ability to reason from first principles, to model a problem abstractly, and to understand why a system behaves the way it does when it fails.

That is why some of the sharpest minds in technology now sound almost conservative. They argue that maths is not a hurdle but a filter; that computer science is not about writing code but about structuring thought; and that difficulty, far from being a flaw in education, may be its last remaining quality check.

In an age obsessed with shortcuts, these voices are making an unfashionable case: that the hardest subjects still matter, not despite artificial intelligence but because of it.
Here, we examine why some of the world’s most influential tech figures are pushing students back to maths and core computer science at the very moment many are tempted to move away.
Pavel Durov: Maths trains independence
The provocation began quietly. In mid-2025, Telegram founder Pavel Durov posted advice aimed at students weighing their options in an AI-dominated future. “If you’re a student choosing what to focus on, pick mathematics,” he wrote. No emojis, no caveats.

Durov did not frame mathematics as a guarantee of employment. He framed it as a discipline that forces independent thinking. Maths, as he has often argued, does not allow the luxury of imitation: you either understand the problem, or you don’t. In a world where AI offers instant answers, that distinction matters more, not less. Durov’s subtext was unmistakable: reliance on tools is rising; intellectual self-reliance is becoming scarce.
Elon Musk: First principles, or nothing
Durov’s post drew a response from Elon Musk that went viral precisely because of its brevity. Replying publicly on X, Musk wrote: “Physics (with math).”

The two words were not a curriculum; they were a philosophy. For Musk, physics is the arena where first-principles thinking is unavoidable: assumptions are tested against reality, not convenience. Maths is the language that makes that testing precise. When Musk says “physics (with math)”, he is rejecting surface competence. He is arguing that as systems grow more complex (rockets, autonomous vehicles, large-scale AI), the penalty for shallow understanding becomes catastrophic.

In the AI era, Musk’s message was stark: tools will change weekly; first principles endure.
Sam Altman: A high-leverage moment
By late 2025, a different anxiety had taken hold: that AI had made computer science itself a poor academic bet. Speaking at Stanford University in a public conversation with cryptography professor Dan Boneh, OpenAI CEO Sam Altman addressed that fear head-on.

“This is a really cool time to be studying computer science,” Altman said. “It’s a high-leverage moment, especially if you’re interested in AI.”

Altman’s phrasing was deliberate. “High leverage” does not mean easy returns. It means that understanding foundational systems now carries outsized impact later. AI, in Altman’s telling, is not replacing computer science; it is concentrating power in the hands of those who understand how these systems are built, constrained, and deployed.

For students, the implication is uncomfortable but clear: shallow familiarity will age badly. Structural understanding will compound.
Demis Hassabis: The discipline of difficulty
If Altman speaks as a strategist, Demis Hassabis speaks as a product of academic rigour. In a 2025 conversation on the Lex Fridman podcast, the Google DeepMind CEO reflected on the formative impact of his education.

“I took some very difficult math and theoretical computer science courses,” Hassabis said. “They taught me how to think deeply and rigorously—and how to persist when things were hard.”

He returned to the theme later that year at public forums, cautioning students against abandoning maths and theory simply because AI tools appear to make them redundant. The real value of those subjects, Hassabis argued, lies not in the content itself but in the cognitive training they impose: precision, patience, and the ability to wrestle with problems that resist quick solutions. In an era where answers arrive instantly, the capacity to sit with uncertainty becomes a competitive advantage.
Sergey Brin: Passion and caution in the AI era
At a time when students are hearing two conflicting narratives (AI will replace jobs; AI will replace degrees), Google co-founder Sergey Brin offered one of the most grounded responses in January 2026, speaking to a new generation of engineers at Stanford University.

His words were simple but layered. “I chose computer science because I had a passion for it. It was kind of a no-brainer for me. I guess you could say I was also lucky because I was also in such a transformative field,” he said. Brin’s emphasis on curiosity rather than credential chasing was deliberate. He pointed out that his own journey, from Stanford graduate student to co-architect of Google, was driven by interest, not fear-based career calculus. In an age of generative AI, where models such as Gemini and ChatGPT can write and debug code, that distinction matters more than ever.

Importantly, Brin did not stop at passion. He also addressed the anxiety about automation head-on. With characteristic candour, he quipped, “I wouldn’t go off and switch to comparative literature because you think the AI is good at coding. The AI is probably even better at comparative literature, just to be perfectly honest anyway.”

His point was twofold: don’t flee STEM out of fear of automation, and don’t assume AI’s current performance undermines the value of structured learning.
If you ignore the fame…
Fame is a distraction. Take the celebrities out of it: Musk tossing “physics (with math)” like it is a mic-drop, Altman selling computer science as a “high-leverage” bet from a Stanford stage, Hassabis sounding like the class topper who actually enjoyed theoretical CS, Brin reminding everyone he picked CS because he genuinely liked it. Strip the names away and the argument stops being glamorous; in fact, it starts being annoyingly sensible. AI is not stealing intelligence; it is bulk-discounting the easier bits of it. The stuff that once passed as skill (routine coding, formula application, template thinking) is now a vending machine. What still refuses to automate is judgment: spotting the bad assumption, knowing when an answer is plausible and when it is polished nonsense. That is why these leaders keep returning to maths and core CS.
