
In a move that might signal a seismic shift in how talent is evaluated, Meta has confirmed that it is actively testing AI-enabled interviews in which coding candidates can use an artificial intelligence assistant during the assessment process.

The update was first reported by 404 Media, which accessed an internal memo titled “AI-Enabled Interviews—Call for Mock Candidates.” The document invites existing employees to volunteer as test subjects for mock interviews involving AI support, noting that this model is “more representative of the developer environment that our future employees will work in.”

It’s not just an experimental idea. Meta appears to be laying the groundwork for a new industry norm, one that acknowledges how deeply AI is already embedded in real-world workflows.
From help to hiring: Why Meta is betting on AI tools
Confirming the internal pilot, a Meta spokesperson told 404 Media, “We’re obviously focused on using AI to help engineers with their day-to-day work, so it should be no surprise that we’re testing how to provide these tools to applicants during interviews.”

This is the same company where, just months ago, CEO Mark Zuckerberg stated on the Joe Rogan Experience podcast that he expected AI agents to soon function as “midlevel engineers” capable of writing code for the company. “Over time,” he added, “we’ll get to a point where a lot of the code in our apps is actually going to be built by AI engineers instead of people engineers.”

So why test humans in a way that doesn’t reflect that reality? That is the core logic behind Meta’s pilot. By allowing AI during interviews, the company is signalling that it is less interested in testing raw memory or speed and more in evaluating how future employees might collaborate with AI systems under real conditions.

It’s a big pivot from conventional wisdom, and it’s catching on across the Valley.
A glimpse into the future: Cluely, Columbia and ethical grey zones
If Meta is at one end of the curve, Chungin “Roy” Lee is the outlier who saw it coming just a bit too early.

In early 2024, Lee, then a 21-year-old computer science student at Columbia University, developed Interview Coder, an AI tool that could discreetly assist candidates during coding interviews. It captured screen activity, picked up audio cues and suggested real-time responses. For users, it was a lifeline. For institutions, it was a line crossed.

Columbia suspended Lee for a year, citing a violation of academic integrity. The scandal could have ended the story, but it didn’t.

Lee moved to San Francisco, rebranded the product as Cluely and turned it into a stealth-mode AI assistant offering undetectable support not just in interviews but also in exams, meetings and sales calls. By mid-2025, Cluely had raised over $20 million in funding, including a $15 million Series A round led by Andreessen Horowitz.

Lee remains defiant about his journey. Speaking to the Associated Press, he said: “Everyone uses AI now. It doesn’t make sense to have systems that test people as if they don’t.”

While Cluely sits in a controversial corner of the AI productivity movement, Meta’s recent decision suggests the narrative is shifting. Rather than penalising candidates for using AI tools, some companies are now building interviews around them.
The interview: cheat code or new literacy?
The trend raises an important question: if AI is going to be an integral part of professional life, should interviews continue to pretend it doesn’t exist?

For many Silicon Valley employers, the answer may be no. Hiring teams are increasingly interested in how candidates interact with AI rather than how they perform in isolation. As coding environments become more collaborative, with GitHub Copilot, Google Gemini Code Assist and Meta’s own Code Llama now common in developer stacks, pure unaided performance is losing relevance.

At the same time, allowing AI during interviews might make it harder to distinguish strategic assistance from dependency. It could also widen the gap between candidates trained in AI-integrated environments and those coming from more traditional setups.
What does this mean for future applicants?
For candidates preparing to enter the tech workforce in 2025 and beyond, Meta’s pilot could change how they approach not just interviews but learning itself. It normalises the presence of AI in high-stakes environments. It also redefines what companies consider “cheating.”

The emphasis now appears to be shifting from what you know to how you adapt. In that landscape, the ability to prompt, parse and co-create with AI may soon become just as important as any core programming language.

As Chungin Lee’s story shows, the tools that once got students suspended may very well become the tools that help them get hired. Meta’s pilot is not just a test of software; it is a preview of a job market where human-AI collaboration isn’t an advantage, it’s the baseline.