🤖 AI SuperThinkers Daily
Part 1: AI Agents
Google’s “Agent Smith” Autonomously Codes While Employees Sleep
Google employees are using an internal AI tool called “Agent Smith” that automates coding tasks asynchronously, meaning it keeps working in the background even when the employee’s laptop is closed. The tool has become so popular that Google had to restrict access to handle the surge in demand.
Named after The Matrix antagonist, Agent Smith builds on Google’s “Antigravity” agentic coding platform and can interact with various internal tools. Employees can check in and give instructions via their phones, marking a significant shift toward autonomous AI workers.
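To make “asynchronously” concrete, here is a minimal, entirely hypothetical Python sketch of the fire-and-forget pattern described above: a task is submitted once, runs in the background, and can be polled later from another device. The names (submit_task, check_in, the in-memory job store) are illustrative assumptions, not Agent Smith’s actual API, which Google has not published.

```python
# Hypothetical sketch of asynchronous agent dispatch. None of these names
# come from Google's tool; this only illustrates the pattern.
import threading
import time
import uuid

# In-memory job store standing in for a real persistent backend.
_jobs: dict[str, dict] = {}

def submit_task(instruction: str) -> str:
    """Queue a coding task and return immediately with a job id."""
    job_id = uuid.uuid4().hex[:8]
    _jobs[job_id] = {"instruction": instruction, "status": "running", "result": None}

    def _work() -> None:
        # Stand-in for the agent planning, editing, and testing code.
        time.sleep(2)
        _jobs[job_id]["status"] = "done"
        _jobs[job_id]["result"] = f"Draft change ready for: {instruction!r}"

    # Runs in the background; the submitting device can disconnect.
    threading.Thread(target=_work, daemon=True).start()
    return job_id

def check_in(job_id: str) -> dict:
    """Poll a job's status later, e.g. from a phone."""
    return _jobs[job_id]

if __name__ == "__main__":
    job = submit_task("migrate logging module to structured output")
    print(check_in(job))   # still running
    time.sleep(2.5)
    print(check_in(job))   # done, with a result to review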
Google cofounder Sergey Brin emphasized in a recent town hall that AI agents will be a “big focus” for Google this year, hinting at tools similar to “OpenClaw.” The company now expects employees to adopt AI tools, and some teams are factoring AI usage into performance reviews.
⚡ Quick Hits
Wayfound.ai CEO Tatyana Mamut says her team of two engineers, who now manage AI agents instead of writing code, ships more features than her 30-person team at Amazon did in 2017. “The functions of product, design, and engineering are collapsing into one function,” she says. But she warns of “agent slop,” the problems that arise when companies deploy agents without proper supervision, and predicts that traditional SaaS companies that don’t become “agentic” will be “dead in five years.”
Vercel’s CEO says AI agents are now doing the work of individual contributors, making everyone a manager. As agents handle execution, humans shift toward oversight, strategy, and decision-making — essentially becoming “mini CEOs” of their agent workforce.
Dedicated AI agents are now taking over operations of e-commerce retailers, eliminating administrative work and allowing creative directors and owners to focus on ideating new products. This represents a major shift from AI assisting with tasks to AI autonomously running business operations.
Part 2: AI News
A federal judge temporarily blocked the Trump administration from designating Anthropic as a “supply-chain risk to national security,” calling the move “arbitrary and capricious” and potentially designed to “punish Anthropic.” The ruling came after Anthropic refused to allow its AI in fully autonomous weapons or in surveillance of Americans. Judge Rita Lin wrote: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary for expressing disagreement with the government.”
In an ironic security lapse, Anthropic accidentally revealed details about “Claude Mythos” (part of a new “Capybara” tier), a model it calls a “step change” in AI capabilities with major advances in reasoning, coding, and cybersecurity. The leaked blog post warns that the model “poses unprecedented cybersecurity risks” and represents “an upcoming wave of models that can exploit vulnerabilities in ways that far outpace defenders.” Cybersecurity stocks plunged following the news.
Major tech companies set ambitious climate goals at the start of the decade, but AI’s massive energy demands are complicating those commitments. Data centers running AI models consume enormous amounts of electricity, and some observers worry the buildout is “locking in more fossil fuels” as companies struggle to meet power needs with renewable sources alone.
Starting next month, AI systems will begin prescribing mental health medications through AI-powered consultations, letting patients get prescriptions faster and at lower cost by eliminating the weeks-long waits and high fees typically associated with prescription renewals.
Manuel Kroiss, who was deeply involved in developing xAI’s core large language model technology, has become the 10th co-founder to leave Elon Musk’s AI company ahead of its anticipated IPO. The departures raise questions about internal challenges at one of the AI industry’s highest-profile startups.
