A recent article on Ars Technica described a new capability from Anthropic’s Claude Code (among other AIs): it can take control of a user’s computer and complete tasks on their behalf. It can open applications, click through interfaces, navigate workflows, and carry out multi-step actions—all from a simple instruction.

To some, especially those who primarily use AI for prompts, image generation, and the like, this may come as a surprise. Even I, a fairly heavy AI user, was caught somewhat off guard. But when you think about it, this development is really the natural evolution of automation and shouldn’t be surprising (nothing with AI surprises me these days). It also points to something more fundamental: we’re entering a phase where software isn’t just used by people anymore. It’s starting to be used by AI acting on behalf of people.

And once this shift begins, it changes how everything needs to be built.

From Interfaces to Outcomes

For years, we’ve built software around a predictable pattern. Developers design interfaces, users interact with them, and tasks get completed step by step. Even when AI entered the picture, it largely stayed in a supporting role—helping users think, write, calculate, or plan. But tools like Claude Code (and others) represent a transition into something new. Instead of guiding the user through a process, AI can now execute that process directly. The user describes the outcome, and the system handles the rest. This is the emergence of agentic AI—software that doesn’t just assist, but acts.

This Isn’t Entirely New—But It Is Different

It’s worth noting that the idea of software controlling other software isn’t new. Tools like Microsoft Power Automate already allow users to automate repetitive tasks by scripting actions like clicking buttons, filling out forms, and moving data between systems. Even Copilot tied to personal Microsoft 365 accounts includes some of these abilities (I’ll be exploring these further in a separate post).

But tools like Power Automate rely on predefined workflows, and they tend to be clunky and inefficient. They do exactly what they’re told to do based on the flow that’s been built. If something changes—a button moves, a layout updates, or a field behaves differently—the automation often breaks and needs human intervention.

What’s changing now is the layer on top. With AI systems like those described in the Ars Technica article, the user no longer defines the steps—they define the outcome. The system determines how to get there, adapting along the way.

That shift—from scripted automation to goal-driven behavior—is what makes this moment different.

When the UI Stops Being the Product

That shift has immediate implications for how we think about user interfaces. If an AI can operate your application the same way a human does—clicking buttons, filling out forms, navigating menus—then your interface is no longer the primary experience. It becomes an intermediary layer that may or may not even be seen. What matters instead is whether your system is understandable and predictable enough for an AI to use reliably. Clarity starts to outweigh cleverness. Consistency becomes more valuable than creativity. The question quietly changes from “Is this intuitive for a user?” to “Is this operable by an agent?”

APIs Become the Real Surface Area

At the same time, APIs move from being a secondary consideration to the center of your product. AI agents don’t actually want to click through your interface if they don’t have to—they do it because they’re forced to. Given the choice, they will always prefer structured, direct access to functionality. This means that the real surface area of your application isn’t the frontend anymore—it’s the set of actions your system exposes. Whether you’re building WordPress plugins, Bubble apps, or data tools, the critical design question becomes: how easily can an external agent call into this and get a predictable result?
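To make this concrete, here is a minimal sketch of what an agent-facing surface area might look like: a registry of named actions with declared parameters, behind a single entry point an external agent could call. All of the names here (`get_quote`, `invoke`, the `ACTIONS` registry) are hypothetical, not from any real product or library.

```python
def get_quote(quantity: int, unit_price: float, discount: float = 0.0) -> dict:
    """One exposed action: return a price quote as structured data."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    subtotal = quantity * unit_price
    total = subtotal * (1 - discount)
    return {"subtotal": subtotal, "discount": discount, "total": round(total, 2)}

# The agent-facing "surface area": each action declares what it accepts,
# so an agent can discover and call it without touching a UI.
ACTIONS = {
    "get_quote": {
        "handler": get_quote,
        "params": {"quantity": "int", "unit_price": "float", "discount": "float, optional"},
    },
}

def invoke(action: str, **kwargs):
    """Single entry point an external agent could call by name."""
    if action not in ACTIONS:
        raise KeyError(f"unknown action: {action}")
    return ACTIONS[action]["handler"](**kwargs)
```

The design point is that the frontend becomes optional: a human UI and an AI agent can both route through the same `invoke` layer, and the predictability of that layer is what the agent actually depends on.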

From Features to Capabilities

This leads to a more subtle but important reframing of what we build. Features used to be designed around user interaction. A pricing calculator, for example, exists so that a person can input values and receive an answer. But in an agent-driven world, that same functionality becomes a capability—something an AI can invoke as part of a larger chain of actions. The distinction matters because it changes how you structure your systems. You’re no longer just designing experiences; you’re assembling building blocks that can be orchestrated.
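The pricing-calculator example above can be sketched in code: the same logic works as a standalone capability a person might use directly, or as one step an agent orchestrates inside a larger task. Function and field names here are illustrative assumptions, not from any real system.

```python
def price_quote(quantity: int, unit_price: float) -> float:
    """Capability: compute a total. Callable by a UI or by an agent."""
    return quantity * unit_price

def draft_order_summary(customer: str, quantity: int, unit_price: float) -> str:
    """A larger chain of actions that invokes the capability as one step."""
    total = price_quote(quantity, unit_price)
    return (
        f"Order for {customer}: {quantity} units "
        f"at {unit_price:.2f} each, total {total:.2f}"
    )
```

Structured this way, the calculator is no longer an experience someone navigates to; it is a building block that `draft_order_summary`, or any agent-driven workflow, can compose.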

The Risk: Software That Disappears

There’s a harder truth underneath all of this. A large category of software—especially tools that revolve around repetitive workflows, structured inputs, and predictable outputs—becomes increasingly vulnerable in this environment. If an AI can perform those steps directly, the value of the interface itself begins to erode. Users won’t log into a dashboard to complete a process if they can simply ask for the result and have it delivered. What gets displaced isn’t necessarily the functionality, but the way that functionality is packaged and accessed.

The Opportunity: Moving Up the Stack

At the same time, this shift opens up a different layer of opportunity. As the lower-level mechanics of software become automated, the value moves upward. Tools that simply track or store information become less compelling than those that interpret it, guide decisions, or adapt to the user. A writing log evolves into a writing coach. A pricing tool becomes an assistant that suggests strategy. An educational product transforms into something that responds dynamically to the learner. The more your product participates in thinking, rather than just recording, the more resilient it becomes.

Developers as Directors

For developers, this also changes the nature of the work itself. AI systems are increasingly capable of handling the mechanics of development—writing code, debugging issues, setting up environments, and even testing workflows. The whole vibe coding movement is a testament to these enhanced AI capabilities. The role of the developer shifts accordingly. Instead of focusing primarily on implementation, the emphasis moves toward direction: deciding what to build, how systems should behave, and how different pieces connect. The skill is less about producing code line by line and more about shaping systems that others—including AI—can execute.

The Hidden Challenge: Reliability

Of course, handing control over to AI introduces its own set of challenges. These systems are not perfectly reliable. They misunderstand instructions, take unintended paths, and behave in ways that are difficult to predict. When an AI is interacting with your software, it won’t always follow the happy path you designed. It will click the wrong thing, attempt actions out of order, and push your system into edge cases. That makes robustness and guardrails essential. Validation, clear constraints, and predictable outcomes become critical—not just for user experience, but for system integrity.
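One practical form such guardrails can take is validating every request up front and refusing with structured, machine-readable errors rather than failing silently, so an agent that took a wrong path gets feedback it can correct from. This is a sketch under assumed names (`transfer_funds`, the `ok`/`errors` shape), not a prescription.

```python
def transfer_funds(amount: float, balance: float, daily_limit: float = 1000.0) -> dict:
    """Validate an agent-initiated action before touching any state."""
    errors = []
    if amount <= 0:
        errors.append("amount must be positive")
    if amount > balance:
        errors.append("insufficient balance")
    if amount > daily_limit:
        errors.append("amount exceeds daily limit")
    if errors:
        # A structured refusal the calling agent can read and adapt to,
        # instead of an opaque failure or a corrupted system state.
        return {"ok": False, "errors": errors}
    return {"ok": True, "new_balance": balance - amount}
```

The same checks that protect against a confused agent also protect against a confused human, which is why robustness pays off twice in an agent-driven environment.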

A New Default for Software

What the Claude Code story ultimately points to is a broader transformation and a future of software design that may be very different from what it is today. Software is no longer used exclusively by humans. Increasingly, it is used by AI acting on behalf of humans. That may sound like a small distinction, but it has far-reaching consequences for how we design, build, and think about digital products.

The question developers need to start asking isn’t just how a user moves through their application. It’s how an intelligent agent would approach it, what actions it can take, and how reliably it can achieve a goal. Because if this trend accelerates, as it seems sure to, the most important user of your software might not be the person sitting at the keyboard—but the system working on their behalf.