2025 was touted as the year of agents. Yet halfway through the year, we have a fragmented market and few clear productivity gains.

It’s clear now that the success of enterprises deploying AI agents won’t come from automating everything, but from automating the right things. There’s a need to continuously reshape the line between human action and machine execution as context changes. Think of that line as a partition function: a living boundary that constantly adapts to new context and allocates attention where it adds the most value.


The Problem: Bad Partitions = Bad Products

Most automation platforms still pick one of three brittle strategies:

  1. Static expert rules (RLHF, prompt‑roulette) - Powerful but costly and slow to adapt. This is the path pursued by firms like OpenAI and DeepMind, where training ever-larger models is bolstered by synthetic and curated data from structured tasks.
  2. Classical RPA - Top‑down workflows that shatter when the UI or process drifts. These platforms, such as UiPath or Automation Anywhere, enable users (often consultants) to manually identify which parts of their job to offload to bots.
  3. Tool-use data - Passive screen scraping produces lots of “exhaust data,” little understanding, and a creepy UX. In theory, this generates the right kind of data: rich, grounded, contextual. In practice, it often misses the user’s intent, and learning effectively from this data is hard to scale.

All three choke on the same issue: enterprise behavior isn’t tidy. You don’t just need the right data; you need the right context and an understanding of it. Workflows are often undocumented, idiosyncratic, and fluid, diverging by team, by quarter, and even by individual. Organizational structures can also embed competing local incentives, demanding negotiation and complex stakeholder management that agent design often misses. At two different companies, the same job title may involve entirely different software stacks. The richest intent—the priorities, trade‑offs, and strategic decisions we make every day—lives in people’s heads, not in spreadsheets or logs. It’s easy to automate an email; it’s much harder to send it at the right time with the right approach.


Splitting the Effort

The deeper problem is the partitioning of the labor itself. In physics, a partition function describes how energy is distributed across states. In enterprise automation, we might ask: how should effort be split between human and machine? Make the partition too coarse, automating everything or nothing, and the result is either disruption or underperformance. Make it too fine, and you risk death by a thousand handoffs.

We need new paradigms for effective human-AI collaboration. This means crafting new points of deferral for tasks where the bottleneck is expertise, not speed. For example, LLMs already write code much faster than humans can evaluate it.

Here is a simple mental model for the role of humans, agents, and the partition to achieve a productive harmony:

  • Human: sets objectives, clarifies ambiguity, approves edge cases, and verifies outputs.
  • Agent: reliably executes high-confidence steps and aggregates state.
  • Partition: moves as confidence scores and objectives evolve, assigning tasks to either the human or the agent.
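As a rough sketch, this moving partition can be modeled as a confidence-gated router whose threshold shifts with observed outcomes. Everything here (`Task`, `route`, `update_threshold`, the step size) is illustrative, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    confidence: float  # agent's self-estimated probability of success

def route(task: Task, threshold: float) -> str:
    """Assign a task to the agent or the human based on confidence."""
    return "agent" if task.confidence >= threshold else "human"

def update_threshold(threshold: float, succeeded: bool, step: float = 0.02) -> float:
    """Move the partition: relax it after agent successes, tighten after failures."""
    new = threshold - step if succeeded else threshold + step
    return min(max(new, 0.5), 0.99)  # keep the boundary in a sane range

# The partition shifts as outcomes come in:
threshold = 0.8
print(route(Task("draft follow-up email", 0.92), threshold))      # agent
print(route(Task("approve pricing exception", 0.55), threshold))  # human
threshold = update_threshold(threshold, succeeded=True)  # boundary relaxes slightly
```

Real systems would calibrate confidence against ground truth rather than trust the agent's self-report, but the shape is the same: a single scalar boundary that both sides of the partition can observe and move.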


Task & Context

Having a high-functioning partition requires context engineering that helps the agent make informed decisions. Today’s enterprise workers need a partition that’s adaptive, continuous, and unobtrusive. The partition should flex as work evolves and learn from the worker, guiding them when helpful, receding when not. Tools like Clay, Tines, and n8n, which empower users to design light-touch automations or agents, hint at this future.

Today’s reality feels like a wild-west mashup of queries and contexts, colliding haphazardly in a single chat. We lack the adaptive, nuanced partitioning that makes interaction with AI not just useful, but intuitive. Large models need to be able to move from smart generalists to deft specialists, seamlessly creating and forgetting contexts as needed. Partitioning is becoming a key challenge as we move from hand-crafted agent scaffolding to systems that rely on scale—an idea highlighted by Noam Brown. As models scale, our approach to partitioning needs to evolve too.

The agents we come to rely on will do two things at once:

  • Task partitioning: Decide, moment‑to‑moment, which subtasks agents can own with confidence and which need human judgment.
  • Context partitioning: Spin up dedicated “micro‑contexts” so that a sales‑call strategy doesn’t bleed into dinner‑planning prompts.
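Context partitioning can be sketched as keeping per-topic histories so that only the relevant one is sent to the model. `ContextPartitioner` and its explicit topic keys are hypothetical; a real system would classify turns into topics with a model rather than a hand-supplied key:

```python
from collections import defaultdict

class ContextPartitioner:
    """Keep per-topic 'micro-contexts' so one conversation doesn't bleed into another."""

    def __init__(self, max_turns: int = 20):
        self.contexts: dict[str, list[str]] = defaultdict(list)
        self.max_turns = max_turns

    def add(self, topic: str, message: str) -> None:
        ctx = self.contexts[topic]
        ctx.append(message)
        if len(ctx) > self.max_turns:  # forget stale turns instead of growing forever
            del ctx[: len(ctx) - self.max_turns]

    def window(self, topic: str) -> list[str]:
        """Only this topic's history reaches the model."""
        return list(self.contexts[topic])

parts = ContextPartitioner()
parts.add("sales-call", "Lead with the Q3 integration story.")
parts.add("dinner-planning", "Book a table for four at 7pm.")
# A sales-call query sees only sales-call turns:
print(parts.window("sales-call"))
```

The interesting design choice is the forgetting rule: trimming each micro-context is what lets the system "seamlessly create and forget contexts" instead of accumulating one ever-growing chat.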

When these partitions are adaptive, agents will feel as integral as a top human assistant. And a real, game-changing opportunity emerges when platforms can combine user-defined scaffolding (outcomes & guardrails) with ambient, passively collected data, forming a continuous feedback loop that improves both the model and the human.


The Partition Problem

Founders who treat partitioning as a first-class problem will need to understand how their products can reduce integration time, learn enterprise-specific nuances quickly, and earn user trust while establishing boundaries. Beyond inference compute budgets, users face a cognitive budget: building the right mental models for when to defer to an agent will be critical. Right now, users are caught in the middle, forced to decide whether the agent is capable enough to act alone, needs to be taught via demonstration, or is simply blocked and requires direct intervention.

The teams that get this delicate behavior right will unlock a smarter partition function that frees us not just from tedious tasks, but from tedious work altogether. We won’t feel like we’re babysitting automations, but like we’re empowered and one with the machine.

If you’re building with a labor partition in mind, I’d love to meet.



Thanks to Divy Thakkar for additions to this post, shared in his personal capacity. And thanks to Aya Somai for sharing her thoughts on the concept.