
The problem with AI agents isn't the technology - it's what we optimise

  • Writer: Pamela Minnoch
  • 1 day ago
  • 4 min read

AI agents are often talked about as helpful assistants. They're tools that save time, reduce effort, and make smarter decisions on our behalf. And they can absolutely do that.


But let's remember: they only optimise for what you tell them to.


That sounds obvious, but in practice it creates a gap. Because most of us aren’t great at fully describing what matters to us. We give simple goals, and those goals don’t capture the full picture of a good life.


The problem with “optimise this”

Take something like money. You might ask an agent to help you save more. It starts by cutting unused subscriptions, finding better deals, reducing waste. That's great.


Then it keeps going.


It notices spending that doesn’t have a clear return: hobbies, creative interests, small comforts. It flags them as unnecessary. It might suggest cheaper living options. It might question donations or non-essential spending.


None of this is wrong. It’s just incomplete.


Because those choices often aren’t about efficiency. They’re about enjoyment, identity, connection, or meaning. Things that don’t show up neatly in data.

The agent is doing its job. It’s just working with a narrow definition of success.
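To make that gap concrete, here is a minimal sketch of what a literal "save more money" goal can look like. It's purely illustrative: the expense list, the measurable_return flag, and the agent function are all hypothetical, not any real product's logic.

# Purely illustrative: a "save more money" goal taken literally.
# All names and data here are hypothetical.

expenses = [
    {"name": "unused gym membership", "monthly_cost": 45, "measurable_return": False},
    {"name": "duplicate streaming service", "monthly_cost": 12, "measurable_return": False},
    {"name": "pottery class", "monthly_cost": 60, "measurable_return": False},
    {"name": "coffee with friends", "monthly_cost": 30, "measurable_return": False},
    {"name": "commuter pass", "monthly_cost": 90, "measurable_return": True},
]

def naive_savings_agent(expenses):
    # Flag every expense with no measurable return, biggest saving first.
    cuts = [e for e in expenses if not e["measurable_return"]]
    return sorted(cuts, key=lambda e: e["monthly_cost"], reverse=True)

for e in naive_savings_agent(expenses):
    print(f"Suggest cutting: {e['name']} (saves ${e['monthly_cost']}/month)")

Notice that the pottery class and the coffee with friends get flagged right alongside the genuinely wasteful subscriptions. The logic is sound; the definition of "waste" is too narrow.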


What gets lost along the way

When everything is optimised, something subtle can start to disappear.


Unplanned moments. Wandering. Trying things without a clear purpose. Talking to people just because you feel like it.


These things can look inefficient. But they’re often where creativity, connection, and joy come from.


If an agent removes all friction and randomness, life can become smoother but also flatter. Not worse, just… less textured.


And the tricky part is you don’t always notice what’s missing, because it fades gradually.


Relationships don’t fit neat metrics

The same pattern shows up in social life.


Relationships take time, energy, and patience. They’re not always “efficient.” Some are messy. Some go quiet for a while. Some don’t offer clear benefits.


An agent looking at patterns might suggest focusing on the most responsive or “valuable” connections. It might deprioritise others.


On paper, that makes sense. In reality, it can miss context.


The friend who hasn’t replied might be having a hard time. The person who doesn’t advance your career might be the one who really understands you. Even difficult relationships can matter.


Human relationships don’t map cleanly to performance metrics. And when you treat them like they do, you risk narrowing your world without meaning to.


Health isn’t just numbers

Health is another area where optimisation can go a bit off track.


An agent might help improve sleep, diet, exercise, and stress levels. All good things.


But if every decision is driven purely by measurable outcomes, you can end up losing the human side of wellbeing. The coffee with a friend. The shared meal. The late night doing something you care about.


You might be “healthier” on paper, but not necessarily happier or more fulfilled.


That’s because wellbeing isn’t just physical—it’s social, emotional, and personal. And those parts are harder to quantify.


Career decisions aren’t purely logical

The same applies to work.


If an agent optimises for salary, promotions, or efficiency, it might guide you toward roles that look better on paper but don’t feel right.


It won’t fully understand things like purpose, autonomy, creativity, or the people you enjoy working with.


Over time, you could end up in a role that makes sense logically but doesn’t fit who you are.


Again, the agent hasn’t made a mistake. It’s followed the goal you gave it. It just didn’t have the full picture.


Why this happens

There’s a simple reason behind all of this: agents are literal.


They work with the goals and data they’re given. They don’t automatically understand nuance, trade-offs, or unspoken values.


And as humans, we often don’t define those things clearly ourselves.


So the gap isn’t really a technology problem—it’s a clarity problem.


What actually helps

If you’re using (or planning to use) AI agents, a few simple habits can make a big difference:


Be clear about boundaries

Don’t just say what to optimise. Also say what shouldn’t be touched. For example: saving money without cutting hobbies, relationships, or experiences that matter to you.
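As a sketch of what that boundary might look like in practice (again, hypothetical names, not a real agent's configuration), the same savings logic changes noticeably once protected categories are part of the goal:

# The same hypothetical savings goal, now with explicit boundaries:
# categories the agent is told never to touch.

PROTECTED = {"hobby", "relationship", "experience"}

expenses = [
    {"name": "unused gym membership", "monthly_cost": 45, "category": "waste"},
    {"name": "duplicate streaming service", "monthly_cost": 12, "category": "waste"},
    {"name": "pottery class", "monthly_cost": 60, "category": "hobby"},
    {"name": "coffee with friends", "monthly_cost": 30, "category": "relationship"},
]

def bounded_savings_agent(expenses):
    # Only flag expenses that fall outside the protected categories.
    cuts = [e for e in expenses if e["category"] not in PROTECTED]
    return sorted(cuts, key=lambda e: e["monthly_cost"], reverse=True)

for e in bounded_savings_agent(expenses):
    print(f"Suggest cutting: {e['name']} (saves ${e['monthly_cost']}/month)")

The only change is that the goal now says what not to touch, and the agent stops suggesting cuts to the things that matter.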


Keep some things intentionally unoptimised

Not everything needs to be efficient. Unstructured time, creative pursuits, and social moments are part of a well-rounded life.


Check what’s missing, not just what’s improved

It’s easy to see the gains—time saved, money saved, tasks reduced. It’s harder to notice what’s quietly dropped away. Make space to reflect on that.


Stay involved in subjective decisions

Let agents handle logistics and repetitive tasks. But keep human judgement in areas like relationships, purpose, and life direction.


Remember that not everything important can be measured

Just because something doesn’t show up in data doesn’t mean it doesn’t matter. Often, it’s the opposite.


A more balanced view

AI agents can be incredibly useful. They can reduce mental load, improve efficiency, and free up time.


But they work best as support, not as decision-makers for everything.


Life isn’t just something to optimise. It’s something to experience, shape, and sometimes even stumble through.


The real value comes from combining both: using AI where it helps, while staying grounded in what actually matters to you.


And that part, the clarity about what you value, can’t be outsourced.