The spectator at the desk
On the troubling emptiness of extraordinary productivity
It’s genuinely intoxicating. We’ve come to call it vibe coding, but as an experience, it’s much more about the vibes than about the coding. I think the phrase originates in the idea that you’re coding based on your vibes, but honestly, the vibe that’s created in you by coding with an AI is much more interesting, and a much bigger and more important question to have in mind as we navigate the coming months and years.
I’m a big fan of working with AI. I use it a lot. I think we’re at the beginning of something transformative. But it’s going to throw up some difficult problems. This is a post about one of them.
I sit, focussed on the screen: surrounded by them, in fact. The world falls away. Robot’s cursor blinks at me, promisingly. What do I want? What magic should I conjure from the machine? The warmth of possibility spreads across my chest. I’m excited. The typing begins. A careful patchwork of ideas, experience, battle scars and optimism emerges.
Robot goes to work, weaving its multidimensional symbolic representation of the sum of almost the whole of human conceptual knowledge into my little patchwork - the idle wonderings of a Monday morning - and, from a flurry of exchanged tokens, working code emerges. A new website is born. It’s not perfect. It’s not complete. But it exists, and is mostly good enough. And, with a few more exchanges, something has been achieved. Something exists which didn’t before, and which, just a couple of years ago, would have required days or weeks of human effort and attention.
Over lunch, I bathe in the warm and luscious waters of this extraordinary productivity. Lunch, if we’re honest, is a bit irritating. I’d rather be talking to Robot, revelling in the delight of this creative power. Noticing that it would be just a bit nicer if it had this affordance, or that style, and summoning that from nothing in the time it takes to get a glass of water. Finally, I am the technomage of my childhood imaginings. I’m in my element. I’m flying. I’m a person who likes to get things done, and this is a lotta things.
Martha Lane Fox wrote the other day that “the price of initiative is collapsing”. She described her own experience:
In thirty minutes with my iPad, I went from “I want an app that does seating plans for parties” to a working version — guest names stored, tables generated, usable interface. I am not a developer. The AI did the building. I did the thinking and the deciding.
And she asks the natural question, and the one that has been front of mind for me for the last couple of months too: what does this mean? What are the implications? How will this influence us, our systems, our organisations and the world?
The day ends, and I struggle to sleep. Too many thoughts. Too many imagined conversations with Robot. I jot one or two of them down as I settle. Eventually sleep comes, and I wake on Tuesday more tired than usual. Late nights will do that. I wake, eat toast, check in on the day with my partners and grunt lovingly to the children as they leave for school.
The morning’s familial duties complete, I resume my seat. Overnight, Robot has been busy. I made sure to leave it working on something before I went to bed. To do otherwise would have felt wasteful. Why not leave it at work, into the wee hours? It won’t mind.
I stretch, take a sip from the first of many cups of tea, and unlock my computer. A reassuring summary of work sits before me. Robot has been busy, and it has taken care to explain its work to its human. I set about testing its changes. It is immediately clear that they are not very good, but that’s ok. Just some problems to fix.
We engage in conversation. Sometimes, I ask Robot for its opinion. Noticing that it is quite insightful, I congratulate it. This seems odd. But it can’t hurt to be polite. Later, it does something stupid, and I chide it gently. It duly apologises, and we continue. As I interact with what it’s building, frustration sets in. This feature isn’t right. It just feels off.
I take a break, imagining past conversations with clients in similar situations. Designers don’t like to be told to “make it pop”. Their understandable frustration arises from the fact that, though the client’s feelings about the work are reasonable and probably wise, this is not an effective way of communicating them to others. What does it mean to make it “pop”?
I return from my break, set on abiding by my own counsel. I tell Robot we’re going to step back from the problem and discuss it properly. After a productive conversation and some probing questions, it does a much better job. But the problem is bigger now, and the tasks take longer.
Time passes as Robot works diligently on my behalf while I watch, and occasionally interject. The work is going well. My frustration eases, but the intoxication induced yesterday does not return. I realise that I have spent the last couple of hours watching credible-looking bits of code and reasoning scroll by. I am, in fact, bored.
There’s something familiar about this feeling. Every improvement in tooling trades away some of the texture of work. Generally that’s a good trade: there are several clichés best avoided here, about the nobility of labour and the romanticisation of manual work. We’ve been making that trade for centuries, and mostly it’s been worth it. Our cleverer, better tools have been an extraordinary boon for humanity. But each step thins the connection between the person and the work, and AI is now doing the same for intellectual work.
Large Language Models are concept machines. They work on conceptual knowledge. It is literally impossible for them to be anything else, or do anything more than that. Knowledge is great, but it is distinct from experience, and we need both. Karl Ove Knausgaard wrote beautifully about this last year:
It feels as if the whole world has been transformed into images of the world and has thus been drawn into the human realm, which now encompasses everything. There is no place, no thing, no person or phenomenon that I cannot obtain as image or information. One might think this adds substance to the world, since one knows more about it, not less, but the opposite is true: it empties the world; it becomes thinner. That’s because knowledge of the world and the experience of the world are two fundamentally different things.
In the past, on a good coding day, I would have been in flow. It’s a wonderful and mysterious experience. It’s hard to define. But it feels like a close, immediate connection between me and the object of my craft and attention. An interplay, an interwoven dance of ideas and outcomes, made possible by skilful use of my ideas, experience and technical knowledge as applied to the task at hand. It’s a wonderful feeling. It’s meaningful.
It’s also one of the things that drew me to wood turning. You feel a piece of wood in your hands, inspect it, examine its weight and its figure, and load it into the lathe. You present the chisel to the work carefully, knowing that an inept approach will ruin it. As you adjust the chisel, and it pares away the wood, you can feel how your movements, the chisel, the machine and the wood interact with each other. How the slightest movement of your forearm or thumb in one direction or another will adjust the cut. Your attention narrows. The chisel ceases to feel like a cutting tool. It feels more like the medium you have at hand for a particular kind of relating. Slowly, you discover the form, and a bowl emerges. Sometimes, you look at the bowl, and you love it. This time, it was a good conversation. Other times, less so. But whatever the bowl looks like, it’s an inextricable combination of the wood and the human experience.
A technical, knowledge-based grounding is important to this process, and it’s where we all begin. The theory. In flow, this is not present: you don’t need to think about maintaining the correct placement of the chisel, or the correct pressure on the work. That has been internalised. You can just be present with the experience, the relating, the smell, the sound, the feeling and let the rest happen. Satisfying bowls are born, not specified.
Although computers are much more complicated than lathes, and the technical skill they require is greater and harder to acquire, working with them can and does have this quality too, for many people. Some are drawn to it for that. Others for reasons more pragmatic or prosaic. I’m somewhere in the middle. But there’s very little of that to be had with Robot. We are not in flow. I am not connected to the work. I am part spectator, part supervisor. The bowl is being made by a CNC machine; a lathe controlled by a computer. Despite appearances, it’s not the result of a conversation, of a deep way of relating. It’s the result of a very particular kind of knowledge exchange. Nothing about my experience has been made inextricable.
On Wednesday morning, I consider the experience of the previous day and decide I’m just not using the tool correctly. If I’m spending too much time watching it, I need to optimise. I must be bringing my attention to the problem at the wrong moments. I can dial that in. Optimising systems is firmly in my skillset. Let’s go!
As I settle down with my third cup of tea, I start a new project. I feel the flutter of excitement return. There’s nothing quite like a greenfield project: all idea, all imagining, no constraints, no existing codebase or ideas to fit into. I decide that what I need is multiple copies of Robot, working autonomously. And that what I need to do is carefully specify their work, and carefully review the results. No more watching Robot work! Just working with the ideas and the outputs.
So, with a little prompting, Robot sets about creating a framework for more robots to do more work. It’s a brave new world. I don’t only have one Robot now: I have a team. A swarm. The future has arrived. I set my new swarm of Robots to work.
After a remarkably short time, my Robots have done their work. I look at the results, and overwhelm brings me down to earth with a thud: I now have an extraordinary amount of reviewing, testing, tweaking and updating to do. I sigh, stiffen my resolve with the fifth cup of tea, and set to it. The morning’s excitement is decisively absent. With a growing sense of embattlement, I review change after change, mostly approving the work with minor amendments.
In a brightly depressing moment, I realise that what I have actually conjured for myself is a brave new world in which I shuffle pull requests around while discussing my goals and ambitions with a highly advanced statistical model that doesn’t have any feelings. Deciding that no amount of tea could possibly be adequate, I step away from the computer, and go outside to muck out the chickens.
The aroma of soiled chicken bedding is delightfully grounding. I spend the rest of the day doing things around the house.
Being in relation to a machine is a curious thing. It can absolutely be as rewarding as being in relation to anything else. The flow, the deep connection to the work and the problem, is a magical thing. But - for me, anyway - conversing with the AI doesn’t feel like that at all.
Knausgaard, in the same piece, touched on this too:
I bought my very first computer in 1990. It was a used Olivetti, already outdated, with a floppy-disk drive and extremely simple graphics, really just an advanced typewriter. But it also had a few games, including Yahtzee, which I opened one afternoon, only to find myself sitting there for hours. It was hypnotizing in a way I had never experienced before. This was a bit weird, because I would never have dreamed of sitting and playing Yahtzee alone in my bed with physical dice; that had no appeal and would have been more than a little pathetic. So what was the difference? What did the dice on the screen have that real dice didn’t?
Knausgaard’s computer created a sense of relation that the physical dice couldn’t. Coding with AI does this too. But not all relations with machines are the same. The lathe demands your whole self: hands, attention, judgment, all woven tightly into the work. The feedback is immediate and continuous. Being deeply in flow with the code does something similar. But AI asks for something narrower: it wants your ideas, your direction and your approval, but not your hands, and not your sustained attention. The relation exists, but it’s thin. It lives only abstractly, in the exchange of concepts.
I think this feeling of flow is, for many people, an essential part of what creates a sense of meaning in their work. But the speed at which Claude Code and similar tools can produce work is extraordinary, and it is going to change things profoundly. We’re simply not going to not use it. It’s just too useful.
“Never in human history have we discovered something useful and then chosen not to use it.”
Winston Duarte, in Persepolis Rising by James S.A. Corey
I am confident, though, that one of the consequences of using it in a widespread way is going to be a further thinning of the world: a deeper entrenchment of our existing bias towards over-conceptualising, a reduction in the genuine grappling with the world and the craft of work that is a source of meaning and purpose for many people in white-collar jobs. If we want to use AI well, we have to find ways for people who use it professionally to get their meaning, flow and real engagement from some other aspect of their work.
We need to carefully design our interactions with AI agents: what are the human touchpoints, the most effective places to inject our experience, goals, preferences, feedback and ideas? How do we work with these tools in ways that enlarge us? Wednesday’s experiment was an attempt at this. It proved more complicated than I thought, and that was just in trying to build a solution for me: in the context of a team, the problem is harder. People are trying things, though, and we shall see.
For developers and product teams, the good news is that the primary challenge of software development has never been “building the thing”: it’s deciding what thing to build in the first place. This is a problem we will have forever, and there’s an enormous amount of value to add, and meaning to be found, in tackling it. That’s the problem development teams really exist to solve: working out how, in the midst of the multi-stakeholder, multi-user, technically-complex, resource-constrained chaotic world of competing ideas and priorities, we can still do good work that makes things better for people.
Because there’s no AI that can do that, and there’ll always be more work to be done than people to do it.

