There is a question I keep returning to when I think about AI, and it is not the one most companies seem obsessed with.
The usual question is: how much faster can we work?
That question matters. I am not going to pretend speed is irrelevant just to appear more spiritually advanced than a quarterly business review. Speed matters because time matters. If something that used to take three hours now takes twenty minutes, that is not a small change. If a first draft, a meeting summary, a research synthesis, a product outline, or a client preparation document can be produced faster, then something real has shifted. There is enough research now to show that AI can produce measurable gains in certain knowledge-work tasks, including controlled studies where professionals completed realistic writing tasks faster and with higher quality when supported by generative AI: Generative AI at Work, Science.
But speed is only the surface.
The deeper question is much more uncomfortable.
If AI gives time back, what is my work actually for?
That question sounds philosophical, but it is also brutally practical. Because the moment AI starts removing friction from work, it also starts removing excuses. It becomes harder to hide behind the blank page, the overflowing inbox, the administrative drag, the repetitive reporting, the endless context switching, and the comforting fiction that we would do more meaningful things if only the operational world gave us permission.
At some point, the excuse gets weaker.
And then we have to decide what we do with the time, attention, and energy that comes back to us.
For me, that moment arrived when multitasking stopped feeling like the relevant conversation. Before AI became part of my daily work, I often thought in terms of obligations. What do I have to deliver? What do I need to prepare? What needs to be structured, answered, cleaned up, summarized, organized, or moved forward? That world is familiar to anyone working in modern knowledge work. We do not only perform tasks. We constantly carry the mental cost of remembering, switching, restarting, and reloading context.
Then AI started taking over parts of that load.
Not all of it. Not magically. Not without review. This is not one of those fairy tales where the machine arrives and humans finally drink wine in linen shirts while productivity expands into the sunset. But enough changed that I began asking a different question.
Not only: what do I have to do?
But: what else do I want to do?
That distinction is massive.
Suddenly, I had fewer excuses for not writing my novel, not developing the app idea that kept circling in my head, not sharing my thoughts publicly, not using the domain name I had secured years ago to build something meaningful, not helping my wife with her own business-building initiative, not reading the book I kept claiming I wanted to read.
AI did not create those desires.
It removed some of the friction that made them easier to postpone.
And that is where the real conversation begins.
AI does not just save time. It changes the meaning of time.
There is a cheap version of the AI productivity conversation that treats time as a recoverable corporate asset. Save five hours per week. Automate repetitive tasks. Reduce administrative load. Increase output. Very neat. Very measurable. Very suitable for a slide with a suspiciously optimistic icon.
But time is not only capacity.
Time is possibility.
When AI gives time back, the most important question is not whether we can produce more of the same. The question is whether we can become more intentional about what deserves our attention. That applies to individuals before it applies to companies. If a professional gets two hours back, the obvious temptation is to fill those two hours with more of the same noise. More emails. More calls. More internal updates. More content. More reporting. More evidence that one is busy enough to remain safely employable in a system that still confuses movement with value.
Humanity has built entire careers out of looking unavailable in Outlook, so clearly we are doing splendidly as a species.
The better possibility is different. It recognizes that "more e-mails" do not automatically mean "more results". It is an economy of quality over quantity.
The time AI gives back should create room for better work, not just more work. Better conversations. Better preparation. Better thinking. Better learning. Better relationships. Better recovery. Better decisions. Better observation. Better contribution: if I throw a third ball into the air, I am no longer afraid of dropping all of them.
AI became a sparring partner that helps me hold context, restart faster, structure thoughts, challenge blind spots, and move from vague intention to something I can actually work with. It does not replace my judgment. If anything, it demands more of it. But it reduces the mental tax of restarting, which means I can take more ideas seriously without fearing that each new idea will collapse the rest of my life into a pile of unfinished tabs and mild self-disgust.
That is not just productivity.
That is cognitive relief.
And cognitive relief changes what people dare to attempt. It also connects directly to the argument I made in AI Has Lowered the Cost of Starting. It Has Raised the Standard for Finishing: AI makes the beginning easier, but it does not remove the human responsibility to decide what deserves to be finished.
The hidden tax was never only the task. It was the switching.
A lot of professional work is not hard because each individual task is hard. It is hard because the mind has to keep changing rooms.
You prepare for a client meeting, then answer an internal message, then review a deck, then remember a budget question, then switch into strategy mode, then respond to a personal obligation, then return to the original thought and discover that the whole thing has evaporated. The workday becomes less like deep work and more like trying to cook five meals in five kitchens while someone keeps moving the ingredients.
This is one of the reasons AI can feel so powerful even when it does not technically “do the job.” It helps bridge the gaps between contexts. It reminds you where you were. It gives you a first structure. It reduces the pain of re-entry. It can hold parts of the messy continuity that the modern workplace has happily outsourced to human anxiety. And it does so without rolling its eyes when you ask too many questions.
That matters because context switching has a real emotional cost. Microsoft’s Work Trend Index has repeatedly framed modern work as overloaded by meetings, messages, and fragmented attention, which is a polite corporate way of saying that knowledge work has become a machine for grinding down focus: Microsoft Work Trend Index.
For me, the fear was never only that a new idea would take time. It was that a new idea would destabilize the rest. I would avoid starting things because starting them meant creating another open loop, another unresolved track, another obligation to remember. AI made some of that less dangerous. It helped me create enough scaffolding around ideas that they no longer had to live entirely inside my head.
This is why the conversation about “time given back” needs to become more precise.
AI does not only save minutes.
It saves re-entry energy. It reduces the penalty for stopping and starting. It lets people resume thinking without paying the full cognitive toll each time. It can turn scattered ambition into structured progress. That is a different kind of value than simple automation.
There is a warning here, though, and I think it matters.
I spent almost the first forty years of my life working without AI. That taught me prioritization the hard way. I had to learn what not to start, what not to chase, what to ignore, and what to let die. That discipline becomes even more important when AI expands what feels possible. The danger is that people receive this new superpower without first learning constraint. If everything becomes easier to begin, the ability to choose becomes more important, not less.
That is one of the great paradoxes of AI.
It gives us more capacity, but it also demands better taste.
It lowers the friction of action, but raises the importance of intention.
Purpose becomes clearer when execution gets cheaper.
The more AI compresses execution, the more exposed purpose becomes.
This is where I think many people are still looking at the change from the wrong angle. They ask which skills AI will replace. That question is valid, but incomplete. A better question is: which parts of our work were only treated as central because they were expensive to execute manually?
For years, many skills carried professional value because they required repetition. Knowing how to set up the first version of a presentation. Knowing how to structure a workweek. Knowing how to build a strawman. Knowing how to translate a vague discussion into a set of next steps. Knowing how to manage work in sprints. Knowing how to create the first shape of something useful.
Those skills still matter, but they matter differently now.
AI can support many of those first-pass activities. It can draft, structure, summarize, reframe, compare, and suggest. It can help a person get to the first version faster. That means the value shifts away from the act of producing the first shape and toward the ability to observe, contextualize, challenge, and improve it.
Observation becomes more important.
Pattern recognition becomes more important.
Judgment becomes more important.
Charisma does not become obsolete, because humans remain emotionally complicated mammals who need to be convinced of things even when the spreadsheet is already sobbing in agreement. Getting people on board is still hard. Alignment still requires trust. Leadership still requires presence. A better argument does not automatically move an organization if nobody feels safe enough, seen enough, or motivated enough to act on it. The heightened presence, or sudden absence, of certain collaboration habits might eventually make us question our ways of working altogether, reopening the debate over agile methodologies for the millionth time.
This is why I do not believe AI makes the human contribution smaller.
It makes the real human contribution harder to hide from. That is also why I keep returning to the responsibility side of AI: if AI helps us create, decide, and act faster, it also raises the standard for what we are accountable for, which I explored in AI Doesn’t Just Change What You Create. It Changes What You’re Responsible For.
If your work was mostly execution, AI will pressure your identity. If your work was judgment, perspective, taste, empathy, trust-building, or the ability to connect ideas across contexts, AI may amplify you dramatically. It will not do that by making you less human. It will do that by removing some of the repetitive layers that kept your more human contribution trapped underneath operational noise.
That is hopeful and also immensely demanding.
Because when the repetitive layer becomes easier, we have fewer excuses for not improving the meaningful layer.
Sales was never about sending emails.
The easiest way to make this concrete is to look at actual roles.
Take sales.
A lot of sales work gets buried under activity that looks productive but is not the essence of the profession. Drafting outreach emails. Preparing account summaries. Cleaning CRM notes. Writing follow-ups. Reconstructing meeting context. Chasing internal inputs. Creating pipeline updates. All of these things may be necessary, but they are not the reason a good salesperson creates value, even if some people execute them faster than others.
Sales is not about sending emails.
Sales is about understanding clients deeply enough to help them solve problems they may not yet fully understand themselves.
If AI gives a salesperson time back, the worst use of that time is to send more generic outreach at higher speed. That is not leverage. That is industrialized irritation. The better use is to spend more time understanding the client’s world, preparing better questions, connecting patterns across conversations, identifying hidden risks, and showing up with more relevant thinking.
The salesperson who benefits most from AI is not the one who automates sincerity. I am tired of seeing LinkedIn posts from people saying “Claude helped me send 3127 outreach e-mails and made salespeople irrelevant”.
The salesperson who benefits most from AI is the one who uses it to clear the administrative fog so that sincerity, curiosity, and commercial judgment have more room to operate. An old mentor of mine said, “the first rule of sales is knowing that you will not be successful if you try to sell something your client doesn’t need,” and using AI properly should mean finding out how to earn your client’s money by providing them true benefit.
That shift matters because trust is still human. A client does not feel understood because your email was generated efficiently. A client feels understood because the conversation reflects context, timing, empathy, and consequence. AI can help prepare that conversation. It cannot care about the outcome. The human has to bring that.
So when sales gets time back, the real question is not how many more touches can be created.
The real question is whether the salesperson can become more useful.
Marketing was never about producing the first draft.
Marketing faces a similar trap.
AI makes it easy to generate copy, concepts, taglines, variations, campaign structures, content calendars, audience hypotheses, and more versions of the same idea than any reasonable person should be forced to read. This is useful, but it also creates a dangerous temptation: mistaking volume for marketing quality.
Marketing was never about producing first drafts.
It was about shaping meaning.
A marketer’s purpose is not to flood the world with more content. The world is already drowning in content, and much of it has the nutritional value of damp cardboard. The point is to understand what should be said, why it matters, who needs to hear it, and what emotional or strategic shift the message is supposed to create.
AI can help create options.
The marketer still has to develop taste.
This is where time given back becomes precious. Instead of spending most of the energy producing the first version of everything, marketers can spend more energy sharpening positioning, rejecting generic language, testing resonance, understanding audience psychology, and making sure the message deserves attention. That is harder than generating text. It requires judgment and restraint.
In that world, the best marketers become editors of meaning, not operators of content machinery.
They use AI to explore possibility, but they do not outsource taste.
HR was never about headcount reports.
The HR example may be the most emotionally important one.
A lot of HR and People work is consumed by reporting, policy explanation, salary trend analysis, procedural communication, workforce planning updates, and administrative coordination. These activities are necessary, but they are not the highest purpose of the function.
HR was never really about headcount reports.
It was about building conditions in which people can do good work without being slowly crushed by the environment around them.
If AI gives HR teams time back, the opportunity is not only faster reporting. It is more space for culture, trust, development, conflict prevention, better onboarding, healthier leadership behaviors, and earlier detection of organizational stress before it turns into attrition.
This is where I think the psychological safety conversation becomes more complicated.
Organizations love to promise psychological safety. Sometimes they mean it. Sometimes they mean it until someone says something inconvenient. In a human-only environment, psychological safety is a promise from a counterpart you may have very little reason to trust. Telling HR or leadership that you struggle with focus, anxiety, neurodivergence, burnout, or uncertainty can feel like opening the first door toward being seen as less required, less resilient, less promotable, or quietly easier to neglect.
That may sound harsh.
It is also how many people experience work.
AI changes the reflection layer because it can provide a consequence-free counterpart. Its judgment is not perfect and should not be blindly trusted, but it is absent of career consequence. You can ask it where your thinking is weak. You can test a concern. You can explore whether you are being unreasonable. You can prepare for a hard conversation. You can examine your own behavior before bringing it into a human setting where politics, hierarchy, and interpretation matter.
That does not replace HR.
It raises the bar for HR. It even justifies the ongoing push to rebrand the function as “People & Culture”.
Because if AI becomes the safer first place for people to reflect, then human organizations need to become much better at earning trust, not merely declaring it. “People & Culture” should not fear that. It should learn from it. The goal should be to create environments where people can bring more of their actual reality into the workplace without being penalized for honesty.
If AI gives HR time back, it should be spent making the company more humane.
Not more administratively elegant.
Product management was never about writing tickets.
Product management is another obvious example, partly because it has always been a strange profession where people spend half their time trying to explain what their job is and the other half being blamed for everything that does not fit neatly into another function.
Product was never about writing tickets.
It was about improving decisions under uncertainty.
AI can help summarize interviews, structure research notes, draft requirements, compare options, create roadmap narratives, synthesize feedback, and prepare decision material. That is useful. In some cases, it is transformative. But none of that replaces the core product responsibility: deciding what matters, what trade-offs are acceptable, what should be built, what should not be built, and how to help a team learn faster without losing coherence.
The value of product work moves further toward judgment, even if AI’s lack of humanity strips away nuance where none was wanted: business value might become more black and white, leaving more room for creativity and genuine service to the client.
If AI gives product managers time back, that time should go into deeper customer understanding, sharper trade-off thinking, better stakeholder alignment, stronger discovery, and more honest confrontation of uncertainty. It should not simply produce more tickets, more roadmaps, more artifacts, and more ceremonies. We already have enough ceremonies. At some point the calendar begins to look like a minor religion.
The better product manager in the age of AI is not the one who generates the most documentation.
It is the one who uses AI to remove enough friction that the team can spend more energy on the decisions that actually change outcomes.
Leadership cannot hide behind answers anymore.
There is another role that deserves its own attention: leadership.
One of the strangest assumptions in many organizations is that leaders are supposed to have answers. That sounds reasonable until it becomes a performance. Then leaders stop asking questions because questions feel like weakness. They start defending clarity even when the situation is unclear. They create certainty where exploration would be more honest. They become trapped by the expectation that authority means knowing.
AI punishes that mindset.
Not directly. It does not walk into the boardroom and say, “nice strategy, did you generate that from panic?” Although, frankly, some boardrooms might benefit from the feature. But AI rewards curiosity. It rewards iteration. It rewards people who can ask better questions, test assumptions, examine blind spots, and compare perspectives quickly.
That makes authentic leadership more important, not less.
In a world where information asymmetry begins to collapse, where employees have access to stronger reasoning support, where junior people can challenge assumptions with better preparation, and where strategic-grade thinking becomes more widely accessible, leadership cannot rely only on being the person with access to the room, the data, or the interpretation.
The remaining differentiator becomes trust.
Authenticity.
Emotional resilience.
The ability to create direction without pretending complexity has disappeared.
The ability to ask questions publicly without losing authority.
Leaders need to become more comfortable saying: I do not know yet, but here is how we will learn. That may become one of the clearest signs of leadership maturity in the AI era.
Because if AI gives leaders time back, it should not only help them prepare better presentations.
It should help them become less performative and more honest.
What if work slowly moves away from survival?
The most provocative version of this conversation goes beyond professional roles.
It asks whether work might eventually become less central to survival.
That sounds utopian, and maybe it is. But some utopian questions deserve to be taken seriously when the underlying conditions begin to change. Human history is full of predictions that extrapolated the current system forward and missed the moment when the system itself changed. There were fears that cities would drown in horse manure because urban transportation depended on horses. Then cars changed the underlying constraint. Cars created other problems, obviously, because humanity never exits one problem without immediately building a premium version of the next one. But that specific future disappeared. As the historian Martin V. Melosi has written, the horse-powered city created massive waste and sanitation challenges before motor vehicles changed the structure of urban transportation altogether: The Automobile and the Environment in American History.
I wonder whether work as survival may face a similar long-term transformation.
For most of human history, work has been tied to scarcity. We worked because food, shelter, safety, medicine, transportation, communication, and comfort required human labor. If something needed to exist, people had to produce it, move it, calculate it, repair it, sell it, coordinate it, or defend it. Work became the mechanism through which survival was earned.
If AI, automation, robotics, energy innovation, and better systems eventually reduce scarcity dramatically, then the meaning of work may shift. Think about it: the sun is a free, never-ending source of energy. The limitation is not its lack of existence; it is our inability to harness it without compromise.
Not tomorrow.
Not evenly.
Not without conflict.
But directionally, the possibility is more real than it has ever been.
If food is no longer scarce, why should access to it depend on economic survival games? If transportation becomes autonomous, why should driving remain compulsory rather than recreational? If many forms of execution become automated, why should human value remain tied to the ability to perform tasks machines can do better, faster, and more consistently?
These questions are uncomfortable because they challenge the moral architecture of work.
Many people still believe work is what makes people deserving. But if survival work becomes less necessary, then we will need a broader definition of contribution. We will need to become more comfortable with people choosing passion, care, creativity, coaching, learning, parenting, community, sport, craft, research, and service as legitimate forms of meaning.
The phrase “you are not your work” may eventually be said with less suspicion.
That would be a beautiful shift.
But it would not be simple.
Humans will still compete. The question is what for.
A world with less scarcity does not automatically create a world without envy, status, ambition, or comparison.
That would be adorable. Also completely inconsistent with everything we know about humans, sports, families, neighborhoods, comment sections, and the mysterious emotional intensity of amateur football leagues.
If work becomes less tied to survival, people will still compete. They will compete for mastery, recognition, excellence, attention, love, legacy, and the deeply annoying satisfaction of being better than someone else at something. The difference is that the stakes might change.
I think about this through sport.
I love wakeboarding. I was once very good at it, and part of me would probably still do it every day if I did not need to work for money. But even in a world of abundance, I would still have to face the fact that I will never be good enough to compete in it at the highest levels. Others would progress faster. Some would receive more attention. Some coach might choose someone else because they were more promising. The absence of financial survival pressure would not remove human comparison.
It might even make it more visible.
Consider tennis. If winning Roland Garros no longer made someone rich, would fewer people pursue it because the financial reward disappeared? Or would more people pursue it because the cost of learning, training, and participating was no longer tied to family wealth and economic sacrifice?
Would people be less jealous, because money was no longer the point?
Or more jealous, because talent and recognition would stand exposed without the convenient explanation of unfair access?
These questions matter because they prevent the conversation from becoming childish. Abundance does not remove human nature. It changes the arena in which human nature expresses itself.
That is why the goal cannot simply be a world where nobody needs to work.
The goal should be a world where more people can choose what kind of contribution they want to make, and where fewer people are forced to spend their lives trapped in work that exists only because scarcity, friction, and badly designed systems made it necessary.
The future starts at the individual level.
It is tempting to push all of this into the far future and leave it there, safely contained as philosophical speculation. But I do not think that is honest.
The transition starts now, at the individual level.
Each person who gets time back from AI has a choice. They can use it to produce more noise, or they can use it to become more useful. They can fill every recovered hour with more obligations, or they can ask what the recovered time was supposed to make possible. They can treat AI as a shortcut around growth, or as a tool that makes growth harder to avoid.
For me, the answer has become increasingly personal.
AI gave me enough support to start doing things for myself again. Writing. Building. Sharing. Reading. Helping. Thinking beyond immediate obligations. Not because the machine replaced my ambition, but because it helped me reduce the friction that kept ambition from turning into motion.
That matters, because if AI only helps me finish my obligations faster, it is useful. If it helps me remember who I wanted to become, it is transformative.
And that is the question I think more people should ask.
Not only: what can AI do for my job?
But: what can AI give back to my life?
The answer will be different for everyone. For one person, it may mean better client relationships. For another, more creative exploration. For another, healthier routines. For another, better parenting. For another, the courage to learn something new. For another, simply enough mental space to read a book again without feeling like they are stealing time from survival.
That last part may sound small.
It is not small.
A society in which more people have enough room to think, learn, care, play, create, and recover is not a small ambition. It is a different definition of progress.
AI should give us the opportunity to worry about better things.
I do not believe AI automatically makes humanity better.
That would be far too convenient, and we have done very little as a species to deserve that kind of automatic upgrade. AI will amplify ambition, insecurity, laziness, curiosity, generosity, greed, creativity, and every other inconvenient human ingredient we bring into it. But I do believe it gives us a rare opportunity.
AI should give humanity the opportunity to worry about things we currently treat as trivial:
- Hobbies
- Empathy
- Creativity
- Learning
- Health
- Relationships
- The kind of contribution that does not fit neatly into a salary band
That future is only truly possible if scarcity becomes less dominant, war becomes ridiculous, and disease becomes a dark memory rather than a normal part of human fear. That is a very long horizon. It may sound absurd from where we stand today. But many futures sound absurd before the underlying constraint breaks.
For now, the personal version is enough.
If AI gives you time back, do not rush to spend all of it proving you are still busy.
Ask what the time is for. Ask what part of your work was only noise. Ask what part of your role was actually purpose hiding underneath execution. Ask what you would build, improve, learn, repair, or finally begin if the old excuses became less convincing.
Because AI may not eliminate work first.
It may eliminate enough friction that we are forced to confront meaning.
And honestly, that might be the most human work left.