April 28, 2026

AI Doesn’t Just Change What You Create. It Changes What You’re Responsible For.

There is a growing discomfort around ethical AI use, and most of it is framed in the wrong way. We tend to ask whether AI is ethical, whether it should be regulated, whether it will eventually replace human contribution altogether, or whether it will create a completely new category of professional risk. Those are valid questions, and apparently humanity has chosen to ask all of them at once, because why solve one philosophical crisis when you can bundle several into a productivity tool. But those questions still miss something more immediate and more personal. The real shift is not happening only at the level of the technology. It is happening at the level of the individual using it.

For decades, ethical behavior in knowledge work was quietly enforced by limitation. You needed time to research, effort to produce, and experience to connect ideas in a meaningful way. Those constraints acted as guardrails. They made it harder to misrepresent yourself, harder to copy without being obvious, and harder to operate outside your actual level of competence. In other words, effort and ethics were often aligned by default. Not because people were naturally noble creatures floating through the workplace on clouds of integrity, but because it was difficult to shortcut everything at once.

AI removes that alignment.

Today, anyone can generate structured arguments, polished content, or seemingly well-informed perspectives in seconds. The output looks legitimate, the structure feels coherent, and the confidence is built into the tone. What used to require expertise can now be simulated with remarkable speed. That is not inherently a problem. In many ways, it is an extraordinary leap forward. But it fundamentally changes where ethical responsibility lives.

Because if friction disappears, so do many of the guardrails that came with it.

This is why the ethical use of AI cannot be reduced to a question of whether the tool was used. That question is already too small. The more relevant question is what the human using the tool is claiming, hiding, avoiding, improving, or contributing. AI did not make authorship, plagiarism, fairness, and accountability suddenly important. It made them impossible to ignore.

AI removed friction. It did not remove responsibility.

The first mistake in the ethical discussion around AI is treating it as if responsibility can somehow be transferred to the model. People use AI to draft a message, summarize a document, generate a legal-sounding argument, write code, build a presentation, or support a piece of analysis, and then suddenly the boundary of responsibility becomes conveniently blurry. If the result is good, the human takes credit. If the result is flawed, misleading, or irresponsible, the tool becomes the scapegoat. A beautifully human arrangement, naturally.

But accountability does not work like that.

If you choose to use AI in your work, communication, decision-making, or public output, you remain responsible for what you do with it. The model does not know your context. It does not understand your obligations. It does not carry your professional reputation. It does not face the consequences when something goes wrong. The moment you present an AI-assisted result as part of your work, it becomes part of your responsibility.

That principle already exists in other technologies. No one argues that Excel owns the output of a financial model just because it performed the calculations, which is why asking a colleague for help in making a formula work is common practice. No one claims that a Business Intelligence dashboard is the author of the conclusions drawn from company data. These tools enhance decision-making. They do not replace ownership of decisions. AI operates in the same broad category, even if it feels more autonomous because it produces language, structure, and apparent reasoning.

That appearance of reasoning is precisely what makes it dangerous.

A spreadsheet still visibly depends on formulas, inputs, and structure. A dashboard still depends on the quality of the underlying data model. AI output, by contrast, often arrives fully dressed for the boardroom. It can sound confident even when it is wrong, polished even when it is shallow, and persuasive even when it is disconnected from reality. That does not reduce accountability. It increases the need for it.

Authorship is no longer about typing. It is about contribution.

One of the clearest examples of this shift is the question of authorship. The U.S. Copyright Office has made its position relatively clear in its AI guidance: purely AI-generated material is generally not protected by copyright, prompts alone usually do not establish authorship, and protectable human contribution typically lies in meaningful selection, coordination, arrangement, or modification. That sounds like a legal nuance, and yes, technically it is, because apparently even creativity now needs paperwork. But it also points to something much deeper.

It forces us to ask what it actually means to create something.

In my own work, AI has become a way to get beyond what I would call step zero. Starting from nothing can be difficult for me, not because I lack ideas, but because I am very aware of my own blind spots. I know that any initial thought I have is shaped by bias, by incomplete information, and by assumptions I may not even recognize. When I bring AI into that process, it gives me a first layer of expansion and validation. Not validation in the sense of being correct, but validation in the sense of helping me question myself earlier. It gives me something to push against. It surfaces adjacent possibilities. It helps me see whether an idea has structure, whether it has gaps, whether it is worth developing, or whether it is just one of those thoughts that felt brilliant for approximately twelve seconds before collapsing under scrutiny.

That is where AI offers one of its best forms of enhancement. It does not replace my thinking. It helps me interrogate it.

The value is not in the generated output. The value is in the human work that follows: reviewing, rejecting, restructuring, contextualizing, connecting, and deciding what is actually useful. In my case, the contribution is pattern recognition. It is the ability to connect what I have seen in consulting, product leadership, organizational transformation, and technology adoption into something that hopefully helps others see their own situation more clearly.

This is why I do not believe the authorship conversation should collapse into a binary argument of “AI wrote it” versus “a human wrote it.” That framing is too crude for the way serious knowledge work is starting to happen. The better question is: where did the meaningful contribution come from?

If someone uses AI to generate a post about a topic they do not understand, publishes it without scrutiny, and presents it as expertise, then the ethical issue is obvious. If someone uses AI to pressure-test a point of view, explore blind spots, and sharpen an argument grounded in their own experience, that is a very different thing.

The distinction is not whether AI was involved.

The distinction is whether the human contribution is real.

Ethical ambiguity is now normal.

This is where the work of organizations such as Students of Ethical Use of Technology (SEUT) is useful. I do not want to overstate or appropriate their domain. But I do think their work deserves credit for naming something that many professional environments still avoid saying plainly: ethical ambiguity is no longer an edge case. It is normal.

SEUT approaches ethical technology use from the perspective of students and emerging technology users, but the themes are not limited to students. In many ways, students are simply confronting the ambiguity earlier and more honestly than many adults in organizations. They are asking questions about shortcuts, fairness, overreliance, authorship, and the difference between learning and outsourcing. Those questions are not childish versions of professional dilemmas. They are the professional dilemmas, just without the corporate vocabulary that makes everything sound more sophisticated while changing absolutely nothing.

This matters because AI is not entering a world with stable norms. It is entering schools, workplaces, teams, recruitment processes, content ecosystems, and leadership environments where the rules are inconsistent, incomplete, or still being invented. Some contexts require disclosure. Some tolerate quiet assistance. Some forbid use entirely. Some encourage experimentation but never define the boundary between support and substitution.

That is the gray zone we are now living in.

And in the gray zone, ethics cannot rely only on rules. It relies on judgment.

In the world of ethical AI use, transparency is not weakness. It is part of the work.

This article itself is AI-supported. The briefing is mine. The ideas are mine. The examples are mine. The final review and responsibility are mine. AI has supported structure, expansion, and drafting. I am comfortable saying that because the goal is not to claim some mythical purity of creation. The goal is to provide helpful information, shaped by experience and reviewed with intention.

That distinction matters.

We are entering a phase where pretending not to use AI will become as strange as pretending not to use search engines, spreadsheets, templates, or spellcheck. The question is not whether professionals use AI. Many do already, quietly or openly. The question is whether they are honest about the role it plays and responsible for the outcome it helps produce.

Transparency does not mean attaching a dramatic confession to every sentence. Nobody needs a footnote that says “this paragraph was emotionally supported by a language model.” But transparency matters when audience expectations matter. If a reader, client, employer, evaluator, or decision-maker reasonably assumes unaided human judgment, and that assumption is materially false, then there is an ethical issue.

This is where disclosure becomes less about compliance and more about trust.

The International Committee of Medical Journal Editors and the World Association of Medical Editors have both taken the position that AI cannot be listed as an author because it cannot take responsibility for the work. That principle is highly relevant beyond academic publishing. It makes the accountability chain explicit. Tools can assist. Humans must stand behind the result.

That should be the professional standard.

The real risk is not AI use. It is misrepresentation.

Most ethical failures around AI will not look like dramatic fraud. They will look like small acts of misrepresentation. Someone uses AI to produce an argument they cannot defend. Someone submits AI-generated work as evidence of expertise. Someone presents a polished analysis without understanding the assumptions behind it. Someone enters a conversation armed with synthetic confidence and no actual curiosity.

I have seen this play out in a very ordinary, very human setting: a neighborhood dispute. A person tried to win an argument by using AI to generate a legal-sounding response, claiming the right to recover damages from a neighbor. They had neither the knowledge to assess the claim nor the willingness to consult a professional. The output sounded plausible enough to them, so they used it as a weapon.

That is the ethical problem in miniature.

AI made it easy to appear informed without becoming informed. It gave someone confidence without competence. It allowed them to escalate a situation using language they did not understand and authority they did not have. The issue was not that they used AI. The issue was that they used AI to replace judgment they never developed and validation they never sought.

This is not a rare edge case. It is a pattern we should expect to see more often. AI can help people understand complex topics, but it can also help them impersonate understanding. The difference lies in intent, verification, and accountability.

Unfair advantage depends on context.

The idea of unfair advantage through AI is more complicated than many people want it to be. The same action can be ethical in one context and unethical in another. Using AI to prepare for an interview can be reasonable. Using AI during a live interview without permission is something else. Using AI to refine a first draft can be reasonable. Using AI to complete a take-home assessment designed to evaluate your own capability crosses a different line.

This is why I find Anthropic’s candidate AI guidance especially interesting. It does not pretend that AI should be banned from the hiring process entirely. Instead, it distinguishes between acceptable support and unacceptable substitution. Candidates are allowed to use AI for preparation and refinement, but not to complete take-home exercises or participate in live interviews unless explicitly permitted.

That is a mature direction.

It recognizes that the ethical question is not simply whether a tool helped. The question is what the process was designed to evaluate. If a task is meant to assess independent reasoning, then hidden AI assistance changes the nature of the assessment. If a conversation is meant to explore how someone thinks, then outsourcing that thinking defeats the purpose.

The same principle applies in professional life. If AI helps you prepare better, structure your thoughts, and communicate more clearly, that can be responsible use. If it helps you simulate expertise, bypass learning, or obscure your actual contribution, then it becomes a problem.

The junior talent debate reveals a deeper misunderstanding.

There is another ethical conversation around AI that deserves more scrutiny: the assumption that junior professionals will simply be replaced. This argument shows up everywhere, usually dressed as efficiency. If AI can do junior-level tasks, the thinking goes, then organizations need fewer juniors.

That logic is dangerously narrow.

Junior professionals are not just task capacity. They are part of how organizations learn. They bring new behaviors, new expectations, new tools, and new discomfort into systems that would otherwise keep repeating themselves. Senior professionals need junior professionals not only to delegate work to, but to stay exposed to how the world is changing.

The same people who loudly predict the end of junior roles are often the ones who quietly ended their own learning journey when they became managers. They treat expertise as something accumulated and then defended, instead of something that must keep evolving. AI makes that mindset more dangerous, not less.

If organizations remove the spaces where people learn by doing, they may gain short-term efficiency and lose long-term adaptability. That is not a productivity win. That is institutional self-harm with a software license.

Ethical AI use should not be about replacing the development of human capability. It should be about raising the quality of that development. Junior professionals should learn to use AI responsibly, critically, and transparently. Senior professionals should learn from how they use it. That exchange is part of how organizations avoid stagnation.

Outcomes over output.

The standard I keep coming back to is simple: outcomes over output.

AI makes output cheap. It can produce text, images, summaries, code, analysis, presentations, and plans at a speed that makes traditional productivity metrics look increasingly outdated. But output alone was never the point. At least it should not have been, despite entire management systems heroically pretending otherwise.

The real question is whether AI improves the contribution we are trying to make.

If I use AI to create content, the ethical question is not whether the words began inside a model or inside my head. The ethical question is whether the final piece helps someone think better, whether it reflects my actual experience, whether I reviewed it responsibly, and whether I am transparent enough about the process when it matters. If an organization uses AI in a product, process, or service, the ethical question is not whether the implementation looks impressive. It is whether the outcome improves for the people affected by it.

This is the lens that separates meaningful AI use from performative AI use.

There will be endless output. There already is. The scarcity will be judgment, contribution, and responsibility.

A practical guide to ethical AI use.

Because this topic can become abstract quickly, it helps to turn it into a practical guide. Not a rigid rulebook, because rigid rulebooks age terribly in fast-moving technology environments. More like a set of questions that force a professional to slow down before hiding behind the elegance of generated output.

  1. Do you understand what you are sharing? If you cannot explain the output in your own words, you should not present it as part of your work. This sounds basic, which means it will be ignored by exactly the people who need it most.
  2. Have you added meaningful contribution? Editing tone is not the same as contributing insight. Formatting a generated argument is not the same as owning the thinking behind it. Ask what is uniquely yours in the final result. The answer does not need to be every word. It does need to be real.
  3. Are you transparent about your process when it matters? Not every use of AI requires disclosure, but credibility-sensitive contexts do. If the audience would feel misled after learning how the work was produced, that is a signal to rethink the process.
  4. Are you using AI to learn or to shortcut? This is one of the most important distinctions. Learning with AI can accelerate development. Using AI to avoid learning creates synthetic competence, which is fragile and often dangerous.
  5. Have you verified the facts, sources, and assumptions? This is non-negotiable in legal, financial, technical, medical, regulatory, or public-facing contexts. The Mata v. Avianca case, in which lawyers were sanctioned for submitting fabricated, AI-generated case citations, is the obvious cautionary tale. AI-generated confidence is not evidence.
  6. Would you stand behind the result without blaming the tool? If not, do not use it. That one is brutally simple, which is why it is useful.
  7. Does AI improve the outcome or only the output? Faster is not better if the direction is wrong. More content is not better if it creates confusion. More polished arguments are not better if they hide weak thinking.

Finally, ask the most uncomfortable question: if I explained exactly how this was produced, would anyone feel misled? If the answer might be yes, stop and redesign the process.

That is not anti-AI. It is pro-responsibility.

Micro ethics becomes macro risk.

It is easier to grasp the consequences of the responsible use of AI if you focus on the individual level, because that is where ethical behavior begins. But it does not stay there. Personal habits become team norms. Team norms become organizational culture. Organizational culture becomes market behavior. And suddenly the small compromises people make with AI are no longer small at all.

If individuals start normalizing undisclosed substitution, unverified claims, synthetic competence, and accountability avoidance, organizations will eventually scale those behaviors. That is where the next layer of the conversation begins.

At the macro level, the question becomes whether organizations are building environments where ethical AI use is encouraged, rewarded, and operationalized, or whether they are simply demanding more output and hoping responsibility survives the acceleration.

For now, the individual standard is already demanding enough.

AI does not just change what you create.

It changes what you are responsible for.

And responsibility, annoyingly enough, still belongs to the human in the loop.
