
Who Owns the Code?

January 16, 2026


A few years ago this question was merely uncomfortable; AI has turned it into a real business risk rather than an abstract legal curiosity. Back in 2011, Brad Frazer and I used to ask audiences a deceptively simple question: “If I pay you five hundred dollars to build me a website, who owns the code?” Most people confidently answered “the person who paid for it.” That was wrong even then, absent a work-for-hire agreement, and it is far more wrong now, in an AI-driven world where large portions of modern software are no longer written directly by humans.

Today a developer can open ChatGPT, sketch a product vision, generate personas, produce a backlog, and then ask GitHub Copilot, Cursor, or Claude to scaffold and implement an entire working application in minutes rather than days. When that app runs and people start paying for it, the natural assumption is that the developer or team that prompted, tweaked, tested, packaged, and shipped it must own it. That assumption is exactly where teams and organizations get into trouble.

The “Million Dollar” App

It’s fun to show how quickly AI can take a rough idea and turn it into something that looks like a real product: shaping a vision, identifying potential customers, and writing a detailed prompt to hand off to an LLM, which then produces a working web-based app complete with a user interface and rule-based behavior. With a few fixes and a bit of polishing, it looks like something you could plausibly charge money for.

Imagine that app takes off, customers are paying subscription fees, and a buyer shows up wanting to acquire the company. One of the first questions they will ask is whether you actually own the code. Most developers instinctively say yes, because they were the ones driving the keyboard and making the decisions. That answer can easily be wrong.

Copyright Is Not a Verb

This is the part that breaks people’s mental model. Copyright is not something you do; it is something that exists when a human being “reduces a sufficiently creative idea into a tangible medium.” This means authorship is the foundation of ownership. When you type code by hand or dictate it into a tool that simply transcribes your words, you are clearly the author and a copyright is created automatically.

AI changes this because LLMs do not transcribe what you wrote. They generate something new based on statistical patterns in their training data. There is no guaranteed one-to-one relationship between your prompt and the output, and under current U.S. law that means the output may have no human author and therefore may not be copyrightable at all.

But I Wrote the Prompt!

Yes, and the prompt itself is protected because it is your original creative expression, but the prompt is not the software. A sentence like “create a ten-by-ten web-based naval strategy game with automated behavior” can be copyrighted, but the thousands of lines of code generated from it usually cannot be, because the model decides structure, naming, logic, and flow in ways that are not deterministically tied to your words.

There are cases where AI behaves more like a refactoring tool or keystroke saver, such as when you ask it to produce a very specific function or give it a detailed method signature and it outputs exactly what any human would have typed. In those cases, the law may still see you as the author. But when you let the LLM design and implement an entire application or even large parts of it, authorship slips out of human hands.
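The distinction is easier to see with a concrete, hypothetical case. Suppose your prompt spells out a function’s entire contract, including its name, parameters, and exact behavior; the model has essentially no creative choices left, so the output is closer to dictation than generation. The clamp function below is invented purely for illustration:

```python
# Hypothetical illustration: a prompt so specific that the model acts
# as a keystroke saver. Asking an LLM to "implement
# clamp(value, lo, hi), returning lo if value < lo, hi if value > hi,
# and value otherwise" leaves almost no creative decisions to the
# model; virtually any developer (or tool) would type the same thing.
def clamp(value: float, lo: float, hi: float) -> float:
    """Constrain value to the inclusive range [lo, hi]."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value
```

Contrast that with a prompt like “build a ten-by-ten naval strategy game,” where structure, naming, and logic are all the model’s choices, and a human authorship claim becomes much harder to make.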

Why This Only Hurts When Money Shows Up

This problem often stays invisible because nothing breaks when your software runs and customers are paying you. You can operate for years without anyone questioning ownership. The moment it becomes real is when you try to raise money, sell your company, or enforce your rights against someone else, because investors, buyers, and courts all require proof that there is human-authored intellectual property behind what you are selling.

To register a copyright and get the legal ability to stop someone else from copying your code, you have to submit a copy of the source code and assert that it contains sufficient human authorship. That is when your commit history, your edits, and your actual creative contribution suddenly matter, and saying that an LLM wrote most of it is not a winning argument.

The Uncomfortable Truth

None of this means teams and organizations should stop using AI. These tools are transformational and allow developers to do things that once required entire departments. They are becoming part of everyday professional software development in the same way that compilers and IDEs once did. What has changed is that building something that works is no longer enough. Teams must also be able to demonstrate that what they built is something they legally own.

AI makes it easier than ever to build software quickly, but it also makes it easier than ever to accidentally build something you do not actually own. You will not discover that until the moment when ownership suddenly matters, which is why the question Who owns the code? is no longer a niche legal concern but a core product and business risk in the AI era.

For more information about how AI, copyright, and ownership intersect in modern software development, and to explore how exposed your own products might be, visit https://whoownsthecode.com where you can also take a free ownership assessment to see where you stand.

