A new ‘Human Only’ software license aims to prohibit artificial intelligence systems from using specific open-source code, sparking a fierce debate over its legality and impact on the developer community. This proposal restricts copying, modification, and distribution to human users only, directly challenging the role of AI in modern software development and raising a critical question: can open code stay open if machines, not just humans, want to use it?
What Does the “Human Only” License Prohibit?
The Human Only license specifically forbids automated systems from interpreting, generating code from, or training on the licensed software. It targets AI-powered tools for code completion, analysis, and model training, while still permitting standard developer automation like compilers and linters when used by humans.
As one BigGo analysis points out, the document attempts to regulate who can use the software – a type of control not typically granted by copyright law. This creates legal uncertainty, particularly around vague definitions like “AI System”, and a potential conflict with established doctrine such as MAI Systems v. Peak Computer, which held that merely loading software into memory creates a copy for copyright purposes.
Will the “Human Only” License Be Enforceable in Court?
Legal experts are skeptical, pointing to two major hurdles. First, the license may exceed the scope of copyright by regulating the use of code rather than its reproduction and distribution. While courts upheld traditional open source licenses in Jacobsen v. Katzer, these new rules venture into uncharted territory.
Second, the defense of fair use is gaining strength in the context of AI. The growing acceptance of large-scale AI training as transformative fair use presents a significant obstacle, a trend detailed in a Skadden summary of a pivotal 2025 Copyright Office study. Jurisdictional differences further complicate enforcement.
| Region | Likely Enforcement Outcome |
|---|---|
| United States | Low chance due to robust fair use protections |
| European Union | Moderate chance if the AI Act links training data to licensed code |
| Singapore | Very low chance due to statutes voiding computational use restrictions |
Open Source Community Reaction and Practical Hurdles
Many developers worry the license will fragment the open source ecosystem and stifle innovation. The PyTorch team, for instance, has warned that a proliferation of bespoke AI licenses could slow collaborative progress. Bans may not even harm productivity: one 2025 study found that AI tools could actually increase task completion times for experienced programmers.
Beyond legal and community concerns, practical enforcement is a significant barrier. Tracing a single code snippet within a foundation model trained on terabytes of public data is nearly impossible. Even if a violation is detected, well-funded AI labs can absorb litigation costs or relocate their training operations to more favorable legal jurisdictions.
Key Takeaways: The “Human Only” License
- Scope: Restricts AI training, inference, and code analysis.
- Enforcement Levers: Copyright law combined with contract law.
- Technical Gates: Recommends robots.txt directives to warn web crawlers.
- Key Legal Hurdles: Fair use doctrine, vague terms, and jurisdictional splits.
- Community Risks: Ecosystem fragmentation and reduced collaboration.
As the debate unfolds, the license underscores a deeper conflict over whether open source ideals can adapt to an era where machines, not just humans, are the primary consumers of code.
What exactly does the Human Only Public License try to forbid?
The draft license goes beyond traditional open-source conditions and attempts to block any “AI system” from reading, analyzing, or learning from the licensed code. While conventional licenses already govern copying and redistribution, HOPL tries to regulate who (human vs. machine) may use the software – a category that U.S. copyright law has never recognized as an exclusive right. Courts have not yet ruled on whether this expansion falls inside or outside the scope of copyright, leaving the clause in legal limbo.
Why do most lawyers expect the license to be unenforceable?
Three practical hurdles dominate the discussion:
- Fair-use precedent – U.S. courts increasingly treat the mass ingestion of code for model training as fair use, especially when the output is transformative and non-competitive.
- Detection gap – Even if a violation occurred, proving that a particular model ingested your specific repository inside a petabyte-scale dataset is close to impossible.
- Jurisdiction shopping – Singapore’s 2024 statute explicitly voids any contractual term that restricts “computational data analysis”, instantly nullifying HOPL’s core clause if the servers sit in Singapore.
A January 2025 Copyright Office report confirms that “copyright does not currently provide special protections against AI training”, reinforcing the view that the license is more symbolic than binding.
How does the open-source community react?
Reactions split along two lines:
- Ethics camp – applauds the experiment, arguing that developers should have the moral right to keep their work out of opaque, for-profit models.
- Pragmatist camp – warns that fragmentation is already happening: one study counted 17 new AI-specific licenses in 2024 alone, each incompatible with the others. The resulting “license soup” raises compliance costs, discourages reuse, and hits individual contributors hardest because they lack legal teams to parse every new restriction.
Could the license still influence future law or policy?
Yes, but indirectly. Lobbyists cite HOPL when urging Congress to create a “training-right” for software authors, similar to the music industry’s performance right. Meanwhile, the EU’s AI Act (in force since mid-2025) already forces large model builders to publish a summary of training data. If policymakers later decide that opting out must be respected, the license could serve as a template for machine-readable opt-out flags – much like robots.txt did for web crawlers.
What should developers do today if they want to keep AI away from their code?
Until legislation arrives, technical steps beat legal ones:
- Host the repository behind authentication and block crawlers via robots.txt (a minimal sketch follows this list).
- Insert watermark comments (/* NO-AI-TRAIN */) that survive minification; they act as a trip-wire if identical snippets later surface in generated code (a detection sketch appears at the end of this answer).
- Watch the “AI Copyright Disclosure Act” working group – expected to release a standardized opt-out header format before the end of 2025 – and adopt it once published.
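For the first item, a robots.txt file that asks AI-focused crawlers to stay away might look like the sketch below. The user-agent tokens shown (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google’s AI training) are publicly documented at the time of writing, but the list changes over time and compliance is entirely voluntary – the file is a signal, not an enforcement mechanism.

```
# Hypothetical robots.txt served at the web root of a self-hosted repository.
# Each block asks one AI-training crawler not to fetch anything on the site.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that robots.txt only covers the domain it is served from, so it needs to sit at the root of whatever host actually serves the repository.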
Legal clauses alone, concluded one recent survey, give “a false sense of protection”; code obfuscation plus access control remains the only proven deterrent.
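To make the watermark trip-wire from the list above concrete, here is a minimal, hypothetical sketch: it assumes a NO-AI-TRAIN marker string was embedded as comments in the original sources and simply scans a directory of model-generated or third-party code for that marker. The marker text, file extensions, and invocation are illustrative assumptions, not part of the HOPL draft.

```python
#!/usr/bin/env python3
"""Trip-wire scan: look for a NO-AI-TRAIN watermark in a directory of code."""
from pathlib import Path

WATERMARK = "NO-AI-TRAIN"  # marker string assumed to be embedded in the original sources
EXTENSIONS = {".js", ".ts", ".py", ".c", ".cpp", ".java"}  # illustrative file types to check


def scan(root: str) -> list[Path]:
    """Return files under `root` whose text contains the watermark string."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in EXTENSIONS:
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip it
            if WATERMARK in text:
                hits.append(path)
    return hits


if __name__ == "__main__":
    import sys

    # Usage: python tripwire_scan.py <directory-of-generated-code>
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for match in scan(target):
        print(f"watermark found in: {match}")
```

Finding the marker verbatim in generated output is not proof of a license violation on its own, but it gives a maintainer a concrete lead to investigate rather than a blanket suspicion.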