
How well can GPT-5 draft patents? Here’s our real-world experiment

OpenAI introduced GPT-5 on August 7, 2025, calling it “a significant leap in intelligence over all our previous models.” It’s trained to reason more effectively, follow complex instructions, and adapt to specialized domains — in other words, the kind of upgrades that sound promising for a detail-heavy, language-precise task like patent drafting.

So, we decided to put it to the test. Could GPT-5 turn an inventor’s technical disclosure into a patent draft that’s accurate, compliant, and worth building on? Or would it trip over the same hurdles as its predecessors?

Our invention: the magnetic tessellating umbrella

Here at Patentext, there’s one made-up invention that shows up in our screenshots, our product walkthroughs, and now, in this blog: the famed magnetic tessellating umbrella. It’s our go-to fictional example for testing because it’s silly but still technical enough to mimic the kind of real-world inventions our users work with. 

You can read the full technical write-up of the invention here, but below is the plain-English disclosure we’ll be feeding ChatGPT:

Imagine standing in a rainy crowd where everyone has an umbrella. Normally, there are gaps between umbrellas where the rain still gets through, and the edges bump into each other awkwardly. The magnetic tessellating umbrella solves that by giving each umbrella a perfectly shaped canopy (triangle, square, or hexagon) and embedding magnets along the edges. These magnets snap together with the umbrellas next to them, forming a continuous, watertight “umbrella wall.” 

The magnets can spin, so they always connect with the right polarity. And if it’s windy or you want a stronger connection, you can use optional clips at the edges to lock umbrellas together. You can also set the umbrella on a tripod base so you don’t have to hold it, and the pole can extend higher so people can walk underneath.

Our rubric for scoring GPT-5’s patent drafting skills

We didn’t just want to see what GPT-5 spit out; we wanted to measure how well it actually did the job of a patent drafter.

For those who don’t know me, I’m a patent agent with 10+ years of drafting experience, having worked at both major law firms and boutique firms. I’ve drafted patents for Airbnb, Facebook, and dozens of startups you may or may not have heard of. In other words, I’ve seen the full spectrum of “good” and “bad” patent drafting — so I had a pretty solid benchmark in mind for what GPT-5 would need to deliver.

I’ll be scoring each output on a 1 to 5 scale (1 = poor, 5 = excellent):

  • 5 = Excellent – On par with a competent human drafter.
  • 4 = Strong – Minor flaws that wouldn’t meaningfully harm a filing.
  • 3 = Mixed – Some good elements, but also notable gaps or errors.
  • 2 = Weak – Significant issues that would require major rewriting.
  • 1 = Poor – Fundamentally unusable without starting over.

How well can GPT-5 draft patents?

To keep this test fair and reproducible, we’ll be using the magnetic tessellating umbrella invention for every run. For each test, we’ll be feeding GPT-5 only the plain-English disclosure, not pre-written claims, legal language, or section headings.

Test 1: Can ChatGPT write a broad independent claim?

Goal: See if GPT-5 can take a layperson description and produce a single, legally structured claim that captures the core inventive concept without unnecessary limitations.

Prompt: You are a U.S. patent attorney. Write an independent claim that balances broad scope with high patentability for the following invention. Follow proper U.S. claim format. Avoid unnecessary limitations. 

The output: 

An umbrella system comprising:

  • a canopy having a planform shape configured to tessellate with canopies of adjacent umbrellas without leaving gaps;
  • a plurality of magnetic coupling elements disposed along edges of the canopy, the magnetic coupling elements being configured to magnetically engage with complementary magnetic coupling elements of an adjacent umbrella; and
  • a mounting structure coupled to the canopy, the mounting structure including a support pole,
    • wherein the magnetic coupling elements are mounted to allow relative rotation about at least one axis to align magnetic polarities for engagement,
    • and wherein engagement of adjacent umbrellas forms a substantially continuous weather-resistant barrier.

My feedback:

  • Overly narrow language: Phrases like “without leaving gaps” or “substantially continuous weather-resistant barrier” sound descriptive, but in patent law, they can create loopholes. For example, if there’s even a small gap or minor leak in a competitor’s umbrella, they might escape infringement.
  • Awkward structure: The magnets are introduced before the structural elements they attach to. This forces the claim to loop back awkwardly via “wherein” clauses. If the order were canopy → mounting structure → magnets, it would read more cleanly and avoid that detour.
  • Weaker phrasing: Using “having” instead of stronger patent terms like “comprising” or “characterized by” can make the claim read less formally and potentially feel less precise.
  • Extra filler words: “Being configured” is wordy. “Configured to” means the same thing and keeps the claim cleaner.
  • Unnecessary quantity limits: The term “plurality” means “more than one.” Here, that’s unnecessary — if an infringer only uses one magnetic coupling element, they wouldn’t infringe under this wording. Broad claims usually don’t lock in exact quantities unless required.

My score: 2/5. GPT-5 captured the general idea, but unnecessary limitations, suboptimal language, and awkward structure weaken the claim’s enforceability.

Test 2: Can ChatGPT write dependent claims?

Goal: Evaluate GPT-5’s ability to add refinements and variations that meaningfully narrow the scope while staying consistent with the disclosure.

Prompt: You are a U.S. patent attorney. Write 8 to 10 dependent claims for the following invention, adding narrower features, variants, and refinements described in the disclosure. 

The output: See here

My feedback: 

  • Formality issues: Claim 3 has an “antecedent basis” error (meaning a term is used without being properly introduced first).
  • Problematic use of “or”: While “or” is technically allowed, it’s not recommended because it reduces the patentability of the claim — the examiner only has to find prior art for one of the alternatives. In this case, the “or” doesn’t add meaningful breadth, so it just weakens the claim.
  • Overcomplication: The independent claim already introduced a “mounting structure,” to which the magnets could be “rotatably attached.” Instead, GPT-5 created a brand-new “rotatable housing” in Claim 4 just to allow angular adjustments for the magnets. This adds complexity without adding real value, and it’s also redundant because rotation was already addressed in the main claim.
  • Lack of novelty: Claims 7 to 10 describe very basic umbrella features (telescoping pole, height adjustment, tripod base) that are not new, interesting, or patentable. 

My score: 2.5/5. Sure, they’re structurally valid dependent claims, but several are redundant and unnecessarily complicated. Even worse, taken together, the dependent claims don’t provide much in the way of backup positions for prosecution.

Test 3: Can ChatGPT write a detailed description for a dependent claim?

Goal: Assess whether GPT-5 can expand one dependent claim into a thorough, compliant Detailed Description section that provides sufficient written description and enablement.

Prompt: You are a U.S. patent attorney. Below is a dependent claim. Write a Detailed Description section for a U.S. utility patent application that supports this claim. The description should explain the feature in depth, provide example embodiments, and stay consistent with the overall invention. 

The output: See here

My feedback: 

  • Impractical fastening examples: Rivets, brackets, and clamps could technically work, but they’re clunky and impractical fasteners for this product.
  • Questionable rotation description: “Rotate about an axis perpendicular to the canopy plane” makes little sense here — it’d be mechanically awkward and hard to execute.
  • Disorganized flow: The section on “dimensions and magnetic field strength” is tacked on way after the materials discussion, forcing the reader to mentally loop back. Grouping these related points together would make the description more cohesive.
  • Inconsistent terminology: The draft switches between “embodiments,” “implementations,” and “variations” without clarifying whether they mean different things or are interchangeable. 
  • Minor terminology inconsistency: In paragraph 4, “this rotation” reads fine, but “the housing” later drifts into “the rotatable housing”; the description should pick one term and stick with it.

My score: 3.5/5. This is a solid attempt that hits a lot of detail, but it stumbles on mechanical plausibility, structure, and consistent terminology. It feels like GPT-5 was trying to sound “patent-y” and ended up overengineering both the language and the hardware.

Test 4: Can ChatGPT write a patent abstract?

Goal: See if GPT-5 can produce a concise, compliant abstract that captures the essence of the invention in ~150 words.

Prompt: You are a U.S. patent attorney. Write a 150-word abstract for the following invention, following USPTO abstract conventions. 

The output: See here

My feedback: 

  • Meets length requirement: At under 150 words, it stays within USPTO guidelines for abstracts — good job there.
  • Too much detail: An abstract should give a high-level overview, but this one dives a little deeper into specifics than necessary. Keeping it broader would make it cleaner and more in line with typical patent abstracts.

My score: 4/5. It’s a solid, compliant abstract that just needs a lighter touch on the detail to hit the ideal balance between informative and concise.

My overall thoughts

Running this experiment made it clear that GPT-5 does have some genuine strengths. It can hit word count targets, throw in plenty of technical detail, and occasionally phrase things in a way that feels impressively “patent-y.” 

That said, its biggest weakness is narrative flow and planning. Even in this experiment, where we asked it to create pretty discrete sections, it jumps between topics and loops back to ideas it has already covered. Naturally, this problem gets worse when you ask it to do more than one section at a time. That’s why chat-style patent drafting copilots also fall short.

It’s also worth noting that using a consumer version of ChatGPT (non-enterprise) for patent drafting is risky. OpenAI may train on your inputs, and in patent law, that disclosure could be treated as a public disclosure, which could jeopardize your ability to file.

Choose Patentext instead

Patentext works differently. Because you and the LLM share a single, structured view of what the invention is, it always stays on topic. And unlike a chatbot that might give you different results every time you ask the same thing, Patentext produces consistent, repeatable output, which is exactly what’s needed to draft an actual long-form document like a patent.

Obviously, I’m biased. So don’t take my word for it — use Patentext to create your next patent draft for free and see the difference for yourself.

Disclaimer: This article is for informational purposes only and does not constitute legal advice. Patent laws are complex and vary by jurisdiction. For personalized guidance, consult a qualified patent attorney or agent.
