There’s a version of this story where AI is the villain. A pro se litigant in over her head finds ChatGPT, asks it questions she’d normally ask a lawyer, and ends up worse off than when she started — having fired her actual attorney, filed dozens of meritless motions, and drawn the attention of a court that had better things to do. Nippon Life sued OpenAI over exactly this scenario, framing it as unauthorized practice of law. It’s a compelling narrative, and it’s mostly wrong about where to assign blame.
The tool wasn’t the failure. The absence of a licensed, accountable human in the loop was the failure. That distinction matters enormously, because if the lesson we take from cases like this is “AI and law don’t mix,” we’ll have learned the wrong thing — and we’ll have preserved a status quo that was already failing most people who needed legal help.
The access-to-counsel gap didn’t start with ChatGPT. It’s been a structural feature of the legal market for decades. Most individuals facing legal questions — and most small and mid-size businesses signing contracts they’re not fully comfortable with — have never had a realistic path to affordable, timely legal review. Outside counsel at $400 an hour, on a two-week timeline, with a retainer requirement, is simply not a product that most of the market can use. So people approximate. They sign and hope. They ask a friend who went to law school. They Google the clause that’s worrying them and read a blog post written for a different jurisdiction. AI made that approximation feel more legitimate, more thorough, more like the real thing — and that’s precisely what created the exposure Nippon Life was litigating.
The question was never whether to use AI in legal contexts. The question was always who’s responsible for the output.
In February 2026, Judge Jed Rakoff of the Southern District of New York answered a version of that question in United States v. Heppner. The case involved a criminal defendant who, after receiving a grand jury subpoena and retaining counsel, used the consumer version of an AI platform to prepare reports outlining his defense strategy. He fed information he’d received from his attorneys into the platform, generated analysis of the facts and law, and later shared those outputs with counsel. When the government moved for access to those documents, Heppner claimed attorney-client privilege. Judge Rakoff denied the claim, calling it “a question of first impression nationwide.”
The analysis turned on the three elements required for attorney-client privilege to attach: the communication must be between a client and an attorney, it must be intended to be and actually kept confidential, and it must be for the purpose of obtaining or providing legal advice. Heppner failed on at least two of the three, and arguably all of them.
First, the communications weren’t between Heppner and his attorney — they were between Heppner and a software platform. That the AI is not a lawyer is obvious, but the implication is significant: a client using AI unilaterally, even to process information received from counsel, is not communicating with counsel. The AI is a third party, legally speaking, regardless of how the client experiences the interaction.
Second, the communications weren’t confidential. The platform’s privacy policy — which Heppner had agreed to — explicitly reserved the right to collect user inputs and outputs, use them to train the model, and disclose them to third parties including governmental regulatory authorities. A user who consents to those terms has, the court held, no reasonable expectation of confidentiality. It doesn’t matter that Heppner subjectively believed his prompts were private. The policy put him on notice.
Third — and this is where the opinion opens a door worth examining — the communications weren’t made for the purpose of obtaining legal advice, because Heppner wasn’t directed by counsel to use the tool. He acted on his own initiative. Judge Rakoff wrote that the analysis might differ if the AI use had been directed by counsel: “Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” That’s a reference to the Kovel doctrine, which allows attorneys to extend privilege to non-attorney agents — accountants, translators, experts — whose assistance is necessary for the attorney to provide legal advice.
This is not an anti-AI ruling. It's a pro-structure ruling. The court wasn't saying that AI has no place in legal work. It was saying that AI use by clients, without attorney direction and through platforms that disclaim confidentiality, doesn't satisfy the conditions that privilege was designed to protect. The structure matters. The sequence matters. Who is directing the AI, and on whose behalf, matters most of all.
If Heppner is a design spec, the question becomes: what does compliant AI-assisted legal review actually look like?
The answer I’ve arrived at is simpler than it might seem. The AI has to be on the attorney’s side of the line, not the client’s. That’s not a metaphor — it’s a structural requirement with real legal consequences. The client never interacts with the AI. The client submits a contract through BespokeDocs. The platform assigns the matter to a licensed attorney. An engagement letter is signed before any analysis begins; privilege attaches at that moment. The AI then performs the first-pass review. The attorney reviews that output, applies independent judgment, edits and supplements as needed, and delivers the final memo. The AI does the heavy lifting. The attorney is accountable for everything that leaves the platform.
The confidentiality concern that proved fatal in Heppner is structurally absent from BespokeDocs. Heppner used a consumer-facing product whose terms permitted training use and third-party disclosure, and those terms are what destroyed any reasonable expectation of confidentiality. BespokeDocs does not use consumer-facing AI products. The AI tools integrated into the platform operate under commercial terms that prohibit training on client data and impose handling obligations consistent with attorney professional responsibility requirements. Combined with the engagement letter that creates the attorney-client relationship before any analysis begins, the confidentiality element that Heppner failed is affirmatively satisfied here, not as an afterthought but as a design requirement.
The sequence matters too. Privilege attaches when an attorney-client relationship is formed — when an engagement letter is signed and a client retains counsel for a specific matter. If AI analysis happens before that moment, or outside that relationship, the analysis isn’t attorney work product. It’s something else. Getting the sequence right isn’t a technicality; it’s the difference between a service that delivers real legal protection and one that delivers the appearance of it.
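Since Heppner effectively functions as a design spec, the ordering constraint can be made literal in software. Here is a minimal sketch of that idea in TypeScript; everything in it (the Matter type, the state names, the function names) is hypothetical and illustrative, not BespokeDocs's actual code. It models the matter lifecycle as a state machine in which AI analysis simply cannot run before an engagement letter is signed.

```typescript
// Hypothetical matter lifecycle. The state machine encodes the privilege
// sequence: engagement letter first, AI analysis second, attorney-reviewed
// delivery last. Illustrative names only.

type MatterState = "submitted" | "engaged" | "analyzed" | "delivered";

interface Matter {
  id: string;
  state: MatterState;
  engagementSignedAt?: Date; // privilege attaches at this moment
}

function signEngagementLetter(matter: Matter): Matter {
  if (matter.state !== "submitted") {
    throw new Error(`cannot sign engagement from state "${matter.state}"`);
  }
  return { ...matter, state: "engaged", engagementSignedAt: new Date() };
}

function runFirstPassReview(matter: Matter): Matter {
  // The structural guarantee: no AI analysis without a signed engagement,
  // so the analysis always happens inside the attorney-client relationship,
  // never before it.
  if (matter.state !== "engaged" || !matter.engagementSignedAt) {
    throw new Error("AI analysis requires a signed engagement letter");
  }
  return { ...matter, state: "analyzed" };
}

function deliverMemo(matter: Matter): Matter {
  // In the real workflow the attorney reviews, edits, and signs off between
  // analysis and delivery; this sketch collapses that review into delivery.
  if (matter.state !== "analyzed") {
    throw new Error("delivery requires a completed, attorney-reviewed analysis");
  }
  return { ...matter, state: "delivered" };
}
```

The point of encoding the sequence as state transitions rather than a checklist is that running the steps in the wrong order becomes impossible rather than merely discouraged; the Heppner failure mode (analysis first, relationship later, if ever) can't even be expressed.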
I’ve spent more than a decade doing commercial contract work — NDAs, vendor agreements, master service agreements, the full range of documents that growing companies sign constantly and review inconsistently. The pattern I saw repeatedly was not that companies were cavalier about their legal exposure. Most of them understood, at some level, that the contract sitting in their inbox deserved a real look. What they lacked was a realistic path to getting one. The economics of traditional outside counsel — the retainer, the hourly rate, the timeline — made routine contract review a luxury most couldn’t justify for deals below a certain size. So they signed. Sometimes that worked out fine. Sometimes it didn’t.
The flat-fee model BespokeDocs is built around is a direct response to that pattern. Predictable pricing removes the anxiety about the meter running. A defined scope (here's what I review, here's what I deliver, here's what the memo looks like) makes the service something a non-lawyer can evaluate and purchase without a preliminary conversation about budget. And attorney-client privilege, attaching at the beginning of the process rather than the end, means that what the client shares with me in the course of the review is actually protected.
There’s something worth saying directly about the billable hour, because it’s the structural incentive that shapes most of what’s broken about legal services delivery. The billable hour rewards time spent, not value delivered. An attorney who spends four hours reviewing a contract and an attorney who spends forty minutes reviewing the same contract — because they’ve developed pattern recognition over years of doing exactly this work — bill very differently for the same outcome. AI accelerates the efficient attorney further. It also makes the efficient attorney’s pricing look, to an outside observer, like they’re not working hard enough to charge what they charge. Flat fees resolve that problem entirely. The client pays for the output, not the inputs. The attorney is incentivized to be efficient, not to generate hours.
I want to be precise about what this is and what it isn’t, because the failure mode I most want to avoid is the one Nippon Life was litigating: a tool that overrepresents its scope and leaves users worse off for having used it.
BespokeDocs is not a replacement for outside counsel on complex transactions. It is not the right service for a company negotiating a material acquisition agreement, a licensing deal with significant IP implications, or a contract that requires jurisdiction-specific regulatory analysis. Those matters demand sustained involvement, iterative negotiation support, and the kind of relationship-based counsel that a flat-fee review service isn't designed to provide.
What it is is a realistic option for the contract review that most companies are currently doing inadequately or not at all — the NDA that lands on a founder’s desk at 9am for a partnership call at 2pm, the vendor agreement that looks standard but has an indemnification clause that isn’t, the commercial contract that the company is going to sign one way or another but might as well sign with some understanding of what they’re agreeing to. Attorney work product, delivered at a price point and on a timeline that makes the service actually usable.
The courts are going to keep developing this area. Heppner is the first opinion to address privilege directly in the context of consumer AI use, but it won’t be the last. The Kovel question Judge Rakoff gestured at — whether and how attorney-directed AI use can function as an extension of the attorney-client relationship — is genuinely unsettled, and the answers will matter significantly for how legal AI products are structured. My read is that courts will ultimately land somewhere in the vicinity of a functional test: does the AI use, in context, serve the purpose of facilitating the delivery of legal advice by a licensed attorney to a client? If yes, and if the other conditions of privilege are met, the structure holds. If no — if the AI is a substitute for the attorney rather than a tool of the attorney — it doesn’t.
What I’ve tried to build is a service that passes that test. Not because I was designing for legal compliance in the abstract, but because the compliance and the value proposition turn out to be the same thing. The reason attorney supervision matters legally is the same reason it matters practically: someone licensed and accountable, with professional obligations to the client and real consequences for getting it wrong, is reviewing every analysis before it reaches anyone. The AI does the heavy lifting. The attorney is responsible for everything that leaves the platform.
That’s the model. It’s not complicated. It just requires being honest about what the AI is doing, what the attorney is doing, and where the line between them runs.
BespokeDocs is live at bespokedocs.com.