
Ball in your Court

~ Musings on e-discovery & forensics.

Category Archives: Law Practice & Procedure

A Dog and Its Tail: Don’t Let Version Uncertainty Cloud Linked Attachment Production

02 Thursday Apr 2026

Posted by craigball in Computer Forensics, E-Discovery, Law Practice & Procedure

≈ 4 Comments

Tags

ESI Protocols, Linked attachments

Two years ago, I wrote a pair of posts (3/29/24 and 4/8/24) about linked attachments—what Microsoft calls “Cloud Attachments”—arguing that producing parties had been getting away with murder by not collecting and searching them.  The argument was straightforward: a linked attachment is no less relevant than an embedded one, the tools to collect them exist, and the claimed burdens, though genuine, were exaggerated.

Nothing that’s happened since has changed that core proposition.  If anything, developments in case law, the Sedona Conference’s 2025 Commentary on collaboration platform discovery, and the emergence of proposed technical standards have reinforced it.  But those same developments carry a risk I want to flag: that the versioning question—which version of a linked attachment is the “right” one—is being elevated in ways that could hand producing parties a shiny new excuse for doing nothing.

What’s Changed Since Then

The landscape has shifted since then, and largely in the right direction.

Courts are beginning to tiptoe toward examining what tools can actually do rather than accepting blanket claims of infeasibility.  The Carvana securities litigation is perhaps the most striking example: the court ordered a bounded forensic capability test using a specific tool, then expanded it when the initial pilot supported further testing.  That’s a different approach from what we’ve seen before—a court saying, in effect, “show me what you can recover, don’t just tell me you can’t.”

The Sedona Conference published its Commentary on Discovery of Collaboration Platforms Data in 2025, acknowledging the distinct preservation, collection, and production challenges these platforms present.  When Sedona identifies a problem, that identification becomes part of the baseline against which “reasonable steps” under Rule 37(e) will be measured.  Parties who were aware of these challenges—and by now, every competent e-discovery practitioner should be—will find it increasingly hard to argue that their traditional, email-era workflow was good enough.

And a proposed technical standard—the Reconstruction-Grade eDiscovery Standard, authored by Peter Kozak and Brandon D’Agostino—has articulated an architectural framework for what preservation of collaborative evidence should look like.  It’s ambitious and thoughtful.  I want to engage with it constructively, because I think it gets several things right.  But I also want to sound a caution about how standards like this could be deployed in the real world of discovery disputes.

Two Problems

The RG standard does something valuable: it names and taxonomizes the specific ways that traditional preservation fails when evidence is collaborative, hyperlinked, and versioned.  Its framework identifies what it calls the “Preservation Gap” (the referenced content is never preserved at all) and the “Context Gap” (the content is preserved but not in the state it existed at the relevant time).  That’s a useful distinction.

But here’s where I part company—not with the standard’s laudable intent, but with the risk of how it may play out in the field.

The standard treats deterministic version resolution—preserving the as-sent version of a linked document, the version that existed when the message was transmitted—as a core conformance requirement.  Architecturally, I understand why.  If you’re building a system that aspires to reconstruction-grade fidelity, you want to capture the version the recipient would have seen when they clicked the link.  That’s the gold standard.

The problem is that the gold standard can become the enemy of any standard at all. 

To my eye, the versioning concern has been weaponized.  It goes like this: a requesting party asks for linked attachments.  The producing party raises the specter of versioning—“Which version do you want?  The as-sent version?  The as-accessed version?  The current version?  We can’t be sure which is the ‘right’ one, so the whole exercise is fraught with uncertainty.”  And that uncertainty becomes the justification for producing no version.  Not the wrong version.  No version.

That’s the tail wagging the dog.

The “Dog” Is Collection

The threshold obligation is to collect and search linked attachments.  Full stop.  A link in an email reveals nothing about the content of the linked document.  If you don’t collect the document, you can’t search it.  If you can’t search it, you can’t assess it for relevance.  And if you can’t assess it for relevance, you’re making a unilateral decision to exclude potentially responsive evidence—evidence that, but for a shift in how email systems handle large files, would have been embedded in the message and collected automatically.

That obligation exists independently of any versioning question.  It existed before anyone coined the term “reconstruction-grade.”  It existed when I wrote about it two years ago, and it existed for years before that.  “Perfect” is not the standard in e-discovery, but neither is “lousy.”

Beware, too, the half-measure.  A producing party, pressed on missing linked attachments, may offer to search the email text first and seek out the linked attachment only if the parent email hits on a keyword.  This sounds reasonable until you think about how email actually works.  It is exceedingly common for a transmitting email to say nothing more than “Please see attached” or “Here’s the draft we discussed,” while the attachment contains all the substantive content.  If the email text doesn’t trigger a keyword, the attachment—however rich in relevant material—never gets collected or searched.  And even if the attachment is later produced as a loose document, it won’t tie to its “parent” transmitting message.

When we search email families containing embedded attachments, we treat the family as responsive if either the message or the attachment generates a hit.  Any workflow that conditions collection of linked attachments on hits in the transmitting email inverts that logic and guarantees that a large share of responsive evidence will be missed.
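To make that logic concrete, here is a minimal Python sketch of family-level responsiveness.  Every name in it (the toy family record, the naive keyword check) is a hypothetical illustration, not any vendor’s actual workflow:

```python
# Family-level search logic: a family is responsive if ANY member hits,
# so linked attachments must be in the searchable population *before*
# keyword screening, not fetched only when the parent email hits.
from dataclasses import dataclass, field

@dataclass
class EmailFamily:
    message_text: str                                     # body of the transmitting email
    attachment_texts: list = field(default_factory=list)  # extracted text of each attachment

def keyword_hit(text: str, terms: list) -> bool:
    """Naive case-insensitive containment check standing in for a real search engine."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in terms)

def family_is_responsive(family: EmailFamily, terms: list) -> bool:
    # Correct logic: a hit on the message OR any attachment makes the family responsive.
    return keyword_hit(family.message_text, terms) or any(
        keyword_hit(att, terms) for att in family.attachment_texts
    )

# "Please see attached" carries no keywords; the attachment carries them all.
family = EmailFamily("Please see attached.", ["Q3 revenue restatement discussed at the board meeting"])
print(family_is_responsive(family, ["restatement"]))  # True, but only if the attachment was collected
```

Condition collection on the parent email alone and that final line never gets the chance to return True; the attachment is invisible to the search.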

A producing party that collects and searches the current version of a linked attachment has done something meaningful.  They’ve brought the document into the review population.  They’ve assessed its content against the issues in the case.  They’ve preserved the family relationship between message and attachment.  They may not have captured the precise version that existed at send time, but they’ve captured a version—one that, in the overwhelming majority of cases, is likely to be the same or substantially similar to the transmitted version.

A producing party that collects nothing because of versioning uncertainty has done nothing.  Lousy.

The “Tail” Is Versioning

I don’t dismiss the versioning issue.  It’s real, and the RG standard is right to address it.  There are cases where the difference between the as-sent version and the current version matters enormously—a contract with terms that changed, a financial model with revised projections, a compliance policy that was softened after the relevant communication.  In those cases, producing the wrong version could mislead or, worse, could conceal what the actors actually relied upon.

But how often does this actually happen?

Two years ago, I called for objective analysis: what percentage of cloud attachments are actually modified after transmittal?  I’m repeating the call, louder, because the industry still hasn’t answered it.

I have a strong intuition—and I want to be candid that it’s an intuition based on experience, not evidence—that the incidence of post-transmittal modification is modest overall.  My suspicion is that fewer than ten to twenty percent of linked attachments are meaningfully modified after being shared, and perhaps far fewer than that.  Most cloud attachments are final or near-final documents shared for information, not living collaborative drafts.  Someone emails a report, a slide deck, a signed contract.  The link is a delivery mechanism, not an invitation to co-author.

But I also suspect the percentage varies widely depending upon the culture.  An organization whose culture runs to emailing finished work product will have a very different modification profile than one where teams routinely share early drafts via links for iterative editing in SharePoint.  A law firm circulating closing documents will look different from a product team sharing design specs that change daily.  The incidence of versioning concerns is likely a function of organizational work style, not some universal constant.

Here’s the point: I don’t have solid metrics.  I believe what I’m describing here, but belief is not evidence, and I would readily yield my suspicion to meaningful measurement.  The data needed to resolve this question is not exotic.  Any organization with a reasonably mature M365 environment could sample and compare the version history of linked attachments against the timestamps of the messages that transmitted them.  The analysis would tell us, for a given corpus, what percentage of linked attachments were modified after the transmitting message was sent, how significantly they were modified, and how soon after transmittal the modifications occurred.  That’s a study someone should do—a vendor, a consultant, an academic, a standards body.  It would replace speculation with evidence and give courts and practitioners a rational basis for calibrating the proportionality of versioning remediation.  Too, litigants coming to court seeking relief from the duty to collect linked attachments should collect the metrics to measure the claimed risk and burden.
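For the technically inclined, here is a back-of-the-envelope Python sketch of the comparison such a study would run.  The sample records are invented stand-ins for data an organization would actually pull (say, SharePoint version histories and message sent times); the acquisition step is the hard part, and it is omitted here:

```python
# For each linked attachment, compare the transmitting message's sent time
# against the document's version history: was it modified after transmittal,
# and how soon? The tuples below are hypothetical sampled records.
from datetime import datetime

# (message_sent, [version timestamps]) for each sampled linked attachment
samples = [
    (datetime(2025, 3, 1, 9, 0), [datetime(2025, 2, 27, 16, 0)]),                              # untouched after send
    (datetime(2025, 3, 2, 9, 0), [datetime(2025, 3, 1, 10, 0), datetime(2025, 3, 9, 14, 0)]),  # edited a week later
    (datetime(2025, 3, 3, 9, 0), [datetime(2025, 3, 3, 8, 0), datetime(2025, 3, 3, 9, 5)]),    # edited minutes later
]

post_send = [(sent, [v for v in versions if v > sent]) for sent, versions in samples]
n_modified = sum(1 for _, later in post_send if later)

print(f"{n_modified}/{len(samples)} linked attachments modified after transmittal "
      f"({100 * n_modified / len(samples):.0f}%)")
for sent, later in post_send:
    if later:
        print(f"  message of {sent:%Y-%m-%d %H:%M}: first post-send edit {min(later) - sent} later")
```

The arithmetic is trivial; the value lies in running it over a real corpus and publishing the distribution.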

Until we have that data, we’re arguing about a problem whose magnitude we don’t grasp, while ignoring a problem whose magnitude is obvious: linked attachments aren’t being collected as they should be.

Don’t Throw Out the Baby

I want to be clear about what I’m not saying.  I’m not saying the RG standard is wrong to aspire to as-sent version resolution.  I’m not saying versioning doesn’t matter.  And I’m not attributing to the standard’s authors any intent to create a new excuse for non-production.  Reading the standard carefully, its concept of graduated conformance levels and its emphasis on proportionality suggest the opposite intent.

But standards exist in an adversarial ecosystem.  A standard that defines three conformance levels—RG-Core, RG-Plus, RG-Max—can be turned into a shield by a party arguing: “Your Honor, we can’t achieve even RG-Core conformance, so we shouldn’t be required to attempt collection of linked attachments.”  That argument confuses the standard’s aspirational architecture with the floor of a party’s discovery obligations.

The floor is not reconstruction-grade fidelity.  The floor is reasonable steps under Rule 37(e) and the obligation to search and produce relevant, responsive, non-privileged material.  That floor requires, at minimum, that you collect linked attachments using the tools your platform provides, search them, and produce responsive documents—even if you’re producing the current version rather than the as-sent version.

To put it another way: producing the “wrong” version of a responsive document is a problem.  Producing no version of a responsive document is a bigger problem.

I’ve been accused of leaning toward the interests of plaintiffs on this topic.  That’s neither fair nor accurate.  I advocate for evidence.  I’m committed to getting to the evidence that resolves disputes in what Rule 1 of the Federal Rules calls a “just, speedy, and inexpensive” fashion.  Not perfect.  Certainly not at any cost.  But I won’t accommodate high-handed, evasive approaches to the duty to produce responsive, non-privileged evidence—and dressing up a refusal to collect linked attachments in the language of versioning complexity is exactly that.

What the Standard Gets Right

Credit where it’s due.  Several elements of the RG framework strike me as genuinely constructive:

Exception transparency.  The standard requires structured records of what couldn’t be collected and why.  In the current landscape, failures are silent.  A linked attachment that can’t be retrieved simply disappears—no record that it was attempted, no record that it failed, no record of why.  Requiring a producing party to document its failures is a significant improvement over the status quo, where the absence of evidence is invisible.  Notably, courts have already begun requiring this kind of transparency on an ad hoc basis.  In the Uber litigation, Judge Cisneros ordered two custom metadata fields—“Missing Google Drive Attachments” and “Non-Contemporaneous”—to flag gaps and version discrepancies in the production.  What the RG standard proposes as a systemic architectural requirement, courts are already imposing case by case.  Formalizing that expectation is a natural and constructive next step.
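To illustrate what structured exception transparency might look like in practice, here is a minimal sketch.  The two flag fields borrow their names from the Uber order described above; everything else in the schema is my hypothetical, not anything the RG standard or any court prescribes:

```python
# A structured record of a linked-attachment collection attempt: what was
# tried, what happened, and whether the gaps the Uber order flagged apply.
import json
from dataclasses import dataclass, asdict

@dataclass
class LinkedAttachmentException:
    parent_message_id: str                 # identifier of the transmitting email
    link_url: str                          # the hyperlink as it appeared in the message
    attempted_at: str                      # ISO timestamp of the collection attempt
    outcome: str                           # e.g., "collected-current", "not-found", "permission-denied"
    missing_google_drive_attachment: bool  # flag field named in the Uber order
    non_contemporaneous: bool              # flag field named in the Uber order

record = LinkedAttachmentException(
    parent_message_id="MSG-000123",                # hypothetical
    link_url="https://drive.example.com/doc/abc",  # hypothetical
    attempted_at="2026-03-30T14:22:00Z",
    outcome="collected-current",
    missing_google_drive_attachment=False,
    non_contemporaneous=True,   # we captured *a* version, just not the as-sent one
)
print(json.dumps(asdict(record), indent=2))
```

The point isn’t the schema; it’s that failures leave a record instead of disappearing.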

The Preservation Gap vs. Context Gap distinction.  Naming these as separate failure modes is useful because they have different legal implications.  The Preservation Gap—evidence that was never preserved at all—maps cleanly to Rule 37(e).  The Context Gap—evidence preserved in the wrong state—is doctrinally murkier.  Courts don’t yet have a clean framework for “you preserved it, but what you preserved isn’t what was communicated.”  Distinguishing the two helps practitioners and courts think more precisely about what went wrong and what remedies are appropriate.

Capability testing as an emerging judicial norm.  The companion post to the standard highlights Carvana and the broader trajectory of courts ordering parties to demonstrate what their tools can do.  This is a welcome and overdue development.  The e-discovery conversation around linked attachments has too often been dominated by conclusory assertions of infeasibility.  Capability testing replaces assertion with demonstration, and that benefits everyone—including producing parties who have invested in the right tools and want credit for doing so.

Where We Go from Here

The path forward requires distinguishing between the immediate obligation and the aspirational architecture.

The immediate obligation is collection.  If you’re on Microsoft 365, use Purview.  If you’re on Google Workspace, use Vault.  These tools aren’t perfect, but they exist, and they collect linked attachments.  The version you collect may be the current version rather than the as-sent version.  That’s a known limitation, not a reason to collect nothing.

The aspirational architecture is reconstruction-grade fidelity—as-sent version resolution, deterministic exception handling, reproducible exports.  That’s where the industry needs to go.  Tools like Forensic Email Collector are already demonstrating that historical version recovery is technically possible in many cases.  The Carvana court’s willingness to order capability testing suggests that judges are ready to push the envelope.

But the bridge between those two isn’t “wait until perfect tools exist.”  The bridge is “do what you can now, document what you can’t, and improve your capabilities over time.”

That’s what proportionality actually means.  Not perfection.  Not paralysis.  But reasonable, good-faith efforts commensurate with the stakes and the state of the art.

The versioning problem will resolve because courts will order testing, because tools will improve, because someone will finally produce the empirical data on post-transmittal modification rates (pretty please), and because standards like the RG framework will mature.  These are all good-faith efforts to move the law and the industry forward, and they deserve recognition for it.

In the meantime, the producing party’s obligation is clear: collect the linked attachments, search them, and produce what’s responsive.

The tail does not get to wag the dog.

Hat tip to Doug Austin for highlighting the publication of the Reconstruction-Grade eDiscovery Standard on his eDiscovery Today blog.  Doug continues to be an indispensable resource for practitioners trying to keep pace with developments in this space.

© 2026 Craig D. Ball.  All rights reserved.


Detecting Deep Fakes

24 Tuesday Feb 2026

Posted by craigball in ai, Computer Forensics, E-Discovery, General Technology Posts, Law Practice & Procedure

≈ 2 Comments

This morning, I was approached to present in Texas on deep fake evidence and what litigators need to know to confront it.  It’s to be called, “Real or Rigged: How to Know Whether Evidence Is Fake.” I realized, to my chagrin, that I didn’t have a paper I could hand out—no single place where I had pulled together the technical realities, evidentiary doctrine, and practical litigation tactics this subject demands. So, I wrote one. Whether I ultimately give the talk remains to be seen, but I’m hopeful the resulting article will prove useful to you. The paper—Forensic Tells: A Practitioner’s Guide to Detecting Deep Fakes and Authenticating Digital Evidence—runs about thirty pages and is available here.

The piece starts from a simple premise: digital evidence does not fall like manna from heaven; it has a provenance that speaks to its authenticity. It is fundamentally different from paper because it carries a payload of information about its origins and handling—metadata that functions as a chain of custody embedded within the file itself. In an era when AI systems can generate convincing photographs, videos, and audio recordings of events that never occurred, that metadata has become the last line of defense against manufactured reality.

While I regard myself as much more a student of AI than an authority, I’ve been writing about metadata and evidence as long as anyone on two legs; so, I hope I bring something of value to the topic.  You be the judge.  The article explains, in practical terms, how synthetic media is created, why fabricated media often lacks the coherent metadata of authentic recordings, and how lawyers can use that disparity to authenticate—or challenge—digital evidence. It also addresses the emerging “liar’s dividend,” the phenomenon whereby wrongdoers dismiss authentic recordings as fake simply because the technology exists to fabricate them.

More importantly, the article is written as a practitioner’s guide, not a technical treatise. It outlines concrete discovery strategies: demanding native files, targeting interrogatories and requests for admission, pursuing third-party records, and, where necessary, seeking forensic examination of source devices. It explains what to look for in metadata, what visual and auditory artifacts may signal manipulation, and how federal and Texas evidence rules—including Rules 901 and 902—apply to synthetic media challenges. It closes with a practical checklist and discussion of emerging provenance technologies that may someday make authentication easier—but, for now, make it more essential that lawyers understand how to ask the right questions.
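As a taste of the metadata cross-checks the paper walks through, here is a minimal Python sketch comparing a photo’s embedded EXIF capture time against its file-system timestamp.  It assumes the Pillow library is installed, and the file name is hypothetical.  Neither coherence nor divergence is dispositive by itself; it’s one tell among many:

```python
# Cross-check a photo's embedded capture time against the file system.
# Assumes Pillow (pip install Pillow); "evidence.jpg" is a hypothetical file.
import os
from datetime import datetime
from PIL import Image

PATH = "evidence.jpg"

exif = Image.open(PATH).getexif()
raw = exif.get(306)  # EXIF tag 306 is DateTime ("YYYY:MM:DD HH:MM:SS")
if raw is None:
    print("No EXIF DateTime: common in AI-generated or scrubbed images; dig deeper.")
else:
    captured = datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
    fs_modified = datetime.fromtimestamp(os.path.getmtime(PATH))
    print(f"EXIF capture time:    {captured}")
    print(f"File-system modified: {fs_modified}")
    if fs_modified < captured:
        print("File 'modified' before it was 'captured': a classic inconsistency.")
```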

Your feedback is always welcome and appreciated.


A Master Table of Truth

04 Tuesday Nov 2025

Posted by craigball in ai, Computer Forensics, E-Discovery, General Technology Posts, Law Practice & Procedure, Uncategorized

≈ 5 Comments

Tags

ai, artificial-intelligence, chatgpt, eDiscovery, generative-ai, law, technology

Lawyers using AI keep turning up in the news for all the wrong reasons—usually because they filed a brief brimming with cases that don’t exist. The machines didn’t mean to lie. They just did what they’re built to do: write convincingly, not truthfully.

When you ask a large language model (LLM) for cases, it doesn’t search a trustworthy database. It invents one. The result looks fine until a human checks it: a judge, an opponent, or an intern with Westlaw access. That’s when fantasy law meets federal fact.

We call these fictions “hallucinations,” which is a polite way of saying “making shit up;” and though lawyers are duty-bound to catch them before they reach the docket, some don’t. The combination of an approaching deadline and a confident-sounding computer is a dangerous mix.

Perhaps a Useful Guardrail

It struck me recently that the legal profession could borrow a page from the digital forensics world, where we maintain something called the NIST National Software Reference Library (NSRL). The NSRL is a public database of hash values for known software files. When a forensic examiner analyzes a drive, the NSRL helps them skip over familiar system files—Windows DLLs and friends—so they can focus on what’s unique or suspicious.
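Here, in miniature, is how that known-file filtering works.  The hash set below is a toy stand-in for the real NSRL reference data:

```python
# Known-file filtering, NSRL-style: hash each file and skip anything whose
# hash appears in a reference set of knowns, leaving only the unfamiliar.
import hashlib
from pathlib import Path

# Toy stand-in for the NSRL's published hash values of known software files.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",   # hypothetical entry
}

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

def triage(root: Path):
    """Yield only files NOT in the known-file set: the ones worth examining."""
    for p in root.rglob("*"):
        if p.is_file() and md5_of(p) not in KNOWN_HASHES:
            yield p

# for suspicious in triage(Path("/evidence/image_mount")):   # hypothetical mount point
#     print(suspicious)
```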

So here’s a thought: what if we had a master table of genuine case citations—a kind of NSRL for case citations?

Picture a big, continually updated, publicly accessible table listing every bona fide reported decision: the case name, reporter, volume, page, court, and year. When your LLM produces Smith v. Jones, 123 F.3d 456 (9th Cir. 2005), your drafting software checks that citation against the table.

If it’s there, fine—it probably references a genuine reported case.
If it’s not, flag it for immediate scrutiny.

Think of it as a checksum for truth. A simple way to catch the most common and indefensible kind of AI mischief before it becomes Exhibit A at a disciplinary hearing.
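A minimal sketch of that checksum, assuming a simplified single-token reporter format and a toy two-row master table (a real system would need the normalization and coverage discussed below):

```python
# Parse a citation of the common "Vol Reporter Page" shape and look up the
# (volume, reporter, page) triple in the master table.
import re

MASTER_TABLE = {
    ("123", "F.3d", "456"): "Smith v. Jones (9th Cir. 2005)",   # hypothetical entries
    ("410", "U.S.", "113"): "Roe v. Wade (1973)",
}

# Single-token reporters only ("F.3d", "U.S."); multi-word reporters like
# "So. 2d" are exactly the normalization headache discussed below.
CITE = re.compile(r"(?P<vol>\d+)\s+(?P<rep>[A-Za-z][A-Za-z.0-9]*)\s+(?P<page>\d+)")

def check(citation: str) -> str:
    m = CITE.search(citation)
    if m is None:
        return "UNPARSEABLE: flag for human review"
    key = (m["vol"], m["rep"], m["page"])
    if key in MASTER_TABLE:
        return f"FOUND {MASTER_TABLE[key]}: probably genuine, but still verify the holding"
    return "NOT IN TABLE: treat as presumptively hallucinated"

print(check("Smith v. Jones, 123 F.3d 456 (9th Cir. 2005)"))   # FOUND
print(check("Alpha v. Beta, 999 F.4th 1 (1st Cir. 2031)"))     # NOT IN TABLE
```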

The Obstacles (and There Are Some)

Of course, every neat idea turns messy the moment you try to build it.

Coverage is the first challenge. There are millions of decisions, with new ones arriving daily. Some are published, some are “unpublished” but still precedential, and some live only in online databases. Even if we limited the scope to federal and state appellate courts, keeping the table comprehensive and current would be an unending job, but not an insurmountable obstacle.

Then there’s variation. Lawyers can’t agree on how to cite the same case twice. The same opinion might appear in multiple reporters, each with its own abbreviation. A master table would have to normalize all of that—an ambitious act of citation herding.

And parsing is no small matter. AI tools are notoriously careless about punctuation. A missing comma or swapped parenthesis can turn a real case into a false negative. Conversely, a hallucinated citation that happens to fit a valid pattern could fool the filter, which is why it’s not the sole filter.

Lastly, governance. Who would maintain the thing? Westlaw and Lexis maintain comprehensive citation data, but guard it like Fort Knox. Open projects such as the Caselaw Access Project and the Free Law Project’s CourtListener come close, but they’re not quite designed for this kind of validation task. To make it work, we’d need institutional commitment—perhaps from NIST, the Library of Congress, or a consortium of law libraries—to set standards and keep it alive.

Why Bother?

Because LLMs aren’t going away. Lawyers will keep using them, openly or in secret. The question isn’t whether we’ll use them—it’s how safely and responsibly we can do so.

A public master table of citations could serve as a quiet safeguard in every AI-assisted drafting environment. The AI could automatically check every citation against that canonical list. It wouldn’t guarantee correctness, but it would dramatically reduce the risk of citing fiction. Not coincidentally, it would have prevented most of the public excoriation of careless counsel we’ve seen.

Even a limited version—a federal table, or one covering each state’s highest court—would be progress. Universities, courts, and vendors could all contribute. Every small improvement to verifiability helps keep the profession credible in an era of AI slop, sloppiness and deep fakes.

No Magic Bullet, but a Sensible Shield

Let’s be clear: a master table won’t prevent all hallucinations. A model could still misstate what a case holds, or cite a genuine decision for the wrong proposition. But it would at least help keep the completely fabricated ones from slipping through unchecked.

In forensics, we accept imperfect tools because they narrow uncertainty. This could do the same for AI-drafted legal writing—a simple checksum for reality in a profession that can’t afford to lose touch with it.

If we can build databases to flag counterfeit currency and pirated software, surely we can build one to spot counterfeit law?

Until that day, let’s agree on one ironclad proposition: if you didn’t verify it, don’t file it.


Chambers Guidance: Using AI Large Language Models (LLMs) Wisely and Ethically

19 Thursday Jun 2025

Posted by craigball in ai, General Technology Posts, Law Practice & Procedure

≈ 3 Comments

Tags

ai, artificial-intelligence, chatgpt, generative-ai, law, LLM, technology

Tomorrow, I’m delivering a talk to the Texas Second Court of Appeals (Fort Worth), joined by my friend, Lynne Liberato of Houston. We will address LLM use in chambers and in support of appellate practice, where Lynne is a noted authority. I’ll distribute my 2025 primer on Practical Uses for AI and LLMs in Trial Practice, but will also offer something bespoke to the needs of appellate judges and their legal staff: something to the point, with cautions crafted to avoid the high-profile pitfalls of lawyers who trust but don’t verify.

Courts must develop practical internal standards for the use of LLMs in chambers. These AI applications are too powerful to ignore and too risky to use without attention to safe practice.

Chambers Guidance: Using AI Large Language Models (LLMs) Wisely and Ethically

Prepared for Second District Court of Appeals (Fort Worth)


Purpose
This document outlines recommended practices for the safe, productive, and ethical use of large language models (LLMs) like ChatGPT-4o in chambers by justices and their legal staff.


I. Core Principles

  1. Human Oversight is Essential
    LLMs may assist with writing, summarization, and idea generation, but should never replace legal reasoning, human editing, or authoritative research.
  2. Confidentiality Must Be Preserved
    Use only secure platforms. Turn off model training/sharing features (“model improvement”) in public platforms or use private/local deployments.
  3. Verification is Non-Negotiable
    Never rely on an LLM for case citations, procedural rules, or holdings without confirming them via Westlaw, Lexis, or court databases.  Every citation is suspect until verified.
  4. Transparency Within Chambers
    Staff should disclose when LLMs were used in a draft or summary, especially if content was heavily generated.  Prompt/output history should be preserved in chambers files.
  5. Judicial Independence and Public Trust
    While internal LLM use may be efficient, it must never undermine public confidence in the independence or impartiality of judicial decision-making. The use of LLMs must not give rise to a perception that core judicial functions have been outsourced to AI.

II. Suitable Uses of LLMs in Chambers

  • Drafting initial outlines of bench memos or summaries of briefs
  • Rewriting judicial prose for clarity, tone, or readability
  • Summarizing long records or extracting procedural chronologies
  • Brainstorming counterarguments or exploring alternative framings
  • Comparing argumentative strength and inconsistencies of and between parties’ briefs

Note: Use of AI output that may materially influence a decision must be identified and reviewed by the judge or supervising attorney.


III. Prohibited or Cautioned Uses

  • Do not insert any LLM-generated citation into a judicial order, opinion, or memo without independent confirmation
  • Do not input sealed or sensitive documents into unsecured platforms
  • Do not delegate critical judgment or reasoning tasks to the model (e.g., weighing legal precedent, assessing credibility, or determining binding authority)
  • Do not rely on LLMs to generate summaries of legal holdings without human review of the supporting authority

IV. Suggested Prompts for Effective Use

These prompts may be useful when paired with careful human oversight and verification:

  • “Summarize this 40-page brief into 5 bullet points, focusing on procedural history.”
  • “Summarize the uploaded transcript respecting the following points….”
  • “Summarize the key holdings and the law in this area.”
  • “Rewrite this paragraph for clarity, suitable for a published opinion.”
  • “List potential counterarguments to this position in a Texas appellate context.”
  • “Explain this concept as if to a first-year law student.”

Caution: Prompts seeking legal summaries (e.g., “What is the holding of X?” or “Summarize the law on Y”) are particularly prone to error and must be treated with suspicion. Always verify output against primary legal sources.


V. Public Disclosure and Transparency

Although internal use of LLMs may not require disclosure to parties, courts must be sensitive to the risk that judicial reliance on AI—even as a drafting aid—may be scrutinized. Consider whether and what disclosure may be warranted in rare cases when LLM-generated language substantively shapes a judicial decision.

VI. Final Note

Used wisely, LLMs can save time, increase clarity, and prompt critical thought. Used blindly, they risk error, overreliance, or breach of confidentiality. The justice system demands precision; LLMs can support it—but only under a lawyer’s and judge’s careful eye and hand.


Prepared by Craig Ball and Lynne Liberato, advocating thoughtful AI use in appellate practice.

Of course, the proper arbiters of standards and practices in chambers are the justices themselves; I don’t presume to know better, save to say that any approach that bans LLMs or presupposes AI won’t be used is naive. I hope the modest suggestions above help courts develop sound practical guidance for use of LLMs by judges and staff in ways that promote justice, efficiency and public confidence.


Tailor FRE 502(d) Orders to the Case

20 Monday Jan 2025

Posted by craigball in E-Discovery, Law Practice & Procedure

≈ 6 Comments

Tags

ethics, insurance, law, legal, news

Having taught Federal Rule of Evidence 502 (FRE 502) in my law classes for over a decade, I felt I had a firm grasp of its nuances. Yet recent litigation where I serve as Special Master prompted me to revisit the rule with Proustian ‘fresh eyes,’ uncovering insights I hope to share here.

I’ve long run with the herd in urging lawyers to “always get a 502 order,” never underscoring important safeguards against unintended outcomes; but lately, I had the opportunity to hear from experienced trial counsel on both sides of a FRE 502 order negotiation and have gained a more nuanced view.

Enacted in 2008, FRE 502 was a means to use the federal rules (and Congress’ adoption of the same) to harmonize widely divergent outcomes vis-à-vis subject matter waiver flowing from the inadvertent disclosure of privileged information. 

That’s a mouthful, and I know many readers aren’t litigators, so let’s lay a little foundation.

Confidential communications shared in the context of special relationships are largely shielded from compulsory disclosure by what is termed “privilege.”  You certainly know of the Fifth Amendment privilege against self-incrimination, and no doubt you’ve heard (if only in crime dramas) that confidential communications between a lawyer and client for the purpose of securing legal advice are privileged.  That’s the “attorney-client privilege.” Other privileges extend to, inter alia, spousal communications, confidences shared between doctor and patient and confidences between clergy and parishioner for spiritual guidance.  None of these privileges are absolute, but that’s a topic for another day. 

Yet another privilege, called “work-product protection,” shields from disclosure an attorney’s mental impressions, conclusions, opinions, or legal theories contained in materials prepared in anticipation of litigation or for trial.  Here, we need only consider the attorney-client privilege and work-product protection because FRE 502 applies exclusively to those two privileges.

Clearly, lawyers enjoy extraordinary and expansive rights to withhold privileged information, and lawyers really, REALLY hate to mess up in ways that impair those rights. I’d venture that as much effort and money is expended seeking to guard against the disclosure of privileged material as is spent trying to isolate relevant evidence. A whole lot, at any rate.

One of the quickest ways to lose a privilege is by sharing the privileged material with someone who isn’t entitled to claim the privilege.  Did the lawyer let the friend who drove the client to the law office sit in when confidences were exchanged?  Such actions waive the privilege.  Another is accidentally letting an opponent get a look at privileged material.  That can happen in a host of prosaic ways, even just by the wrong CC on an email.  More often, it’s a consequence of a failed e-discovery process, say, a reviewer or production error.  Inadvertently producing privileged information in discovery is every litigator’s nightmare.  It happens often enough that the various states and federal circuits developed different ways of balancing protection from waiver against findings that the waiver opened the door to further disclosure, a disaster scenario called “Subject Matter Waiver.”

Continue reading →
