This is a rant. So, it will be mercifully short and posted on the weekend when few stop by. I’m fed up with e-discovery surveys. I mean those ersatz “studies” that solicit opinions about things that could be measured but aren’t, polling those sufficiently underemployed to respond and tallying and touting their responses as if they signified more than attitudes and prejudices.
Surveys have almost entirely displaced measurement in e-discovery. When you scratch the surface of the many so-called studies of e-discovery that aspire to an academic aura, they’re just surveys of attitudes. No statistical rigor can make a lot of wild-ass guesses anything more than a lot of wild-ass guesses. The studies do a decent job documenting what people think might be fact, but they tell us nothing about fact, because guesses about measurement are not the same as measurement. No, not even when you gather many guesses.
The Blair & Maron BART document study famously showed us that perception of e-discovery outcomes and measurement of those outcomes diverge markedly. Polls don’t tell us where the money goes in e-discovery; and why should we be surprised by this? A poll of ancient Greek scholars could have “proven” the flatness of the Earth. Seventy-seven percent of Americans polled believe angels are real and among us. People believe what suits them; but smart people believe what they can measure.
My point is this: when it comes to e-discovery, virtually everything we hear—certainly every study of EDD cost I’ve ever seen—is based on processes wholly devoid of real measurement. Authors tally up guesstimates from surveys, then pass them off as scholarship. It’s like taking tranches of bad mortgages and securitizing them as triple-A paper. We all know how well that worked.
So, enough with the silly surveys! They’re tired. They’re useless. They’re bunk. Let’s try defining and measuring to arrive at numbers that mean something. We’re not playing Family Feud here. I don’t want to know what the survey says. I want genuine metrics.