This is a rant. So, it will be mercifully short and posted on the weekend when few stop by. I’m fed up with e-discovery surveys. I mean those ersatz “studies” that solicit opinions about things that could be measured but aren’t, polling those sufficiently underemployed to respond and tallying and touting their responses as if they signified more than attitudes and prejudices.
Surveys have almost entirely displaced measurement in e-discovery. When you scratch the surface of the many so-called studies of e-discovery that aspire to an academic aura, they’re just studies of surveys of attitudes. No statistical rigor can make a lot of wild ass guesses anything more than a lot of wild ass guesses. The studies do a decent job documenting what people think might be fact, but tell us nothing about fact because guesses about measurement are not the same as measurement. No, not even when you gather many guesses.
The Blair & Maron BART document study famously showed us that perception of e-discovery outcomes and measurement of those outcomes diverge markedly. Polls don’t tell us where the money goes in e-discovery; and, why should we be surprised by this? A poll of ancient Greek scholars could have “proven” the flatness of the Earth. Seventy-seven percent of Americans polled believe angels are real and among us. People believe what suits them; but, smart people believe what they can measure.
My point is this: when it comes to e-discovery, virtually everything we hear—certainly every study of EDD cost I’ve ever seen—is based on processes wholly devoid of real measurement. Authors tally up guesstimates from surveys then pass them off as scholarship. It’s like taking tranches of bad mortgages and securitizing them as triple-A paper. We all know how well that worked.
So, enough with the silly surveys! They’re tired. They’re useless. They’re bunk. Let’s try defining and measuring to arrive at numbers that mean something. We’re not playing Family Feud here. I don’t want to know what the survey says. I want genuine metrics.
ESIDence said:
Thanks for calling out the silliness of these inbox-clutterers.
Metrics-free surveys (or those built on pseudo-metrics) unfortunately account for the vast majority of mass-mailed “survey” invitations – and they are ample evidence of the surveyors’ all-too-frequent “see-how-smart-WE-are” motivation.
If the METRICS are suspect, then one can be sure that any ROOT-CAUSE analysis was either poorly done, or omitted entirely. Much like a vacation home built on poorly-reclaimed land, such ‘surveys’ lack foundation and offer wasteful temptation for the unwary.
One could easily explore these concerns in the context of the rule-making process, but that’s another matter entirely.
AndrewBartholomew said:
I generally agree with your points, but I think it’s also important to consider that there are different types of surveys. Depending on the subject being covered, qualitative surveys are often the best we can do to gather valuable information. Take e-discovery spending, for example. Very few companies would be willing – or in some cases even able – to quantify and divulge how much they spend on e-discovery in a public-facing survey. Even if the data were presented anonymously, many corporations would still forgo participation out of fear that the information could slip into the wrong hands, or just to avoid the hassle of pulling figures that probably aren’t all that carefully tracked or centralized. However, those same corporations might be willing to indicate on a survey which specific areas they feel they spend too much on, or share what measures they are pursuing internally to reduce costs. I agree that attempts at “guessing” or “estimating” on things that can and should be precisely measured are fairly worthless and frustrating. But I also believe that in some cases, a well-crafted survey, as limited as it may be, is better than nothing. Thanks for another thoughtful post.
craigball said:
Andrew, thanks for the comment. You have articulated the defense as well as anyone could. But at the risk of being thought a purist, I resist the notion that, because there is such resistance to sharing reliable data, surveyors are justified in using whatever data they can muster. You did not say that, but it is what I took away. My opinion–no more righteous than yours, to be sure–is that using suspect data to support suspect conclusions causes more harm than lacking the metrics to support any conclusion at all. Would that all my disagreements could be urged upon opponents who have given the issue as much thought as you have.