Considering the billions of dollars spent on e-discovery every year, wouldn’t you think every trial lawyer would have some sort of e-discovery platform? Granted, the largest firms have tools; in fact, e-discovery software provider Relativity (lately valued at $3.6 billion) claims 198 of the 200 largest U.S. law firms as its customers. But, for the smaller firms and solo practitioners who account for 80% or more of lawyers in private practice, access to e-discovery tools falls off. Off a cliff, that is.
When law firms or solos seek my help obtaining native production, my first question is often, “What platform are you using?” Their answer is usually “PC” or simply a blank stare. When I add, “your e-discovery platform–the software tool you’ll use to review and search electronically stored information,” the dead air makes clear they haven’t a clue. I might as well ask a dog where it will drive if it catches the car.
Let’s be clear: no lawyer should expect to complete an ESI review of native forms using native applications.
Don’t do it.
I don’t care how many regale me with tales of their triumphs using Outlook or Microsoft Word as ‘review tools.’ That’s not how it’s done. It’s reckless. The integrity of electronic evidence will be compromised by that workflow. You will change hash values. You will alter metadata. Your searches will be spotty. Worst case scenario: your copy of Outlook could start spewing read receipts and calendar reminders. I dare you to dig your way out of that with a smile. Apart from the risks, review will be slow. You won’t be able to tag or categorize data. When you print messages, they’ll bear your name instead of the custodian’s name. Doh!
None of this is an argument against native production. It’s an argument against incompetence.
I am as dedicated a proponent of native production as you’ll find; but to reap its benefits and huge cost savings, you must use purpose-built review tools. Notwithstanding your best efforts to air gap computers and use working copies, something will fail. Just don’t do it.
You’ll also want to use an e-discovery review tool because nothing else will serve to graft the contents of load files onto native evidence. For the uninitiated, load files are ancillary, delimited text files supplied with a production and used to carry information about the items produced and the layout of the production.
I know some claim that native productions do away with the need for load files, and I concede there are ways to structure native productions to convey some of the data we now exchange via load files. But why bother? After years in the trenches, I’ve given up cursing the use of load files in native, hybrid and TIFF+ productions. Load files are clunky, but they’re a proven way to transmit filenames and paths, supply Bates numbers, track duplicates, share hash values, flag family relationships, identify custodians and convey system metadata (that’s the kind not stored in files but residing in the host system’s file table). Until there’s a better mousetrap, we’re stuck with load files.
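For readers who’ve never cracked one open, here’s a minimal sketch in Python of the kind of delimited load file I’m describing. The field names, values and pipe delimiter are hypothetical, chosen for legibility; real productions follow established specifications with their own delimiters and companion image cross-reference files, so treat this as an illustration, not a template.

```python
import csv

# A minimal, hypothetical load file: one row per produced item, carrying the
# kinds of system metadata described above. Field names, values and the "|"
# delimiter are illustrative, not any vendor's specification.
fields = ["BEGBATES", "FILENAME", "FILEPATH", "CUSTODIAN",
          "MD5HASH", "PARENT_BATES", "DATE_MODIFIED"]

rows = [
    ["ABC000001", "budget.xlsx", "\\Finance\\2020\\budget.xlsx", "Smith, J.",
     "d41d8cd98f00b204e9800998ecf8427e", "", "2020-03-15T09:42:00Z"],
    ["ABC000002", "memo.docx", "\\Legal\\memo.docx", "Smith, J.",
     "9e107d9d372bb6826bd81d3542a419d6", "ABC000001", "2020-03-16T11:05:00Z"],
]

with open("production.dat", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")   # real load files use their own delimiters
    writer.writerow(fields)
    writer.writerows(rows)
```

The review tool’s job is to parse rows like these and graft each field onto the corresponding produced item.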
The takeaway is get a tool. If you’re new to e-discovery, you need to decide what e-discovery tool you will use to review ESI and integrate load files. Certainly, no producing party can expect to get by without proper tools to process, cull, index, deduplicate, search, review, tag and export electronic evidence—and to generate load files. But requesting parties, too, are well-served to settle on an e-discovery platform before they serve their first Request for Production. Knowing the review tool you’ll use informs the whole process, particularly when specifying the forms of production and the composition of load files. Knowing the tool also impacts the keywords used in and structure of search queries.
There are a ton of tools out there, and one or two might not skin you alive on price. Kick some tires. Ask for a test drive. Shop around. Do the math. But, figure out what you’re going to do before you catch that car. Oh, and don’t even THINK about using Outlook and Word. I mean it. I’ve got my eye on you, McFly.
Where does the average person encounter binary data? Though we daily confront a deluge of digital information, it’s all slickly packaged to spare us the bare binary bones of modern information technology. All, that is, save the humble Universal Product Code, the bar code symbology on every packaged product we purchase from a 70-inch TV to a box of Pop Tarts. Bar codes and their smarter Japanese cousins, QR Codes, are perhaps the most unvarnished example of binary encoding in our lives.
Barcodes have an ancient tie to e-discovery as they were once used to Bates label hard copy documents, linking them to “objective coding” databases. A lawyer using barcoded documents was pretty hot stuff back in the day.
Just a dozen numeric characters are encoded by the ninety-five stripes of a UPC-A barcode, but those digits are encoded so ingeniously as to make them error resistant and virtually tamperproof. The black and white stripes of a UPC are the ones and zeroes of binary encoding. Each number is encoded as seven bars and spaces (12×7=84 bars and spaces) and an additional eleven bars and spaces denote start, middle and end of the UPC. The start and end markers are each encoded as bar-space-bar and the middle is always space-bar-space-bar-space. Numbers in a bar code are encoded by the width of the bar or space, from one to four units.
The bottle of Great Value purified water beside me sports the bar code at right.
Humans can read the numbers along the bottom, but the checkout scanner cannot; the scanner reads the bars. Before we delve into what the numbers signify in the transaction, let’s probe how the barcode embodies the numbers. Here, I describe a bar code format called UPC-A. It’s a one-dimensional code because it’s read across. Other bar codes (e.g., QR codes) are two-dimensional codes and store more information because they use a matrix that’s read side-to-side and top-to-bottom.
The first two black bars on each end of the barcode signal the start and end of the sequence (bar-space-bar). They also serve to establish the baseline width of a single bar to serve as a touchstone for measurement. Bar codes must be scalable for different packaging, so the ability to change the size of the codes hinges on the ability to establish the scale of a single bar before reading the code.
Each of the ten decimal digits of the UPC is encoded using seven “bar width” units, per the schema in the table at right.
To convey the decimal string 078742, the encoded sequence is 3211 1312 1213 1312 1132 2122, where each number in the encoding is the width of the bars or spaces. So, for the leading value “zero,” the number is encoded as seven consecutive units divided into bands of varying widths: a band three units wide, then (denoted by the change in color from white to black or vice-versa) a band two units wide, then one, then one. Do you see it? Once more, left-to-right: a white band three units wide, a dark band two units wide, then a single white band and a single dark band (3-2-1-1, encoding the decimal value zero).
You could recast the encoding in ones and zeroes, where a black bar is a one and a white bar a zero. If you did, the first digit would be 0001101, the number seven would be 0111011 and so on; but there’s no need for that, because the bands of light and dark are far easier to read with a beam of light than a string of printed characters.
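If you’d rather let a few lines of Python do the squinting, here’s a sketch that expands the width patterns for the five digits appearing in 078742 into their seven-bit equivalents (the full UPC-A table covers all ten digits; I’ve limited the dictionary to the digits discussed above):

```python
# Left-hand UPC-A digit patterns expressed as bar/space widths, limited to the
# digits appearing in 078742 (the full standard table covers 0-9).
WIDTHS = {"0": "3211", "2": "2122", "4": "1132", "7": "1312", "8": "1213"}

def digit_to_bits(digit):
    """Expand a width pattern into 7 bits; left-hand digits begin with a space (0)."""
    bits, color = "", "0"                      # start with a white band
    for width in WIDTHS[digit]:
        bits += color * int(width)
        color = "1" if color == "0" else "0"   # alternate space and bar
    return bits

for d in "078742":
    print(d, digit_to_bits(d))   # e.g., 0 -> 0001101, 7 -> 0111011
```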
Taking a closer look at the first six digits of my water bottle’s UPC, I’ve superimposed the widths and corresponding decimal value for each group of seven units. The top is my idealized representation of the encoding and the bottom is taken from a photograph of the label:
Now that you know how the bars encode the numbers, let’s turn to what the twelve digits mean. The first six digits generally denote the product manufacturer. 078742 is Walmart. 038000 is assigned to Kellogg’s. Apple is 885909 and Starbucks is 099555. The first digit can define the operation of the code. For example, when the first digit is a 5, it signifies a coupon and ties the coupon to the purchase required for its use. If the first digit is a 2, then the item is something sold by weight, like meats, fruit or vegetables, and the last six digits reflect the weight or price per pound. If the first digit is a 3, the item is a pharmaceutical.
Following the leftmost six-digit manufacturer code is the middle marker (11111, as space-bar-space-bar-space), followed by five digits identifying the product. Every size, color and combo demands a unique identifier to obtain accurate pricing and an up-to-date inventory.
The last digit in the UPC serves as an error-correcting check digit to ensure the code has been read correctly. The check digit derives from a calculation performed on the other digits, such that if any digit is altered, the check digit won’t match the changed sequence. Forget about altering a UPC with a black marker: the change wouldn’t work out to the same check digit, so the scanner would reject it.
In case you’re wondering, the first product to be scanned at a checkout counter using a bar code was a fifty-stick pack of Juicy Fruit gum in Troy, Ohio on June 26, 1974. It rang up for sixty-seven cents. Today, 45 sticks will set you back $2.48 (UPC 22000109989).
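Since we’re on the subject of check digits, the arithmetic is simple enough to sketch in a few lines of Python: triple the sum of the digits in the odd positions, add the digits in the even positions, and the check digit is whatever brings the total to a multiple of ten. Applied to the Juicy Fruit code above (with the leading zero that UPC-A’s twelve-digit format implies), the math checks out:

```python
# A sketch of the standard UPC-A check-digit arithmetic, applied to the Juicy
# Fruit code mentioned above, assuming the leading zero implied by UPC-A's
# twelve-digit format: 022000109989.
def upc_check_digit(first_eleven):
    odd = sum(int(d) for d in first_eleven[0::2])    # 1st, 3rd, 5th ... digits
    even = sum(int(d) for d in first_eleven[1::2])   # 2nd, 4th, 6th ... digits
    return (10 - (odd * 3 + even) % 10) % 10

upc = "022000109989"
print(upc_check_digit(upc[:11]))                   # 9 -- matches the final digit
print(upc_check_digit(upc[:11]) == int(upc[-1]))   # True: the code scans as valid
```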
“Time heals all wounds.” “Time is money.” “Time flies.”
To these memorable mots, I add one more: “Time is truth.”
A defining feature of electronic evidence is its connection to temporal metadata or timestamps. Electronically stored information is frequently described by time metadata denoting when ESI was created, modified, accessed, transmitted, or received. Clues to time are clues to truth because temporal metadata helps establish and refute authenticity, accuracy, and relevancy.
But in the realms of electronic evidence and digital forensics, time is tricky. It hides in peculiar places, takes freakish forms, and doesn’t always mean what we imagine. Because time is truth, it’s valuable to know where to find temporal clues and how to interpret them correctly.
Everyone who works with electronic evidence understands that files stored in a Windows (NTFS) environment are paired with so-called “MAC times,” which have nothing to do with Apple Mac computers or even the MAC address identifying a machine on a network. In the context of time, MAC is an initialism for Modified, Accessed and Created times.
That doesn’t sound tricky. Modified means changed, accessed means opened and created means authored, right? Wrong. A file’s modified time can change due to actions neither discernible to a user nor reflective of user-contributed edits. Accessed times change from events (like a virus scan) that most wouldn’t regard as accesses. Moreover, Windows stopped reliably updating file access times way back in 2007 when it introduced the Windows Vista operating system. Created may coincide with the date a file is authored, but it’s as likely to flow from the copying of the file to new locations and storage media (“created” meaning created in that location). Copying a file in Windows produces an object that appears to have been created after it’s been modified!
It’s crucial to protect the integrity of metadata in e-discovery, so changing file creation times by copying is a big no-no. Accordingly, e-discovery collection and processing tools perform the nifty trick of changing MAC times on copies to match times on the files copied. Thus, targeted collection alters every file collected, but done correctly, original metadata values are restored and hash values don’t change. Remember: system metadata values aren’t stored within the file they describe, so system metadata values aren’t included in the calculation of a file’s hash value. The upshot is that changing a file’s system metadata values—including its filename and MAC times—doesn’t affect the file’s hash value.
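You can verify that last point yourself with a few lines of Python (the filename is hypothetical): hash a file, push its timestamps around, and hash it again. The values match because the hash is computed over the file’s contents, not over the file table entries that hold system metadata.

```python
import hashlib, os

def md5_of(path):
    """MD5 of the file's contents only -- system metadata never enters the calculation."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

path = "evidence_copy.docx"        # hypothetical file
before = md5_of(path)

# Push the modified and accessed times back a year (values are seconds since the epoch).
stat = os.stat(path)
os.utime(path, (stat.st_atime - 31_536_000, stat.st_mtime - 31_536_000))

after = md5_of(path)
print(before == after)             # True: MAC times changed, hash value did not
```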
Conversely and ironically, opening a Microsoft Word document without making a change to the file’s contents can change the file’s hash value when the application updates internal metadata like the editing clock. Yes, there’s even a timekeeping feature in Office applications!
Other tricky aspects of MAC times arise from the fact that time means nothing without place. When we raise our glasses with the justification, “It’s five o’clock somewhere,” we acknowledge that time is tied to place. “Time” means time in a time zone, adjusted for daylight savings and expressed as a UTC offset stating the number of time zones ahead of or behind GMT, the time at the Royal Observatory in Greenwich, England, atop the Prime or “zero” Meridian.
Time values of computer files are typically stored in UTC, for Coordinated Universal Time, essentially Greenwich Mean Time (GMT) and sometimes called Zulu or “Z” time, military shorthand for zero meridian time. When stored times are displayed, they are adjusted by the computer’s operating system to conform to the user’s local time zone and daylight savings time rules. So in e-discovery and computer forensics, it’s essential to know if a time value is a local time value adjusted for the location and settings of the system or if it’s a UTC value. The latter is preferred in e-discovery because it enables time normalization of data and communications, supporting the ability to order data from different locales and sources across a uniform timeline.
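To make the distinction concrete, here’s a short Python sketch rendering a single stored UTC value in two local time zones; one instant, three different displays:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # standard library in Python 3.9+

# One instant, stored as UTC, displayed per the viewer's locale.
stored_utc = datetime(2020, 5, 2, 11, 58, 0, tzinfo=timezone.utc)

print(stored_utc.isoformat())                                   # 2020-05-02T11:58:00+00:00
print(stored_utc.astimezone(ZoneInfo("America/Chicago")))       # 2020-05-02 06:58:00-05:00 (CDT)
print(stored_utc.astimezone(ZoneInfo("America/Los_Angeles")))   # 2020-05-02 04:58:00-07:00 (PDT)
```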
Four months of pandemic isolation have me thinking about time. Lost time. Wasted time. Pondering where the time goes in lockdown. Lately, I had to testify about time in a case involving discovery malfeasance and corruption of time values stemming from poor evidence handling. When time values are absent or untrustworthy, forensic examiners draw on hidden time values—or, more accurately, encoded time values—to construct timelines or reveal forgeries.
Time values are especially important to the reliable ordering of email communications. Most e-mails are conversational threads, often a mishmash of “live” messages (with their rich complement of header data, encoded attachments and metadata) and embedded text strings of older messages. If the senders and receivers occupy different time zones, the timeline suffers: replies precede messages that prompted them, and embedded text strings make it child’s play to alter times and text. It’s just one more reason I always seek production of e-mail evidence in native and near-native forms, not as static images. Mail headers hold data that support authenticity and integrity—data you’ll never see produced in a load file.
Underscoring that last point, I’ll close with a wacky, wonderful example of hidden timestamps: time values embedded in Gmail boundaries. This’ll blow your mind.
If you know where to look in digital evidence, you’ll find time values hidden like Easter eggs.
E-mail must adhere to structural conventions to traverse the internet and be understood by different e-mail programs. One of these conventions is the use of a Content-Type declaration and setting of content boundaries, enabling systems to distinguish the message header region from the message body and attachment regions.
The next illustration is a snippet of simplified code from a forged Gmail message. To see the underlying code of a Gmail message, users can select “Show original” from the message options drop-down menu (i.e., the ‘three dots’).
The line partly outlined in red advises that the message will be “multipart/alternative,” indicating that there will be multiple versions of the content supplied; commonly a plain text version followed by an HTML version. To prevent confusion of the boundary designator with message text, a complex sequence of characters is generated to serve as the content boundary. The boundary is declared to be “00000000000063770305a4a90212” and delineates a transition from the header to the plain text version (shown) to the HTML version that follows (not shown).
Thus, a boundary’s sole raison d’être is to separate parts of an e-mail; but because a boundary must be unique to serve its purpose, programmers insure against collision with message text by integrating time data into the boundary text. Now, watch how we decode that time data.
Here’s our boundary, and I’ve highlighted fourteen hexadecimal characters in red:
Next, I’ve parsed the highlighted text into six- and eight-character strings, reversed their order and concatenated the strings to create a new hexadecimal number:
A decimal number is Base 10. A hexadecimal number is Base 16. They are merely different ways of notating numeric values. So, 05a4a902637703 is just a really big number. If we convert it to its decimal value, it becomes: 1,588,420,680,054,531. That’s 1 quadrillion, 588 trillion, 420 billion, 680 million, 54 thousand, 531. Like I said, a BIG number.
But, a big number…of what?
Here’s where it gets amazing (or borderline insane, depending on your point of view).
It’s the number of microseconds that have elapsed since January 1, 1970 (midnight UTC), not counting leap seconds. A microsecond is a millionth of a second, and 1/1/1970 is the “Epoch Date” for the Unix operating system. An Epoch Date is the date from which a computer measures system time. Some systems resolve the Unix timestamp to seconds (10-digits), milliseconds (13-digits) or microseconds (16-digits).
When you make that curious calculation, the resulting date proves to be Saturday, May 2, 2020 6:58:00.054 AM UTC-05:00 DST. That’s the genuine date and time the forged message was sent. It’s not magic; it’s just math.
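Here’s the whole decoding reduced to a few lines of Python, so you can follow (or check) the arithmetic; the boundary is the one from the forged message, and the slicing positions track the pattern described above:

```python
from datetime import datetime, timezone, timedelta

boundary = "00000000000063770305a4a90212"

# Per the pattern described above: skip the leading zeros, take a six-character
# and an eight-character hex string, swap their order and join them.
part1, part2 = boundary[12:18], boundary[18:26]      # '637703', '05a4a902'
microseconds = int(part2 + part1, 16)                # 1,588,420,680,054,531

sent_utc = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=microseconds)
print(microseconds)                                          # 1588420680054531
print(sent_utc)                                              # 2020-05-02 11:58:00.054531+00:00
print(sent_utc.astimezone(timezone(timedelta(hours=-5))))    # 2020-05-02 06:58:00.054531-05:00
```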
Had the timestamp been created by the Windows operating system, the number would signify the number of 100 nanosecond intervals between midnight (UTC) on January 1, 1601 and the precise time the message was sent.
Why January 1, 1601? Because that’s the “Epoch Date” for Microsoft Windows. Again, an Epoch Date is the date from which a computer measures system time. Unix and POSIX measure time in seconds from January 1, 1970. Apple used one-second intervals since January 1, 1904, and MS-DOS used seconds since January 1, 1980. Windows went with 1/1/1601 because, when the Windows operating system was being designed, we were in the first 400-year cycle of the Gregorian calendar (implemented in 1582 to replace the Julian calendar). Rounding up to the start of the first full 400-year cycle made the math cleaner.
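The Windows conversion is just as mechanical. Here’s a sketch; the FILETIME value is an illustration I worked out to correspond to the same instant decoded above, not a value drawn from the message:

```python
from datetime import datetime, timezone, timedelta

def filetime_to_datetime(filetime):
    """Convert a Windows FILETIME (100-nanosecond intervals since 1601-01-01 UTC)."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime // 10)

# Illustrative value corresponding to the moment decoded above.
print(filetime_to_datetime(132328942800545310))   # 2020-05-02 11:58:00.054531+00:00
```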
Timestamps are everywhere in e-mail, hiding in plain sight. You’ll find them in boundaries, message IDs, DKIM stamps and SMTP IDs. Each server handoff adds its own timestamp. It’s the rare e-mail forger who will find every embedded timestamp and correctly modify them all to conceal the forgery.
When e-mail is produced in its native and near-native forms, there’s more there than meets the eye in terms of the ability to generate reliable timelines and flush out forgeries and excised threads. Next time the e-mail you receive in discovery seems “off” and your opponent balks at giving you suspicious e-mail evidence in faithful electronic formats, ask yourself: What are they trying to hide?
The takeaway is this: Time is truth and timestamps are evidence in their own right. Isn’t it about time we stop letting opponents strip it away?
A federal court appointed me Special Master, tasked in part with searching the file slack space of a party’s computers and storage devices. The assignment prompted me to reconsider the value of this once-important forensic artifact.
Slack space is the area between the end of a stored file and the end of its concluding cluster: the difference between a file’s logical and physical size. It’s wasted space from the standpoint of the computer’s file system, but it has forensic significance by virtue of its potential to hold remnants of data previously stored there. Slack space is often confused with unallocated clusters or free space, terms describing areas of a drive not currently used for file storage (i.e., not allocated to a file) but which retain previously stored, deleted files.
A key distinction between unallocated clusters and slack space is that unallocated clusters can hold the complete contents of a deleted file whereas slack space cannot. Data recovered (“carved”) from unallocated clusters can be quite large—spanning thousands of clusters—where data recovered from a stored file’s slack space can never be larger than one cluster minus one byte. Crucially, unallocated clusters often retain a deleted file’s binary header signature serving to identify the file type and reveal the proper way to decode the data, whereas binary header signatures in slack space are typically overwritten.
A little more background on file storage may prove useful before I describe the dwindling value of slack space in forensics.
Electronic storage media are physically subdivided into millions, billions or trillions of sectors of fixed storage capacity. Historically, disk sectors on electromagnetic hard drives were 512 bytes in size. Today, sectors may be much larger (e.g., 4,096 bytes). A sector is the smallest physical storage unit on a disk drive, but not the smallest accessible storage unit. That distinction belongs to a larger unit called the cluster, a logical grouping of sectors and the smallest storage unit a computer can read from or write to. On Windows machines, clusters are 4,096 bytes (4kb) by default for drives up to 16 terabytes. So, when a computer stores or retrieves data, it must do so in four kilobyte clusters.
File storage entails allocation of enough whole clusters to hold a file. Thus, a 2kb file will only fill half a 4kb cluster–the balance being slack space. A 13kb file will tie up four clusters, although just a fraction of the final, fourth cluster is occupied by the file. The balance is slack space, and it could hold fragments of whatever was stored there before. Because it’s rare for files to be perfectly divisible by 4 kilobytes and many files stored are tiny, much drive space is lost to slack space. Using smaller clusters would mean less slack space, but any efficiencies gained would come at the cost of unwieldy file tracking and retrieval.
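The arithmetic is simple enough to sketch in Python using the 4kb default cluster size:

```python
import math

def slack_bytes(logical_size, cluster=4096):
    """Slack = whole clusters allocated (physical size) minus the file's logical size."""
    clusters = math.ceil(logical_size / cluster)
    return clusters * cluster - logical_size

print(slack_bytes(2 * 1024))    # 2048: a 2kb file wastes half its single cluster
print(slack_bytes(13 * 1024))   # 3072: a 13kb file ties up four clusters, leaving 3kb of slack
```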
So, slack space holds forensic artifacts and those artifacts tend to hang around a long time. Unallocated clusters may be called into service at any time and their legacy content overwritten. But data lodged in slack space endures until the file allocated to the cluster is deleted–on conventional “spinning” hard drives at any rate.
When I started studying computer forensics in the MS-DOS era, slack space loomed large as a source of forensic intelligence. Yet, apart from training exercises where something was always hidden in slack, I can’t recall a matter I’ve investigated this century which turned on evidence found in slack space. The potential is there, so when it makes sense to do it, examiners search slack using unique phrases unlikely to throw off countless false positives.
But how often does it make sense to search slack nowadays?
I’ve lately grappled with that question because it seems to me that the shopworn notions respecting slack space must be re-calibrated.
Keep in mind that slack space holds just a shard of data with its leading bytes overwritten. It may be overwritten minimally or overwritten extensively, but some part is obliterated, always. Too, slack space may hold the remnants of multiple deleted files as overlapping artifacts: files written, deleted, overwritten by new data, deleted again, then overwritten again (just less extensively so). Slack can be a real mess.
Fifteen years ago, when programs stored text in ASCII (i.e., encoded using the American Standard Code for Information Interchange or simply “plain text”), you could find intelligible snippets in slack space. But since 2007, when Microsoft changed the format of Office productivity files like Word, PowerPoint and Excel files to Zip-compressed XML formats, there’s been a sea change in how Office applications and other programs store text. Today, if a forensic examiner looks at a Microsoft Office file as it’s written on the media, the content is compressed. You won’t see any plain text. The file’s contents resemble encrypted data. The “PK” binary header signature identifying it as compressed content is gone, so how will you recognize zipped content? What’s more, the parts of the Zip file required to decompress the snippet have likely been obliterated, too. How do you decode fragments if you don’t know the file type or the encoding schema?
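If you want to see what I mean, a couple of lines of Python (filename hypothetical) will show it: an intact .docx announces itself with the Zip “PK” signature, but a fragment carved from slack typically won’t.

```python
# Peek at the first bytes of a modern Word file (hypothetical filename).
with open("example.docx", "rb") as f:
    header = f.read(4)

print(header)                # b'PK\x03\x04' -- the Zip local-file signature
print(header[:2] == b"PK")   # True for an intact file; a fragment carved from
                             # slack usually lacks the signature altogether
```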
The best answer I have is you throw common encodings against the slack and hope something matches up with the search terms. More and more, nothing matches, even when what you seek really is in the slack space. Searches fail because the data’s encoded and invisible to the search tool. I don’t know how searching slack stacks up against the odds of winning the lottery, but a lottery ticket is cheap; a forensic examiner’s time isn’t.
That’s just the software. Storage hardware has evolved, too. Drives are routinely encrypted, and some oddball encryption methods make it difficult or impossible to explore the contents of file slack. The ultimate nail in the coffin for slack space will be solid state storage devices and features, like wear leveling and TRIM, that routinely reposition data and promise to relegate slack space and unallocated clusters to the digital dung heap of history.
Taking a fresh look at file slack persuades me that it still belongs in a forensic examiner’s bag of tricks when it can be accomplished programmatically and with little associated cost. But, before an expert characterizes it as essential or a requesting party offers it as primary justification for an independent forensic examination, I’d urge the parties and the Court to weigh cost versus benefit; that is, to undertake a proportionality analysis in the argot of electronic discovery. Where searching slack space was once a go-to for forensic examination, it’s an also-ran now. Do it, when it’s an incidental feature of a thoughtfully composed examination protocol; but don’t bet the farm on finding the smoking gun because the old gray mare, she ain’t what she used to be! See? I never metaphor I didn’t like.
******************************
Postscript: A question came up elsewhere about solid state drive forensics. Here was my reply:
The paradigm-changing issue with SSD forensic analysis versus conventional magnetic hard drives is the relentless movement of data by wear leveling protocols and a fundamentally different data storage mechanism. Solid state cells have a finite life measured in the number of write-rewrite cycles.
To extend their useful life, solid state drives move data around to insure that all cells are written with roughly equal frequency. This is called “wear leveling,” and it works. A consequence of wear leveling is that unallocated cells are constantly being overwritten, so SSDs do not retain deleted data as electromagnetic drives do. Wear leveling (and the requisite remapping of data) is handled by an SSD drive’s onboard electronics and isn’t something users or the operating system control or access.
Another technology, an ATA command called TRIM, is controllable by the operating system and serves to optimize drive performance by disposing of the contents of storage cell groups called “pages” that are no longer in use. Oversimplified, it’s faster to write to an empty memory page than to initiate an erasure first; so, TRIM speeds the write process by clearing contents before they are needed, in contrast to an electromagnetic hard drive which overwrites clusters without need to clear contents beforehand.
The upshot is that resurrecting deleted files by identifying their binary file signatures and “carving” their remnant contents from unallocated clusters isn’t feasible on SSD media. Don’t confuse this with forensically-sound preservation and collection. You can still image a solid state drive, but you’re not going to get unallocated clusters. Too, you won’t be interfacing with the physical media when grabbing a bitstream image. Everything is mediated by the drive electronics.
******************************
Dear Reader, Sorry I’ve been remiss in posting here during the COVID crisis. I am healthy, happy and cherishing the peace and quiet of the pause, hunkered down in my circa-1880 double shotgun home in New Orleans, enjoying my own cooking far too much. Thanks to Zoom, I completed my Spring Digital Evidence class at the University of Texas School of Law, so now one day just bubbles into the next, and I’m left wondering, “Where did the day go?” Every event where I was scheduled to speak or teach cratered, with no face-to-face events sensibly in sight for 2020. One possible exception: I’ve just joined the faculty of the Tulane School of Law ten minutes upriver for the Fall semester, and plan to be back in Austin teaching in the Spring. But, who knows, right? Man plans and gods laugh.
We of a certain age may all be Zooming and distancing for many months. As one who’s bounced around the world peripatetically for decades, not being constantly on airplanes and in hotels is strange…and stress-relieving. While I miss family, friends and colleagues and mourn the suffering others are enduring, I’ve benefited from the reboot, ticking off household projects and kicking the tires on a less-driven day-to-day. It hasn’t hurt that it’s been the best two months of good weather I’ve ever seen, here or anywhere. The prospect of no world travel this summer–and no break from the soon-to-be balmy Big Easy heat–is disheartening, but small potatoes in the larger scheme of things.
Recently, I wrote on the monstrous cost of TIFF+ productions compared to the same data produced as native files. I’ve wasted years trying to expose the loss of utility and completeness caused by converting evidence to static formats. I should have recognized that no one cares about quality in e-discovery; they only care about cost. But I cannot let go of quality because one thing the Federal Rules make clear is that producing parties are not permitted to employ forms of production that significantly impair the searchability of electronically stored information (ESI).
In the “ordinary course of business,” none but litigators “ordinarily maintain” TIFF images as substitutes for native evidence. When requesting parties seek production in native forms, responding parties counter with costly static image formats by claiming they are “reasonably usable” alternatives. However, the drafters of the 2006 Rules amendments were explicit in their prohibition:
[T]he option to produce in a reasonably usable form does not mean that a responding party is free to convert electronically stored information from the form in which it is ordinarily maintained to a different form that makes it more difficult or burdensome for the requesting party to use the information efficiently in the litigation. If the responding party ordinarily maintains the information it is producing in a way that makes it searchable by electronic means, the information should not be produced in a form that removes or significantly degrades this feature.
FRCP Rule 34, Committee Notes on Rules – 2006 Amendment.
I contend that substituting a form that costs many times more to load and host counts as making the production more difficult and burdensome to use. But what is little realized or acknowledged is the havoc that so-called TIFF+ productions wreak on searchability, too. It boggles the mind, but when I share with opposing counsel what I’m about to relate below, they immediately retort, “That’s not true.” They deny the reality without checking its truth, without caring whether what they assert has a basis in fact. And I’m talking about lawyers claiming deep expertise in e-discovery. It’s disheartening, to say the least.
A little background: We all know that ESI is inherently electronically searchable. There are quibbles to that statement but please take it at face value for now. When parties convert evidence in native forms to static image forms like TIFF, the process strips away all electronic searchability. A monochrome screenshot replaces the source evidence. Since the Rules say you can’t remove or significantly degrade searchability, the responding party must act to restore a measure of searchability. They do this by extracting text from the native ESI and delivering it in a “load file” accompanying the page images. This is part of the “plus” when people speak of TIFF+ productions.
E-discovery vendors then seek to pair the page images with the extracted text in a manner that allows some text searchability. Vendors index the extracted text to speed search and map it to the page images, so that a search hit displays the page where the text appeared. This matters because where the text appears in the load file dictates what page will be displayed when the text is searched, and it determines whether features like proximity search and even predictive coding work as well as we have a right to expect. Upshot: the location and juxtaposition of extracted text in the load file matter significantly to accurate searchability. If you don’t accept that, you can stop reading.
Now, let’s consider the structure of modern electronic evidence. We could talk about formulae in spreadsheets or speaker notes in presentations, but those are not what we fight over when it comes to forms of production. Instead, I want to focus on Microsoft Word documents and those components of Word documents called Comments and Tracked Changes; particularly Comments because these aren’t “metadata” by any stretch. Comments are user-contributed content, typically communications between collaborators. Users see this content on demand and it’s highly contextual and positional because it is nearly always a comment on adjacent body text. It’s NOT the body text, and it’s not much use when it’s separated from the body text. Accordingly, Word displays comments as marginalia, giving it the power of place but not enmeshing it with the body text.
But what happens to these contextual comments when you extract the text of a Word document to a load file and then index the load files?
There are three ways I’ve seen vendors handle comments and all three significantly degrade searchability:
First, they suppress comments altogether and do not capture the text in the load files. This is content deletion. It’s like the content was never there, and you can’t find the text using any method of electronic search. Responding parties don’t disclose this deletion, nor is it grounded on any claim of privilege or right. Spoliation is just S.O.P.
Second, they merge the comments into the adjacent body text. This has the advantage of putting the text more-or-less on the same page where it appears in the source, but it also serves to frustrate proximity search and analytics. The injection of the comment text into the middle of a word combination or phrase causes searches for that combination or phrase to fail. For example, if your search was for ignition w/3 switch and a four-word comment comes between “ignition” and “switch,” the search fails (the sketch following the third scenario below illustrates the point).
Third, and frequently, vendors aggregate comments and dump them at the end of the load file with no clue as to the page or text they reference. No links. No pointers. Every search hitting on comment text takes you to the wrong page, devoid of context.
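As promised, here’s a toy Python sketch of how merged comment text defeats a proximity search. The body text and the comment are invented for illustration, and real review tools implement proximity operators far more elaborately than this; the point is the word-distance math, not the code:

```python
# Toy proximity check: are two terms within `distance` words of each other?
def within(text, a, b, distance):
    words = text.lower().split()
    positions_a = [i for i, w in enumerate(words) if w == a]
    positions_b = [i for i, w in enumerate(words) if w == b]
    return any(abs(i - j) <= distance for i in positions_a for j in positions_b)

body = "the ignition switch failed intermittently"
# The same passage after a four-word comment is merged into the body text:
merged = "the ignition [check torque spec here] switch failed intermittently"

print(within(body, "ignition", "switch", 3))    # True: hit in the original body text
print(within(merged, "ignition", "switch", 3))  # False: the injected comment pushes the terms apart
```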
Some of what I describe are challenges inherent to dealing with three-dimensional data using two-dimensional tools. Native applications deal with Comments, speaker notes and formulae three-dimensionally. We can reveal that data as needed, and it appears in exactly the way witnesses use it outside of litigation. But flattening native forms to static images and load files destroys that multidimensional capability. Vendors do what they can to add back functionality; but we should not pretend the results are anything more than a pale shadow of what’s possible when native forms are produced. I’d call it a tradeoff, but that implies requesting parties know what’s being denied them. How can requesting parties’ counsel know what’s happening when responding parties’ counsel haven’t a clue what their tools do, yet misrepresent the result?
But now you know. Check it out. Look at the extracted text files produced to accompany documents with comments and tracked changes. Ask questions. Push back. And if you’re a producing party’s counsel, fess up to the evidence vandalism you do. Defend it if you must, but stop denying it. You’re better than that.
Social Media Content (SMC) is a rich source of evidence. Photos and posts shed light on claims of disability and damages, establish malicious intent and support challenges to parental fitness–to say nothing of criminals who post selfies at crime scenes or holding stolen goods, drugs and weapons. SMC may expose propensity to violence, hate speech, racial animus, misogyny or mental instability (even at the highest levels of government). SMC is increasingly a medium for business messaging and the primary channel for cross-border communications. In short, SMC and messaging are heirs-apparent to e-mail in their importance to e-discovery.
Competence demands swift identification and preservation of SMC.
Screen shots of SMC are notoriously unreliable, tedious to collect and inherently unsearchable. Applications like X1 Social Discovery and service providers like Hanzo can help with SMC preservation; but frequently the task demands little technical savvy and no specialized tools. Major SMC sites offer straightforward ways users can access and download their content. Armed with a client’s login credentials, lawyers, too, can undertake the ministerial task of preserving SMC without greater risk of becoming a witness than if they’d photocopied paper records.
Collecting your Client’s SMC
Collecting SMC is a two-step process of requesting the data followed by downloading. Minutes to hours or longer may elapse between a request and download availability. Having your client handle collection weakens the chain of custody; so, instruct the client to forward download links to you or your designee for collection. Better yet, do it all yourself.
Obtain your client’s user ID and password for each account and written consent to collect. Instruct your client to change account passwords for your use, re-enabling customary passwords following collection. Clients may need to temporarily disable two-factor account security. Download data promptly, as downloads are available only briefly.
Collection Steps for Seven Social Media Sites
Facebook: After login, go to Settings>Your Facebook Information>Download Your Information. Select the data and date ranges to collect (e.g., Posts, Messages, Photos, Comments, Friends, etc.). Facebook will e-mail the account holder when the data is ready for download (from the Available Copies tab on the user’s Download Your Information page). Facebook also offers an Access Your Information link for review before download.
Next week is Georgetown Law Center’s sixteenth annual Advanced E-Discovery Institute. Sixteen years of a keen focus on e-discovery; what an impressive, improbable achievement! Admittedly, I’m biased by longtime membership on its advisory board and my sometime membership on its planning committees, but I regard the GTAEDI confab of practitioners and judges as the best e-discovery conference still standing. So, it troubles me how much of the e-discovery content of the Institute and other conferences is ceded to other topics, and that one topic in particular, privacy, is being pushed as the focus of the Institute going forward.
This is not a post about the Georgetown Institute, but about privacy, particularly whether our privacy fears are stoked and manipulated by companies and counsel as an opportunistic means to beat back discovery. I ask you: Is privacy a stalking horse for a corporate anti-discovery agenda?
Today, I published my primer on processing. It’s fifty-odd pages on a topic that’s warranted barely a handful of paragraphs anywhere else. I wrote it for the upcoming Georgetown Law Center Advanced E-Discovery Institute and most of the material is brand new, covering a stage of e-discovery–a “black box” stage–where a lot can go quietly wrong. Processing is something hardly anyone thinks about until it blows up.
Laying the foundation for a deep dive on processing required I include a crash course on the fundamentals of digitization and encoding. My students at the University of Texas and at the Georgetown Academy have had to study encoding for years because I see it as the best base on which to build competency on the technical side of e-discovery.
The research for the paper confirmed what I’d long suspected about our industry. Despite winsome wrappers, all the leading e-discovery tools are built on a handful of open source and commercial codebases, particularly for the crucial tasks of file identification and text extraction. Nothing evil in that, but it does make you think about cybersecurity and pricing. In the process of delving deeply into processing, I gained greater respect for the software architects, developers and coders who make it all work. It’s complicated, and there are countless ways to run off the rails. That the tools work as well as they do is an improbable achievement. Still, there are ingrained perils you need to know, and tradeoffs to be weighed.
Working from so little prior source material, I had to figure a lot out by guess and by gosh. I have no doubt I’ve misunderstood points and could have explained topics more clearly. Please don’t hesitate to weigh in to challenge or correct. Regular readers know I love to hear your thoughts and critiques.
I’ll be talking about processing in an ACEDS/Logikcull webcast tomorrow (Tuesday, November 5, 2019) at 1:00pm EST/10:00am PST. I expect it’s not too late to register.
The milestone of the title is that this is my 200th blog post, and it neatly coincides with my 200,000th unique visitor to the blog (actually 200,258, but who’s counting?). When I started blogging here on August 20, 2011, I honestly didn’t know if anyone would stop by. Two hundred thousand kind readers have rung the bell (and that’s excluding the many more spammers turned away). I hope something I wrote along the way gave you some insight or a chuckle. I’m intensely grateful for your attention.
By the way, if you’d like to come to the Georgetown Advanced E-Discovery Institute in Washington, D.C. on November 21-22, 2019, please use my speaker’s discount code to save $100.00. The discount code is BALL (all caps). Hope to see you!
We all need certainty in our lives; we need to trust that two and two is four today and will be tomorrow. But the more we learn about any subject, the more we’re exposed to the qualifiers and exceptions that belie perfect certainty. It’s a conundrum for me when someone writes about cryptographic hashing, the magical math that allows an infinite range of numbers to match to a finite complement of digital fingerprints. Trying to simplify matters, well-meaning authors say things about hashing that just aren’t so. Their mistakes are inconsequential for the most part—what they say is true enough–but it’s also misleading enough to warrant caveats useful in cross-examination.
I’m speaking of the following two assertions:
Hash values are unique; i.e., two different files never share a hash value.
Hash values are irreversible, i.e., you can’t deduce the original message using its hash value.
It’s October (already?!?!) and–YIKES–I haven’t posted for two weeks. I’m tapping away on a primer about e-discovery processing, a topic that’s received scant attention…ever. One could be forgiven for thinking the legal profession doesn’t care what happens to all that lovely data when it goes off to be processed! Yet, I know some readers share my passion for ESI and adore delving deeply into the depths of data processing. So, here are a few paragraphs pulled from my draft addressing the well-worn topic of hashing in e-discovery where I attempt a foolhardy tilt at the competence windmill and seek to explain how hashing works and what those nutty numbers mean. Be warned, me hearties, there be math ahead! It’s still a draft, so feel free to push back and all criticism (constructive/destructive/dismissive) warmly welcomed.
My students at the University of Texas School of Law and the Georgetown E-Discovery Training Academy spend considerable time learning that all ESI is just a bunch of numbers. They muddle through readings and exercises about Base2 (binary), Base10 (decimal), Base16 (hexadecimal) and Base64; as well as about the difference between single-byte encoding schemes (ASCII) and double-byte encoding schemes (Unicode). It may seem like a wonky walk in the weeds; but the time is well spent when the students snap to the crucial connection between numeric encoding and our ability to use math to cull, filter and cluster data. It’s a necessary precursor to their gaining Proustian “new eyes” for ESI.
Because ESI is just a bunch of numbers, we can use algorithms (mathematical formulas) to distill and compare those numbers. Every student of electronic discovery learns about cryptographic hash functions and their usefulness as tools to digitally fingerprint files in support of identification, authentication, exclusion and deduplication. When I teach law students about hashing, I tell them that hash functions are published, standard mathematical algorithms into which we input digital data of arbitrary size and the hash algorithm spits out a bit string (again, just a sequence of numbers) of fixed length called a “hash value.” A hash value corresponds so nearly uniquely to the digital data fed into the algorithm (termed “the message”) that the chance of two different messages sharing the same hash value (called a “hash collision”) is exceptionally remote. But because a collision is possible, we can’t say each hash value is truly “unique.”
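A few lines of Python (hashlib ships with the standard library) make the point about fixed length and sensitivity to change:

```python
import hashlib

# Two messages differing by a single character produce entirely different,
# fixed-length MD5 values.
m1 = b"The check is in the mail."
m2 = b"The check is in the mail!"

print(hashlib.md5(m1).hexdigest())   # 32 hexadecimal characters
print(hashlib.md5(m2).hexdigest())   # 32 hexadecimal characters, nothing alike
```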
Using hash algorithms, any volume of data—from the tiniest file to the contents of entire hard drives and beyond—can be almost uniquely expressed as an alphanumeric sequence; in the case of the MD5 hash function, distilled to a value written as 32 hexadecimal characters (0-9 and A-F). It’s hard to understand until you’ve figured out Base16; but, those 32 characters represent 340 trillion, trillion, trillion different possible values (2^128 or 16^32).