Online Cyber Law Training


The OnDemand version of SANS Institute's Legal 523 course, "Law of Data Security and Investigations," is popular with students in a hurry. The course is paired with the coveted GLEG certification.

Another reason some students prefer the OnDemand version is that it allows them to absorb the material in bite-sized chunks. You can listen for a few minutes, stop the audio, read the notes, think, and then continue.

EU's General Data Protection Regulation

SANS Institute Publishes White Paper by Benjamin Wright


Executive Summary

Adoption of the new General Data Protection Regulation (GDPR) is motivating organizations worldwide to improve existing technical controls for securing personal information. Organizations should be especially aware that the GDPR and other recent legal developments amplify the negative repercussions of a data security breach -- meaning organizations have increased incentives to avoid a breach.

Data security law in Europe continues to evolve. Enactment of the GDPR, which takes effect May 25, 2018, will impose formal, new data security requirements on organizations within the European Union, affecting many companies.

In parallel, in October 2016, France adopted the Digital Republic Bill. It dramatically increases fines on those organizations that fall short on security. For larger, multinational organizations, these types of new security regulations reflect three major trends:


  • Greater potential monetary penalties imposed by regulators
  • More rules for disclosure of data breaches
  • Increased exposure to diverse proceedings and investigations into whether data security is adequate

As a consequence, larger organizations should begin immediately to redouble the implementation of information security controls and technologies, which includes automated IT security monitoring, testing and measuring.

This paper provides recommendations and a checklist for technical compliance with the GDPR. These recommendations are equally imperative for avoiding a painful data security breach. Included are several case studies showing how companies can effectively use advanced technology for regulatory compliance and reduced breach risk.

Read the full paper titled Preparing for Compliance with the General Data Protection Regulation (GDPR): A Technology Guide for Security Practitioners.

How to Keep InfoSec Investigation Secret

Confidentiality Labels as Compliance with Professional Ethics

In the investigation of a data security incident, proper use of confidentiality labels can help a lawyer or other professional show they are complying with ethical requirements for confidentiality.

Consider the American Bar Association Model Rules of Professional Conduct, “Client-Lawyer Relationship, Rule 1.6 Confidentiality of Information.” Rule 1.6(c) reads, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

Official commentary to that Rule says: “When transmitting a communication that includes information relating to the representation of a client, the lawyer must take reasonable precautions to prevent the information from coming into the hands of unintended recipients. This duty, however, does not require that the lawyer use special security measures if the method of communication affords a reasonable expectation of privacy. …  Factors to be considered in determining the reasonableness of the lawyer's expectation of confidentiality include the sensitivity of the information and the extent to which the privacy of the communication is protected by law or by a confidentiality agreement.”

Not Every Security Incident is a Breach


So let’s consider how this Rule 1.6(c) might apply to a data security investigation. A data security investigation can be very sensitive for an enterprise. The investigation can require much work and analysis to determine the legal impact of a security incident. The analysis may conclude that the enterprise has suffered a data security “breach” for which notice must be given and for which the enterprise is legally liable. On the other hand, the analysis may conclude there was no “breach” and therefore no requirement for notice and no liability.

Accordingly, it is in the best interests of the enterprise that the investigation be kept legally confidential. The enterprise does not want its legal adversaries (such as regulators or class action plaintiff lawyers) to know anything about the investigation. If the adversaries possess details from the investigation, they might use those details to penalize, hassle or assert liability against the enterprise.

An attorney working for the enterprise can help to promote the confidentiality of the investigation -- and all information and communications related to it -- by ensuring that the information and communications are properly labeled as “Confidential attorney-client communication,” “Confidential attorney work product created in preparation for dispute” or something like that.
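For electronic documents and messages, this labeling can be automated so no investigation record leaves the team unstamped. Below is a minimal Python sketch of the idea; the label wording follows the examples above, while the function name and banner format are illustrative assumptions, not a prescribed method:

```python
CONFIDENTIALITY_LABEL = (
    "CONFIDENTIAL ATTORNEY-CLIENT COMMUNICATION / "
    "ATTORNEY WORK PRODUCT CREATED IN PREPARATION FOR DISPUTE"
)

def stamp_confidential(text: str) -> str:
    """Prepend and append the confidentiality label so it is visible
    to anyone who opens or prints the document."""
    banner = f"*** {CONFIDENTIALITY_LABEL} ***"
    return f"{banner}\n\n{text}\n\n{banner}"

notes = "Summary of incident analysis prepared for counsel."
print(stamp_confidential(notes))
```

A hook like this could run wherever investigation notes are saved or emailed, so the label appears consistently rather than depending on each author remembering to add it.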

In many cases, the law respects confidentiality associated with attorney communications and work. For this reason, non-lawyer professionals, such as infosec experts, are motivated to involve a lawyer in their investigations.

Labels like those above can be powerful to prevent the unintended or unauthorized disclosure of sensitive information. The labels warn anyone who sees the information (police, vigilantes, regulators, contractors, employees, whistleblowers and so on) that it is confidential and protected by law. The labels can also help to prevent disclosure of the information through legal process such as a subpoena, a police raid or discovery in a civil lawsuit.

Thus, the labels would be a crucial part of a lawyer’s reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information belonging to the lawyer’s enterprise client. Furthermore, the labels could be evidence of a reasonable expectation by the lawyer that the information will be treated as confidential by law.

In other words, proper use of these labels can help an infosec lawyer comply with ethics Rule 1.6(c) quoted above.



How to Make a Legal Recording of Mixed Reality

Evidence of Digital Interaction in Physical Space


This blog post teaches how to make an evidence-rich record of mixed reality. Mixed reality is like virtual or augmented reality, but doesn’t necessarily involve a headset. It shows information from both the real world and the cyber world (e.g., "nearables," wearable computers or SCADA devices). The information in a mixed reality environment can be much more complex than what a user perceives through a virtual reality headset.

The Internet of Things (IoT) Creates a Mixed Reality.


In the video below the mixed reality involves interaction among a Bluetooth location Tile, the apps on a smartphone and the cameras and microphone on the phone. As the video is made, the phone is physically moving from one place to another.
Internet of Things - Attached to pet cat
Details of the interaction are memorialized in a video that shows:
  • images and sounds from the real, physical world; 
  • activity happening on or through the phone; 
  • sounds and Bluetooth signals emitted from the tracking Tile (which is attached to a cat) when the Tile is prompted by an app on the phone;
  • distinctive visual change in the Tile app as the phone draws nearer to the physical location of the cat:
    1. circle displayed in the app changes from gray, to dotted green to solid green
    2. then the tile icon in the app swings back and forth to show the physical Tile is emitting sounds that can be heard through the air (You can actually hear the sound from the Tile as it is detected by the microphone on the smartphone.) 

The video includes narration from an eyewitness -- the “investigator” -- who explains what is happening in real time.

The Video Records Images from Both the Front Camera and the Back Camera on the Phone.

In parts of the video, the investigator appears on the left side. When the investigator appears, the investigator is being recorded with the front-facing camera on the phone. The right side of the video shows what the investigator sees and records with the back-facing camera on the phone.

The narrated explanation helps the observer – such as a judge or jury who watches the video in the future – understand and believe the evidence so that the observer can reach legal conclusions. (Examples of legal conclusions are that a party is guilty, or innocent, or liable, or trespassing or in compliance with a regulation.)

Notice that the sound of the narrator's voice changes as he walks with the phone. The phone's microphone picks up an echo as the narrator walks through a narrow space (a stairwell). Subtle details like this could have forensic significance when the video is analyzed later. They help to show whether the video is fake or authentic. 

A video record like this might be valuable in resolving:
  • a lawsuit
  • a tax audit 
  • a police investigation 
  • a child custody dispute
  • a dispute over assets in probate court
  • a response to an information security incident 
The video reliably captures facts as they appear at the time. It captures the facts in chronological sequence. The video is a version of the "screencast" evidence record I have explained elsewhere.


Mixed Reality Is Here Only Momentarily.


The facts captured in a video like this might be ephemeral. They might not be reproducible later. The digital world is in constant flux. For example, the Tile might behave a certain way at the time the video is made, but behave a different way an hour later due to an update to the software that runs the Tile or the app that controls it on the phone.

The investigator lends credibility to the video record by ending his narration with a legally binding statement of authentication: “I Ben Wright hereby sign and affirm this video as my official work.” He concludes by stating date and time with his voice and his lips. That date/time statement can be linked with related representations of the date and time, including the time displayed on the screen of the phone itself in the final moments of the video. The representations of date and time make it harder for a fraudster to counterfeit or manipulate the video later.
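A complementary technical measure, which is my own assumption and is not described in the post, is to fingerprint the finished video file and log the fingerprint alongside a timestamp. Any later alteration of the file changes the digest, making manipulation detectable. A minimal Python sketch using the standard library:

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_video(path: str) -> str:
    """Hash the finished video file so any later alteration
    of even one byte produces a different digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large video files do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest next to the spoken date/time statement, e.g.:
# print(datetime.now(timezone.utc).isoformat(), fingerprint_video("recording.mp4"))
```

The digest could be emailed to a third party or otherwise recorded promptly after the recording session, giving the court one more representation of date and time to cross-check.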

Trustworthiness Depends on the Investigator’s Credibility.


Obviously the investigator could fabricate this video, just as other eyewitnesses could fabricate their testimony about what they saw. But if the investigator has a good reputation, then the observer of the video (judge or jury) has more reason to believe what is depicted in the video.

The video can serve as evidence of what happened, even if the investigator is not available later to vouch for it.

Legal records like this video might be needed in court many years after their original creation. Therefore the multitude of visual and auditory details captured in the video, together with the voice authentication stated by the investigator, can be invaluable to a court that is trying to understand and evaluate what happened long ago.

Video is Efficient Tool for Professional Investigator.


Historically a professional investigator made records by snapping a few photographs and writing a text report. But writing a report takes a long time. This video captures a great deal of compelling evidence in a short time.

Notice that the end of the video records details about how the video was made. For example it shows the video was captured with the AZ Screen Recorder App. Details like that might help answer questions by a judge if the video were used in court.

Mixed Reality is Rapidly Growing More Common.

The modern world sports a spellbinding array of digital devices and sensors that can detect and transmit information useful to an investigator, such as a police officer. Mixed reality devices include the Fitbit fitness tracker.

The backup camera/sensor on a car begets a mixed reality. 

The driver sees a video image from the camera. But the driver experiences much more than just a video image.
Mixed Reality for Motorist
Superimposed on the image are colored guidelines. Plus the system, which includes multiple cameras and sensors, presents a simulated image of what the car and its surroundings look like from 20 feet above! (Cool)

The cameras/sensors may emit audio if the car approaches danger. Moreover, the sensors may give the driver haptic feedback through the driver's seat. All of this "reality" transpires in a physical space where the driver also directly hears, sees and feels what is happening in and around the car.

I invite your comments.


Related Blog Posts: 



How to Write Terms of Service for Virtual Reality

Legal contracts will pervade and regulate virtual reality. Just as end user license agreements (EULA) govern the use of software, legal terms of use will govern virtual reality "space." Some terms of use will be like No Trespassing signs. Others will be warnings or disclaimers of liability. 

Like the terms of use for web sites or mobile apps, some virtual reality terms of use will prohibit unauthorized activity (example: "You agree not to simulate sexual acts.")

Legal Notices Are Common.


Modern life is filled with legal notices and contracts. For example, as a visitor enters a physical building, it is common that the manager of the building will notify the visitor -- with a legible sign -- that guns are prohibited inside the building. Notices like this can be legally enforceable against a visitor: bring a gun into that building, and you can be ejected and perhaps arrested.

Property Rules

Legal Terms in VR Could Impose a Binding Contract.


In a virtual reality environment, the terms of use could cover myriad topics. They could confirm the intellectual property rights of the VR developer. Or they could restrict the legal power of a user to violate intellectual property (e.g., a work of art) by, for instance, forbidding the user from recording the property.
virtual reality contract


The terms could limit the power of a user to sue the developer if its data security is weak. (Example: "You give us your personally-identifiable information at your own risk. We cannot assure the security of your information, and we take no liability for any compromise of your information.")

Or ... the terms could impose legally-binding fees on a visitor. (Example: "If you enter this virtual room, you agree to pay VR Dev, Inc. $5.")

Enforcement of terms would often require gathering evidence of the terms and how they appeared in the virtual space. See the blog post about capturing legal evidence in virtual or augmented reality.


Legal Terms Might Be Enforced on Bots.


Google reported that its DeepMind bot is able to navigate a Doom-like 3D maze similar to how a physical robot can navigate through a physical building. Cool.

But when a bot visits a virtual space, legal terms -- written in natural language, not a robot language like robots.txt -- might be imposed on it, even though no human actually set eyes on the terms or interprets the legal meaning of the terms.
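The contrast matters: robots.txt is machine-readable, so a bot can honor it automatically, while a page of natural-language contract terms offers nothing a parser can reliably act on. A minimal sketch using Python's standard urllib.robotparser (the user-agent name and URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A crawler can mechanically honor robots.txt before fetching a page.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("ArchiveBot", "https://example.com/private/terms.html"))  # False
print(rp.can_fetch("ArchiveBot", "https://example.com/public/page.html"))    # True
```

No comparable standard exists for having software read, understand and accept natural-language legal terms, which is why disputes arise over whether a bot's operator is bound by them.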

Why do I say that?

Refer to the famous case Internet Archive v. Shell. Ms. Shell published a web site, and posted legal terms on that site. The terms said that any visitor to the site agreed by contract that if it made a copy of a page from the site it would pay Ms. Shell $5000 per page. Internet Archive engages in the public service of archiving the Web. Using an automated program (a bot), Internet Archive made copies from Ms. Shell's website. Then, Ms. Shell sued Internet Archive for breach of contract, seeking money! Internet Archive argued in court that it was impossible for it to enter a contract with her because the copying was performed by an automated program and no human had reviewed the terms posted on Ms. Shell's site.

However, on a first-blush review, the court sided with Ms. Shell. The court ruled she had sufficiently proven the possibility of breach of contract so as to force the lawsuit into deeper proceedings.

The risk of deeper proceedings meant greater cost to Internet Archive and the possibility of an embarrassing loss in court.

Then Internet Archive and Ms. Shell settled their dispute. Internet Archive apologized to her, and she accepted the apology. She dropped her demand for money from Internet Archive.

Ms. Shell achieved a victory and established the possibility that a bot could be legally bound to contract terms communicated by natural language.

Legal Notices Will Be Published as Audio.


When Time Magazine's Lisa Eadicicco tried Microsoft's HoloLens, what surprised her were the sounds. Through HoloLens, she saw 3D objects as she expected. But she did not anticipate that the audio would be so meaningful.

She could hear objects that were out of view! She reported that she could hear them moving, similar to how we can hear creatures moving in real space, even though we don't see them. In other words, a rich VR experience will communicate by way of audio as much as by video.

Accordingly, some legal notices and contracts will be posted as audio, and/or they will attract attention by audio. For instance, as a VR explorer enters a landscape, she may hear a certain tone to indicate that legal terms apply to that landscape and she can read them if she so elects.


Notice of a Contract Might Be Given By Haptic Vibration.


Instead of audio, however, legal notices might bring attention to themselves through haptic feedback. For instance, a little vibration on the left side of a headset might indicate that

  • a legal notice is present,
  • the legal notice is binding, and
  • the user can access the notice (similar to clicking "Legal Terms" link at bottom of web page) if the user so desires.
I am interested to hear comments on this topic.


See also:

  1. How to make a legal recording of a "mixed reality" experience.
  2. Legal measures brand and property owners may take to regulate augmented reality

How to Write Information Security Policy

In the 5-day SANS Institute course called "Legal 523," Law of Data Security and Investigations, I teach these general tips for how to write infosec policy for an organization. These tips are equally applicable to responding to a cyber security questionnaire from a regulator, a cyber insurer or a corporate customer.

1. The organization is wise to have some kind of written Risk Assessment. For a less-complex organization, the Risk Assessment need not be very long, but a Risk Assessment shows the organization is evaluating infosec risk (such as risk of breach of credit card data) and setting priorities based on that risk.

2. The organization is wise to identify a high officer as having responsibility for overseeing privacy and data security.

3. As I explain in the course, I like this statement as an accurate, overarching rule of infosec policy: "Company strives to maintain a reasonable, continuous process for implementing, reviewing, improving and documenting security and privacy in information technology. This process places more emphasis on the never-ending professional efforts of Company's IT staff than on paperwork, recognizing perfection is impossible." I like making clear in all policies that the quoted language is the ultimate policy, and everything else is subordinate to that quoted language.

4. As I teach in the course, I am wary of any absolute statements in policy. When an organization says that the organization "will" or "must" or "shall" do anything in IT, the organization is setting itself up for potential failure. No organization can always do any particular IT thing. Therefore, I prefer using words like "the organization strives …" or "the organization aspires …". And of course, if an organization says that it strives or aspires to do some thing, then the organization should in fact work hard to do that thing.

5. An organization can responsibly require staff to do certain things (assuming those things are in fact achievable). For instance, an organization can require staff to maintain passwords that meet certain characteristics. (Example: "Each staff member must have a password that is no shorter than 12 characters.")

6. In my experience, the bigger problem is not whether an organization fails to cover particular topic X or topic Y in written policy. Instead, the problem is that the organization writes too many policies, which are too long, too hard to read, and too prescriptive and are disconnected from the reality of the fluid, dynamic challenges of modern infosec. The best standard is nimble, never-ending "professional attention" by the infosec team rather than satisfaction of a checklist covering particular topics (firewall, anti-virus, intrusion detection etc.).

7. Published "privacy policies" need to be carefully written so as not to promise privacy or security that is unrealistic.
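The kind of achievable requirement described in item 5 can be verified mechanically. Below is a minimal Python sketch; the 12-character minimum comes from the example quoted above, while the letter-and-digit checks are illustrative assumptions, not part of the quoted policy:

```python
import re

MIN_LENGTH = 12  # from the example policy statement above

def password_meets_policy(password: str) -> bool:
    """Check a password against the required characteristics."""
    if len(password) < MIN_LENGTH:
        return False
    # Illustrative extra checks (assumptions, not in the quoted policy):
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    return has_letter and has_digit

print(password_meets_policy("short1"))           # False: under 12 characters
print(password_meets_policy("longenoughpass1"))  # True
```

A requirement like this is a good candidate for a "must" statement precisely because software can enforce it at the moment a password is set, unlike aspirational policy language.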

The foregoing ideas are applicable generically. An organization subject to particular laws or threats may need to behave differently.

I welcome comments. I know some smart people will disagree with me on some of the ideas above.

-Benjamin Wright

How to Record Augmented Reality Legal Evidence

Audits and Official Inspections | Virtual Reality


Digital evidence can be faked. One way to enhance the reliability of digital evidence is to have a responsible person attest to its creation and authenticity.

Real-time narration bears witness to the truth.


This video demonstrates the recording of evidence from augmented reality.

 

The video records “reality,” which is the footage captured with the back camera on a smartphone as the inspector walks. The reality is “augmented” with information that is superimposed over the footage. Here the augmenting information includes compass and geolocation data that change as the inspector walks.



The video could constitute legal or audit evidence showing precisely what happened as the inspector moved about a certain parcel of land. The evidence might be used in a court of law or other official proceeding, or it might be used to support tax or financial statements. 

The video might show, for example, that the inspector encountered a "no trespassing" sign in the augmented environment.

It might show he accepted or rejected legal terms and conditions (like an end-user license agreement or EULA).

Alternatively, it might be used to show how the compass app functioned (or malfunctioned) or used intellectual property such as trademarks or copyrighted images.

Legal affidavit makes record more credible.


The lower left-hand corner of the video displays real-time footage from the phone’s front camera. It shows the inspector narrating the record, explaining what is happening step-by-step. The video also records audio of his voice as he talks and walks.

The inspector takes these measures to authenticate the video:
  • shows his face with his moving lips as he narrates,
  • identifies himself,
  • identifies the technology he is using,
  • describes the data as it appears on the screen of his phone,
  • closes by formally signing the video with these words recorded in both the audio and the small video window on the lower left corner: “I Ben Wright hereby sign and affirm this record as my official work.”,
  • vocalizes the date and time.

In effect the audio and video of the inspector constitute a legal affidavit confirming the augmented reality record. The investigator is placing his professional reputation behind the evidence depicted in the video.

Something similar could be done with a record of virtual reality or other immersive environment.


Augmented reality can entail more than audio and visual feedback.


Augmented reality could provide haptic feedback. So for example as the inspector walks, his smartphone could vibrate. The visual video record might not capture this vibration. However, the inspector could describe it in his vocal narration of events.

A pattern of ominous vibrations might signal danger or no trespassing. A calm vibration might signal approval or "thank you".

Augmented and virtual reality could (someday) even provide smell and taste feedback, which the inspector could describe vocally in a record like the video above.


More on this topic


For more analysis of these ideas, please see: Attestation of record captured from website

See related ideas on legal records made by robots and cyborgs and how to record legal evidence from mixed reality.

I would be pleased to hear comments.

-Benjamin Wright

Active Defense for the Internet of Things

Summary: Attackers will hack the Internet of Things. Then defenders will invoke "active defense." To support unexpected and unconventional active defense, defenders can post legal terms and warnings.

Today, a hot topic is hacking -- breaking into -- the Internet of Things.

The Internet of Things includes myriad little devices -- like smart Nest thermostats -- that are connected to the net via channels like Wi-Fi and Bluetooth.

At SANS Institute's Network Security 2015 conference, experts demonstrated how to manipulate things remotely, in ways that are not intended by the designers of the things. Experts hacked into a flying drone, a wireless teddy bear and a doll.




Active Defense to the Rescue?

But if attackers will hack into "things," then defenders will use so-called Active Defense to defend the things.

SANS Instructor John Strand, for example, teaches a whole array of techniques for tricking or annoying attackers or for collecting threat intelligence from them.

One technique is Kippo, a fake SSH server that captures the attacker's commands on his local machine, even after the attacker thinks he has logged out of the SSH server. Dick Dastardly would be proud.

Another tool Strand teaches is a spider trap, WebLabyrinth. It serves up to an attacker an endless supply of junk data that could crash the attacker's web crawler software and possibly even the hard drive that supports the web crawler. What a surprise to the attacker who thought she was just hacking into a toy!
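The spider-trap idea is simple to sketch. The toy Python example below is not WebLabyrinth itself, just an illustration of the concept under my own assumptions: every page a crawler fetches contains links to more randomly named pages that do not exist until requested, so a naive crawler queues junk forever.

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_trap_page(n_links: int = 50) -> str:
    """Build a page of links to randomly named pages. A naive crawler
    follows them forever, because every page spawns more links."""
    links = "".join(
        f'<a href="/{random.getrandbits(64):016x}.html">page</a> '
        for _ in range(n_links)
    )
    return f"<html><body>{links}</body></html>"

class SpiderTrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = generate_trap_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the trap on a lab machine (blocks forever):
# HTTPServer(("127.0.0.1", 8080), SpiderTrapHandler).serve_forever()
```

Real tools add refinements such as throttling and alerting, but even this sketch shows why the technique is cheap for the defender and expensive for the attacker's crawler.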

Active Defense Law


What are the legal implications of Active Defense techniques? Generally speaking, a good active defender would have legal justification for thwarting and snooping on an attacker.

But Active Defense is an evolving, loosely-defined style of cyberdefense. It might embrace a zany repertoire of tricks, spoofs and unconventional maneuvers.

To reinforce legal justification, an Active Defender might post a legal notice that says the attacker consents to being tricked or tracked.

So for example, a wireless teddy bear might post a statement like this:

“Warning. No trespassing. If you hack this device, you consent to us deceiving you, tracking you and taking other unconventional steps to stop you and prosecute you to the fullest extent of the law.”

According to SANS instructor Josh Wright, this statement might be published "in the mobile application or the web UI of the device, using a modal dialog or other splash/landing page." It might be published many different ways. The statement needs to be accessible to the attacker, though not necessarily screaming in his face.

Posted Warnings Affect the Legal Interpretation of an Activity.


My point is that the publication of warnings and statements of legal consent can help to confirm the legal justification for Active Defense of lots of things connected to the Internet, including drones, robots, teddy bears and creepy dolls.

Furthermore, such statements can help to confirm that the professionals who execute or give advice about Active Defense are behaving ethically.

Compare my discussion of Offensive Countermeasures that warn a trespasser away from physical danger.

What do you think?

==
Attorney Benjamin Wright teaches the law of data security and investigations at the SANS Institute.
==

Post Script. At SANS Institute's Network Security 2015 conference, my fellow instructors were handing out coveted Hack the Internet of Things badges. You should have been there.
 

A Standard of Professional Attention for Data Security

Better than a Checklist of Minimum Requirements


By what legal standard should the holder of PII be held? PII means personally identifiable information like social security numbers and medical information.

I argue the standard should be this: A data holder must have an on-going process for devoting professional attention to security.

Under this standard, a sizable data holder like a hospital or a retail chain deploys a team of professionals to work all the time, every day. Any legal review of the data holder is an enormous amount of work . . . an utterly massive amount of work. Under this standard courts, insurers or regulatory authorities must undertake an exhausting analysis to conclude whether a data holder met the standard.

“Minimum Technical Requirements” Is a Common But Flawed Standard.


But the professional attention standard that I advocate is not universally acknowledged by authorities.

Instead, a commonly-articulated standard is that the data holder must achieve some “minimum requirements.” Those minimum requirements amount to a prescriptive checklist of specific technical measures the data holder must take. The authority promoting the minimum requirements argues that each and every requirement is easy to do, so failure to do any one of them merits some kind of penalty.

Here are two examples of a legal authority arguing that a data holder failed to meet minimum, easy requirements for data security:

One: Cyber-insurer Denies Coverage Because Hospital Failed to Do Everything on Minimum Checklist. 


In Columbia Casualty Company vs. Cottage Health System a hospital had paid for cyber insurance. Then a breach happened. The insurer sued the hospital, seeking to deny coverage because – in good part – the hospital failed to satisfy some specific minimum requirements like installing patches on servers.

Two: FTC Says Medical Laboratory Violated Law Because It Missed Some Specific Checklist Points. 


The Federal Trade Commission is locked in an epic struggle against the victim of a cyber attack, LabMD. In this proceeding FTC’s lawyers maintain that LabMD violated data security law because LabMD failed to implement specific low-cost checklist items, such as adoption of a written security policy (which is different from an unwritten policy), formal training of employees, destruction of data on people for whom no healthcare was performed, and updating of the operating system.

See Footnotes 5-14 and accompanying text, Complaint Counsel’s Opposition to Respondent’s Motion to Dismiss. Public Document Number 9357, filed May 6, 2015.

It is important to observe that FTC’s lawyers give no credit to LabMD for what it did right; LabMD did in fact have a substantial, on-going InfoSec program. But FTC’s lawyers simplistically say: You missed some specific technical points in our checklist; therefore, you violated the law. No deeper analysis is necessary. [See update below.]

The Minimum Requirements Checklist Does Not Align with Reality.


The minimum requirements approach is easy for an authority like FTC to enforce. An audit will always find that a data holder did not meet some specific minimum requirement. That is reality. So any time the FTC looks, it will find that the data holder failed to meet this requirement or that requirement, even if the data holder maintained a substantial, professional, good faith InfoSec process.

But the minimum requirements approach is ineffective.

Every day, major data breaches happen. The reason is that data security is astonishingly hard to achieve in a functioning organization. As I write this post today, the big breach in the news is US Office of Personnel Management. Breaches are routine. Breaches are normal.

According to InfoSec pundit Bruce Schneier:

“In general, it is far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s a statement about how computer and network security work today. A former NSA deputy director recently said [link omitted] that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game. In other words, it’s all offense and no defense. … In this kind of environment we simply have to assume that even our classified networks have been penetrated.”

In practice, achieving all of the minimum, low-cost requirements – 24 hours a day, 365 days a year -- is exceedingly hard to do. Each little requirement viewed in isolation might be “low cost,” but collectively they are not low cost. More importantly, striving for minimum requirements is not the most effective approach to security. As a multitude of institutions have proven, the data holder can invest great resources in security and still be breached.

InfoSec is a fierce competition, and you might not win that competition even if you work hard at it. Like a rugby game, security invariably involves tradeoffs, judgment calls and good faith mistakes.
[Image: Cyber defense as competition]
Even “easy” measures might not make sense on account of such things as compensating controls, prioritization of attention, rapidly-changing threats and technology, disruption caused by “patches” or the operational needs of the data holder.

The Better Standard Is Professional Attention.


So the better standard is not that the data holder meet specific minimum requirements on a prescriptive checklist. The better standard is that the data holder maintain a professional program to attend to security.

To understand that standard, let’s look at an example. A hospital (Massachusetts Eye and Ear Infirmary) lost a laptop containing patient data. The Department of Health and Human Services investigated. HHS concluded that the hospital violated HIPAA data security requirements and imposed a $1.5 million fine.

But the analysis by HHS was telling. HHS emphasized the violation and fine were not about a specific security measure, i.e., encryption on a laptop. HHS did not say, "Encryption is easy. You did not encrypt. Therefore you broke the law."

Instead, said HHS, the violation was that the hospital failed over time to maintain an effective, on-going process for evaluating the security of portable devices and responding to that evaluation. See Resolution Agreement September 13, 2012.

Perfection in Information Security Will Never Be Achieved.


If data holders like hospitals must achieve perfect minimum data security – if they must always meet all the “low-cost” measures that can be dreamed up -- then they should cease operating. They will never get to legal compliance, and they will owe infinite fines and infinite compensation to victims like patients. That outcome is absurd.

A better approach is to motivate data holders to maintain a process, a responsible on-going program. It is like motivating a sports team to train rigorously and play its heart out on the field.

That approach includes recognizing that oftentimes organizations with good programs will be breached. Organizations with good programs should be rewarded for having the programs. They should be spared penalty when a breach happens.

Data holders, like sports teams, should be cheered for playing hard, even when they lose.

This topic keeps me humble. I'd be pleased to hear comments.

--

Disclosure: Mr. Wright has performed work for LabMD.

Update on LabMD: An Administrative Law Judge ruled against the FTC and the standard of liability it was advancing.

eDiscovery: Opportunities for Creative Thinking by IT Professionals

Deep knowledge of technology is critical to winning modern lawsuits. When an enterprise is in litigation, the legal team needs advice and ideas from IT staff and other forensic experts.

Discovery of Records Resolves Lawsuits

Consider in particular the discovery phase of a commercial lawsuit.  The lawyers representing an enterprise wish to request, through the rules of discovery in litigation, that the adversary turn over records that are relevant to the lawsuit. The adversary’s records can help to resolve the lawsuit.

Fishing Expedition Not Tolerated

But under the rules, the lawyers must have some reason to believe that specific kinds of records exist in order to ask for them. The lawyers can’t simply ask that the adversary rummage through all of its digital stuff – all email, text messages, files, folders, images, metadata, tapes, hard drives, backups, cloud-computing accounts and on and on -- and turn over “all relevant records.” Such a request would be an open-ended fishing expedition, far too broad to be enforceable, and the court will not tolerate it.

So the lawyers face a chicken-and-egg paradox. They want the adversary’s records, and they are entitled to get some of those records. But if they don’t know which specific kinds of records the adversary might have, then they don’t possess the technical knowledge necessary to frame a request for them.

The Internet of Things Is an eDiscovery Bonanza


Enter the Internet of Things.
[Image: Evidence from small connected devices]
New technology – like smartphones, smart-watches and smart-grid power meters – begets prodigious quantities of heretofore unimaginable records. The records can show, for instance, who was at a certain place at a certain time or when a particular event occurred in a work room. The technology changes and advances constantly. Many new and surprising kinds of records – records that could be very impactful in a lawsuit – emerge every day.

A Demonstration from Investigative Journalism

Here’s an example of how new technology breeds surprisingly influential new records and evidence. News media investigated the spending habits of former Congressman Aaron Schock. Congressman Schock relished using social media to tell the world what he was doing all the time. But unbeknownst to him, he was telegraphing little clues – little records – about himself that would prove to be embarrassing.

Schock published Instagram photos that included time and geolocation data.
[Image: Geographic location on photograph]
The ever-watchful Associated Press matched this data with his official (publicly available) expense reports. The AP deduced, for instance, that he illicitly rented a private jet, at taxpayer expense, for his transportation connected with a particular fundraising event in Peoria, Illinois. Ouch.

The AP published its analysis, concluding that Schock was abusing his travel expense budget. This and similar revelations contributed to Schock’s resignation.
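The kind of matching the AP performed is mechanical once the metadata is in hand. As a minimal sketch (using hypothetical coordinates, not Schock’s actual data), here is how the GPS rationals stored in a photo’s EXIF data convert to the decimal degrees an investigator would compare against other records:

```python
# Sketch: converting EXIF-style GPS data to decimal degrees, the form
# an investigator would use to match a photo's location against other
# records. The sample values below are hypothetical.

def dms_to_decimal(dms, ref):
    """Convert EXIF GPS (degrees, minutes, seconds) rational pairs
    to signed decimal degrees."""
    degrees, minutes, seconds = (num / den for num, den in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    # South latitudes and west longitudes are negative.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF GPS rationals: 40 deg 41' 37.44" N, 89 deg 35' 21.12" W
lat = dms_to_decimal(((40, 1), (41, 1), (3744, 100)), "N")
lon = dms_to_decimal(((89, 1), (35, 1), (2112, 100)), "W")
print(round(lat, 4), round(lon, 4))
```

In practice a library such as Pillow or exifread would supply the rational pairs from the image file; the conversion above is the part that turns them into coordinates a spreadsheet or map can use.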

Now Let’s Apply that Example to Litigation

Just as digital details like geolocation data can help the news media scrutinize spending by a politician, they can be decisive in a commercial lawsuit. But often the lawyers handling a lawsuit need help from people with technical expertise. Lawyers may not realize that, for example, if a video is stored in SharePoint at an adversary enterprise, then SharePoint may store reliable metadata about the date of the video and the dates of each revision to that video.

Very often, under the rules of discovery, the lawyer’s request for something like SharePoint metadata must be predicated on more than a mere guess that “some kind of metadata somewhere exists with respect to the video in question.” In their eDiscovery request for records from the adversary, the lawyers need to refer to some empirical evidence that SharePoint metadata would be relevant to the case at hand.

That’s precisely where an alert IT staffer can add value. If the staffer understands the details of the case, he or she may be able to divine that the adversary was using SharePoint to store a video. Further, the staffer might know enough about SharePoint (or be able to learn through quick research) to advise the lawyers to target SharePoint metadata in their eDiscovery request. That kind of advice can make or break a case!
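To make the SharePoint example concrete, here is a minimal sketch of pulling revision dates out of a file’s version-history payload. The endpoint and JSON field names shown are assumptions modeled on SharePoint’s REST API in its verbose OData format, not verified against any particular tenant; an IT staffer would confirm the exact shape before advising the legal team:

```python
import json

# Sketch: extracting revision dates from a SharePoint version-history
# response. The URL and JSON shape here are assumptions modeled on
# SharePoint's REST API (OData "verbose" format); verify field names
# against the actual system before relying on them.

# e.g. GET https://host/_api/web/GetFileByServerRelativeUrl('/docs/clip.mp4')/Versions
sample_response = json.dumps({
    "d": {"results": [
        {"VersionLabel": "1.0", "Created": "2015-03-17T09:14:00Z"},
        {"VersionLabel": "2.0", "Created": "2015-04-02T16:45:00Z"},
    ]}
})

def revision_dates(raw):
    """Return (version label, created timestamp) pairs from a
    version-history payload."""
    versions = json.loads(raw)["d"]["results"]
    return [(v["VersionLabel"], v["Created"]) for v in versions]

for label, created in revision_dates(sample_response):
    print(label, created)
```

Even a rough sketch like this helps the lawyers frame a discovery request that names the specific metadata fields they want produced, rather than a vague demand for “any metadata.”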

IT Experts: You Should Be Inspired and Empowered  

A person with technical knowledge should be inspired to be creative … and to think outside his or her normal role … to help the legal team discern and articulate that the adversary possesses unconventional records that should be produced.

By Benjamin Wright