Eighth Circuit Court Of Appeals Rejects Approval Of Settlement In Target Data Breach Class Action – On February 1, 2017, the U.S. Court of Appeals for the Eighth Circuit rejected the lower court’s approval of a settlement in the Target Corporation data breach class action lawsuit filed on behalf of over 100 million consumers allegedly affected by the 2013 data breach. In 2014, the consumer-plaintiffs filed the lawsuit in the U.S. District Court for the District of Minnesota. In November 2015, the District Court ordered final approval of the settlement, certifying a class of “[a]ll persons in the United States whose credit or debit card information and/or whose personal information was compromised as a result of the [Target] data breach.” The settlement required Target to (1) create a $10 million settlement fund, (2) pay up to $6.75 million in class counsel fees, and (3) make specific improvements to its data security program.

The settlement faced multiple objections from class members. While class members Leif Olson and Jim Sciaroni each objected to alleged “inadequate compensation and excessive attorneys’ fees,” Olson additionally objected to the certification of the class. Olson, unlike the named plaintiffs in the lawsuit, did not incur any expense or cost from the breach and thus was ineligible to receive any monetary compensation, yet was still required to waive his rights to bring future claims against Target resulting from the breach. He therefore asserted that “zero-recovery” plaintiffs like himself should be treated as a separate subclass with independent representation. He argued that the District Court “did not properly analyze Rule 23(a) before concluding that the consumer-plaintiffs adequately represent the class,” and thus that the final order should be remanded.

The Eighth Circuit reversed the District Court’s approval. The appellate court agreed with Olson that the District Court did not conduct sufficient analysis to certify the class, holding that “[t]he lack of legal analysis in both the preliminary and final orders suggests that class certification was the product of summary conclusion rather than rigor.” The Eighth Circuit noted that “Olson’s objection raises important concerns for the district court to evaluate upon remand,” such as “whether an intraclass conflict exists when class members who cannot claim money from a settlement fund are represented by class members who can.” While the Eighth Circuit stated that it took no position on the propriety of class certification, it concluded that the District Court “has not conducted a meaningful analysis of class certification,” and remanded the class certification issue for further consideration. The District Court now has 120 days to act on the Eighth Circuit’s instructions and provide a revised class certification analysis. Reporter, Elizabeth E. Owerbach, Washington, DC, +1 202 626 9223, eowerbach@kslaw.com.

House Passes Bill To Prevent Insider Threats At DHS – On January 31, 2017, the U.S. House of Representatives passed by voice vote a bill that would amend the Homeland Security Act of 2002 to include measures aimed at identifying and preventing insider threats at the Department of Homeland Security (“DHS”) that could result in “espionage, terrorism, [or] the unauthorized disclosure of classified national security information.” The bill, titled the “Department of Homeland Security Insider Threat and Mitigation Act of 2017” (H.R. 666),
calls for the establishment of an “Insider Threat Program” at DHS to (1) train and educate DHS personnel to “identify, prevent, mitigate, and respond to insider threat risks;” (2) provide investigative support when a potential insider threat is identified; and (3) implement various risk mitigation activities. The bill broadly defines “insider” to mean “any person who has access to classified national security information,” including DHS employees, members of the Armed Forces assigned to DHS, and consultants and contractors. The bill would also require biennial reports to Congress on the status of the Program.

Representative Peter King (R-N.Y.), the bill’s sponsor, reported that DHS has over 115,000 employees with access to classified information, and that the unauthorized disclosure of such information “represent[s] a significant threat to national security.” King also cited the “recent high-profile cases of government employees leaking classified information,” which he said “have caused drastic damage to U.S. national security and diplomacy.” King specifically pointed to Edward Snowden and Chelsea Manning as examples of insider threats, saying that they and others “were able to conduct their traitorous work undetected because the government had at one time vetted and granted them access to secure facilities and information systems.” Representative Bennie Thompson (D-Miss.) also spoke in favor of the bill, but emphasized that DHS should engage with Congress on the details of the Insider Threat Program prior to its implementation. In particular, Thompson asked for attention to the protections that would be available to individuals within the Program’s oversight, who Thompson said would be “subjected to ongoing automated credit, criminal, and social media monitoring.” While the text of the bill does not specify what methods the Program would employ to track or supervise individuals with access to classified information, it does generally require the development of “workplace monitoring technologies.” The bill has been referred to the Senate’s Committee on Homeland Security and Governmental Affairs. A prior version of the bill passed the House in November 2015, but failed to become law as a result of what Representative King called “last-minute scheduling issues with the Senate.” Reporter, Robert D. Griest, Atlanta, GA, +1 404 572 2824, rgriest@kslaw.com.

National Automobile Dealers Association And The Future Of Privacy Forum Release Vehicle Privacy Guide For Consumers – On January 26, 2017, the National Automobile Dealers Association (“NADA”) and the Future of Privacy Forum (“FPF”) jointly released a consumer guide titled “Personal Data in Your Car” (the “Guide”). According to the NADA press release, the eight-page Guide “will help consumers understand the kind of personal information collected by the latest generation of vehicles, which use data to further safety, infotainment, and customer experience.” FPF CEO Jules Polonetsky stated: “[T]he release of this Guide is a critical step in communicating to consumers the importance of privacy in the connected car, as well as the benefits that car data can provide.” The Guide describes the array of equipment and features that rely on the collection and use of data about consumers and their vehicles to “support safety, efficient performance, convenience, and entertainment,” and makes recommendations to consumers for safeguarding their personal information when selling their vehicles (or returning rental vehicles).
Consumer data collected by vehicles, often automatically, includes the following:

Event Data Recorders (“EDRs”): Installed in over 90% of vehicles, EDRs record technical information about a vehicle’s operation in the seconds before and after a crash. Accessing EDR information requires physical access to the vehicle as well as a specific EDR reader tool, in addition to meeting any consent requirements imposed under the laws of a given state.

On-Board Diagnostic Information: All vehicles manufactured after 1996 are legally required to have an On-Board Diagnostic port, or “OBD-II.” The information available through the OBD-II port can be retrieved by physically inserting a compatible device into the port, which enables service technicians to measure emissions, diagnose performance issues, or repair the vehicle. Accessible information may include driver behavioral information (such as speed and use of brakes) as well as geolocation data. (A brief illustration of this kind of access appears after this list.)

In-Cabin Information: Many modern vehicles contain microphones, cameras, and other devices in the vehicle cabin that may be used to record information about vehicle occupants.
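The Guide does not reference any particular tool, but as an illustration of the kind of OBD-II access described above, the open-source python-OBD library can read standardized diagnostic parameters once an ELM327-style adapter is plugged into the port. The sketch below is a minimal example under those assumptions; the parameters actually available vary by vehicle.

    import obd  # open-source python-OBD library

    # Connect to an ELM327-compatible adapter plugged into the OBD-II port;
    # obd.OBD() auto-detects the adapter on the available serial/USB ports.
    connection = obd.OBD()

    # Query a few standard parameters; availability varies by vehicle.
    for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.COOLANT_TEMP):
        response = connection.query(command)
        if not response.is_null():
            print(command.name, response.value)  # e.g., "SPEED 42 kph"

Note that this reads only standardized diagnostic values; the richer behavioral and geolocation data the Guide mentions may require manufacturer-specific tools.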
The Guide encourages consumers to review their vehicle’s privacy policies in order to understand what data is obtained by the vehicle and how that data is used. These privacy policies disclose the ways a consumer’s data is gathered, used, disclosed, and managed, and can be found in the vehicle purchase agreement, the vehicle’s user manual, and in the interface of any apps or devices used to connect to the vehicle. In addition, the Guide makes the following recommendations to consumers when selling their vehicles (which may also apply to returning rental vehicles):

- Delete any phone contact/address books stored on the vehicle.
- Reset/delete any car applications that contain personal information.
- Delete the data on the vehicle’s hard drive.
- Delete any locations stored on the vehicle’s navigation system.
- Reset all garage door programming.

The Guide comes on the heels of research by Charlie Miller and Chris Valasek, who have demonstrated the dangerous threats that both remote and wired hacking of automobile data systems may pose to drivers. See Andy Greenberg, The Jeep Hackers Are Back to Prove Car Hacking Can Get Much Worse, Wired, Aug. 1, 2016.

The FPF website states that the FPF is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. According to the FPF, it brings together industry, academics, consumer advocates, and other thought leaders to explore the challenges posed by technological innovation and to develop privacy protections, ethical norms, and workable business practices. The FPF seeks to fill the void in the “space not occupied by law” that exists because technology develops faster than legislation. As self-described “data optimists,” the FPF believes that the power of data for good is a net benefit to society, and that data can be well-managed to control risks and offer the best protections and empowerment to consumers and individuals. The Guide can be found here. The NADA press release can be found here. Reporter, Stephen Abreu, San Francisco, +1 415 318 1219, sabreu@kslaw.com.

Are Algorithm Discrimination Claims A Future Business Liability? – Concerns raised last year by the Federal Trade Commission (“FTC”) and the American Civil Liberties Union (“ACLU”) regarding the potentially discriminatory use of big data may be gaining more traction. On January 12, 2017, the U.S. Public Policy Council of the Association for Computing Machinery (“USACM”) issued a Statement on Algorithmic Transparency and Accountability. USACM is the public policy arm of the ACM, a global computing society, and works to raise awareness of issues related to the computing industry in order to inform public policy. USACM’s statement sets forth seven principles that it recommends be considered during the development and deployment of algorithms:

(1) stakeholders should be aware of possible biases involved in the design and deployment of algorithms;
(2) regulators should encourage the adoption of mechanisms that enable potentially adversely affected individuals to seek redress;
(3) institutions should be accountable for the decisions made by the algorithms they use, even if it is not feasible to explain their results;
(4) systems and institutions are encouraged to produce explanations of the procedures followed by the algorithm, particularly in public policy contexts;
(5) builders of the algorithm should maintain a description of the way the algorithm’s training data was collected, as well as an exploration of the potential biases induced by the human or algorithmic data-gathering process (USACM encourages public scrutiny of the data, but recognizes that several concerns may justify restricting access to qualified and authorized individuals only);
(6) institutions should record models, algorithms, data, and decisions so they can be audited in cases of suspected harm; and
(7) institutions should use rigorous methods to validate their models and should document those methods and results, including routinely performing tests to determine whether the model generates discriminatory harm; a simple example of one such test follows this list.

(Read the full statement here).
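USACM’s statement does not prescribe any particular validation test. As a purely illustrative example of the kind of routine check contemplated by principle (7), one common first-pass screen is the “four-fifths rule” disparate-impact ratio drawn from U.S. employment-discrimination analysis: if the rate of favorable outcomes an algorithm produces for one group falls below 80% of the rate for the most-favored group, the result warrants closer review. A minimal sketch with invented data:

    from collections import defaultdict

    def selection_rates(groups, decisions):
        """Fraction of favorable (True) decisions per group."""
        favorable = defaultdict(int)
        totals = defaultdict(int)
        for group, decision in zip(groups, decisions):
            totals[group] += 1
            favorable[group] += int(decision)
        return {g: favorable[g] / totals[g] for g in totals}

    def disparate_impact_ratios(groups, decisions):
        """Ratio of each group's selection rate to the best-treated group's rate."""
        rates = selection_rates(groups, decisions)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Invented example: group "B" receives favorable decisions far less often.
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    decisions = [True, True, True, False, True, False, False, False]
    for group, ratio in disparate_impact_ratios(groups, decisions).items():
        status = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: ratio {ratio:.2f} ({status})")

A screen like this is only a starting point; in practice it would be followed by statistical-significance testing and analysis of whether any disparity is justified.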
USACM’s stated reasons for issuing these guidelines echo concerns that the FTC advanced nearly a year earlier in a report entitled Big Data: A Tool for Inclusion or Exclusion? The report detailed the FTC’s growing concern regarding potentially discriminatory commercial practices in what it characterized as the “era of big data.” Big data is the collection and analysis of vast amounts of consumer data. The report focused on how the commercial use of big data could impact low-income and underserved populations by perpetuating existing disparities, such as through assumptions that deny individuals access to credit. In the report, the FTC cautioned companies that collect, store, sell, and use big data to be mindful of how their activities could result in discriminatory actions.

Then, in June 2016, the ACLU filed a lawsuit to protect several research efforts to test for and identify discriminatory harms of big data analytics. Sandvig v. Lynch, Civ. No. 1:16-cv-01368 (D.D.C. June 29, 2016). As alleged by the ACLU, with the rise of big data analytics, an increasing number of commercial websites rely on confidential and proprietary algorithms to collect and track people’s data, cull the data, and use the results to determine their customers’ access to services. The ACLU fears that these websites are secretly considering information that leads to discrimination on the basis of race, gender, or other protected characteristics.

The ACLU claims that it brought the lawsuit because research into these websites risks violating their terms of use, which could give rise to civil and, more importantly, criminal penalties under the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030(a)(2)(C). Based on its allegations, the ACLU seeks declaratory relief, asking the court to declare that the CFAA on its face violates the U.S. Constitution, as well as injunctive relief to stop the Department of Justice (“DOJ”) from enforcing Section 1030(a)(2)(C) of the CFAA. Most recently, the DOJ filed a motion to dismiss, arguing that the ACLU failed to allege facts sufficient to confer standing because there is no credible threat that the provision has been or will be enforced against any of the plaintiffs. The motion also argued that the ACLU’s complaint failed to allege facts supporting its claims that the CFAA violates the First and Fifth Amendments of the Constitution. The motion is fully briefed and has been pending before the district court since October 2016.

The issues surrounding algorithms will continue to evolve. As illustrated by USACM’s statement, public policy concerns that call for greater public access to algorithms clash with the important policy of allowing private businesses to protect algorithms as trade secrets. Reporter, Julie A. Stockton, San Francisco and Silicon Valley, +1 650 422 6818, jstockton@kslaw.com.

EU Commission Publishes Documents On Data Economy Strategy – On January 10, 2017, the European Commission (the “Commission”) published a package of documents outlining various proposals related to the digital single market initiative the Commission announced in May 2015. The documents cover a wide range of topics, including a proposal for a new privacy and electronic communications regulation (the “E-Privacy Regulation”) as well as a Communication on building a European data economy (the “Data Economy Communication”). Although the proposals are in their preliminary stages, the Commission’s initiatives are poised to affect more than just technology companies doing business in the European Union (the “EU”).

As proposed, the E-Privacy Regulation is designed to enhance privacy in electronic communications by updating the current rules set out in the e-Privacy Directive (2002/58/EC), extending those rules to providers of communications services that run over the internet, and introducing a broad definition of “electronic communication services.” The E-Privacy Regulation would protect the privacy of both the content of electronic communications and the metadata derived from them (e.g., the time of a call and location); both types of data would need to be anonymized or deleted if users have not given their consent, unless the data is required for a limited number of purposes, such as billing. Once a user has provided consent, traditional telecoms operators would have more opportunities to use data and provide additional services to benefit businesses and consumers. The E-Privacy Regulation would also require software that permits electronic communications over the internet to offer end-users an option to prevent third parties from storing or processing information on the users’ terminal equipment. Such software would further have to inform end-users about the available privacy settings upon installation and require the end-user to consent to a particular setting in order to continue installing the software.
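The proposal does not specify how software must implement this install-time choice. The following is a purely hypothetical sketch, with invented names and file paths, of the kind of consent gate the text describes: installation cannot proceed until the end-user is shown the privacy setting options and actively selects one.

    import json
    import pathlib
    import sys

    # Hypothetical privacy setting options shown to the end-user at install time.
    SETTINGS = {
        "1": "block third parties from storing or processing data on this device",
        "2": "allow third-party storage and processing",
    }
    SETTINGS_FILE = pathlib.Path("privacy_settings.json")

    def require_privacy_choice() -> str:
        """Inform the user of the options and block installation until one is chosen."""
        print("Before installation continues, choose a privacy setting:")
        for key, description in SETTINGS.items():
            print(f"  [{key}] {description}")
        choice = input("Your choice: ").strip()
        if choice not in SETTINGS:
            sys.exit("No privacy setting selected; installation aborted.")
        # Record the decision so the application can honor it after installation.
        SETTINGS_FILE.write_text(json.dumps({"third_party_access": choice == "2"}))
        return SETTINGS[choice]

    if __name__ == "__main__":
        selected = require_privacy_choice()
        print(f"Setting applied: {selected}. Continuing installation...")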
The Data Economy Communication focuses on the increasing importance of data generated by machines and other emerging technologies (the Internet of Things) and how such data can be used to improve products and production processes. According to the Commission, utilization of this raw data is central to the emergence of a data-driven economy, and the Commission specifically mentions the transport, energy, and healthcare sectors as industries that could benefit from utilizing such raw data. By analyzing four main areas (obstacles to the free movement of data, data access and transfer, liability, and standardization and portability of technologies), the Commission intends to explore a possible future framework for data access and transfer in the EU while still providing basic privacy protections to consumers.

The Commission is currently taking comments and input from various businesses and industry sectors on how these proposals would affect the economy. The consultation with affected parties will run until April 26, 2017, and will be supplemented by other forms of industry engagement that give interested parties the opportunity to communicate their views on the proposals. The E-Privacy Regulation is anticipated to be adopted by May 2018. Reporter, Brett Schlossberg, Silicon Valley, +1 650 422 6708, bschlossberg@kslaw.com.

ALSO IN THE NEWS

King & Spalding’s 2017 Cybersecurity & Privacy Summit – On Monday, April 24, 2017, please join the cybersecurity and privacy experts at King & Spalding for the 2017 Cybersecurity & Privacy Summit. This event is for legal and business professionals who want to participate in a discussion about the latest developments and strategies for data protection. King & Spalding will provide a registration link in the coming weeks.