Balancing Privacy and Security: How personal data and privacy laws collide with “trustworthiness” scores

Everlaw’s VP of Security and Compliance, Lisa Hawke, was quoted in the Wall Street Journal on April 6, 2019, in Christopher Mims’ Keywords column.

In the article “The Secret Trust Scores Companies Use to Judge Us All,” Mims discusses services like Sift that create “trustworthiness” scores, which companies use to flag suspicious behavior and guard against fraud. In compliance and risk management circles, especially in the Bay Area tech industry, account takeovers (ATO) and chargebacks are a common topic of conversation from a security standpoint. But prevention measures, like the “Sift score” service discussed in the WSJ article, raise privacy considerations of their own. Lisa commented on the difference between EU and US privacy laws, which define personal information differently.

In the US, PII is typically defined by state or municipal laws as a “name plus” another piece of information, such as a Social Security, driver’s license, or account number. The exact nature of that information is specified in legislation (see, for example, the California definition), and it varies by state. GDPR takes a completely different approach: it does not enumerate exactly what constitutes personal data. Instead, GDPR places the obligation on the organization collecting and processing personal data, which must determine whether the data it collects is “reasonably likely” to be used to identify a natural person (see GDPR Recitals 26–30).

As the WSJ article points out, services like Sift, which collect large amounts of data about people’s online activities and use that data to produce a “score” that helps client companies secure their products, create tension with privacy laws, especially in the EU. GDPR was enacted to give consumers control of their personal data, which aligns with Europe’s approach to data protection as a form of “informational self-determination.”

The fact, noted in the article, that consumers have no visibility into what their score is or how it was determined certainly raises privacy flags. Lisa’s comment pointed out that key terms in GDPR, such as the definition of personal data and what qualifies as “anonymized” data, will likely be interpreted in future court decisions. In the meantime, it is imperative that privacy and security teams within organizations work together to address these issues and have open conversations about how services and technology aimed at increasing security may impact privacy. Operating in silos is no longer feasible in the current regulatory environment.