Police need policing; Vegas hotel loses data; European Commission has a strategy; Hamburg fines Facebook; and Canada – does it have a Clearview problem?

Study discusses need for oversight of police use of AI and data analytics

On February 23rd the Royal United Services Institute (RUSI) raised concerns about the use of personal data by the police, warning that there is a lack of oversight of police use of data-driven technology, and that it could lead to racial discrimination. The study recommends that guidelines are needed to ensure that AI and computer algorithms are used and developed both legally and ethically. It also noted that, given the increasing volume of data available to police, pressure has grown to take a preventative rather than reactive stance.

Examples of data-driven policing tools include Hampshire police’s domestic violence risk-assessment model, as well as Durham police’s Harm Assessment Risk Tool (HART). While the study acknowledges that such tools could help improve police ‘effectiveness and efficiency’, this goal is held back by a lack of empirical evidence, poor quality data and insufficient skills and expertise.

The study also discussed the legality of such tools, stating that “it could be argued that the use of such tools would not be ‘necessary’ if the police force had the resources needed to deploy a non-technological solution to the problem at hand, which may be less intrusive in terms of its use of personal data.” Furthermore, it advised that an integrated impact assessment was required to justify new analytics projects, and that typically not enough evidence of their benefits was provided.

While the study raised concerns that data-driven tools may carry a racial bias, it stated that there is not enough evidence as to whether such biases have occurred in the UK, as studies purporting to show bias were generally conducted in the USA. Ian Dyson, the National Police Chiefs’ Council head of information management, stated that they would consider the study’s recommendations alongside the government and regulators.

What happens to your data in Vegas doesn’t stay in Vegas!

It was reported on February 19th that last summer, the personal data of almost 10.7 million guests of the casino and hotel chain MGM Resorts was accessed by hackers.

The personal data exposed during the breach included email addresses, names, addresses and dates of birth. Whilst MGM characterised most of the data as ‘phonebook’ type information, they also acknowledged that more sensitive information of over 1,200 guests, including passport information, was also leaked; however, they are confident that no financial information was exposed.

ZDNet, which reported the hack, said that the leak included the details of celebrities and government officials, including Twitter CEO Jack Dorsey and Department of Homeland Security staff.

MGM stated that certain customers were notified, as required by state law. However, it should be noted that most US states do not presently require companies to notify customers if data which is already in the public domain is exposed during a hack.

This is not the largest data breach of hotel guest information to have occurred recently, as the data breach experienced by Marriott Hotels last year (covered here) resulted in the data of roughly 500 million guests being exposed.

European Commission data strategy published

On February 19th, the European Commission (EC) published its comprehensive data strategy, outlining a proposed regulatory framework for access and use of personal and non-personal European data.

In recognising the centrality of data to future economic development, the stated aim is to balance the interests of individuals with the promotion of data-driven innovation. Key to the EC’s vision is a single EU data market in which personal and non-personal data (including sensitive commercial data) could be secured and accessed by businesses. According to the EC, this would provide businesses with an almost infinite supply of high quality data, leading to higher growth across a variety of industries.

In order to achieve this ambitious goal, the EC envisions the need for common rules and enforcement mechanisms, applied uniformly, in order to allow for data to flow while protecting individuals’ personal data. Furthermore, the EC believes that the creation of an EU-wide data market would require new investments in next-generation infrastructure and data-related skills development.

The EC has identified certain issues preventing the maximal exploitation of EU data, the first of which relates to data availability. In order to increase data availability, the EC suggests a need to encourage innovative data re-use, and proposes reconsidering government-to-business, government-to-government and business-to-business data sharing. For example, in considering business-to-business sharing, the EC notes that businesses currently have little incentive to share data, in particular due to a lack of trust that data will be used only in accordance with contractual limits, as well as a general fear that sharing data may erode competitive advantages.

Market imbalances are another challenge identified by the EC, as presently large platforms benefit from employing data in developing new products and services, effectively crowding out competition from smaller and less data-rich competitors. This is particularly problematic when considering data gleaned from devices comprising the Internet of Things, including the growing importance of data collection stemming from the use of automobiles.

Other challenges observed by the EC include data quality and governance, infrastructure and technology shortcomings, and skills regarding cybersecurity and data literacy.

In order to realise its vision, the EC has proposed policy measures based on four principles:

  1. Cross-sectoral governance framework for data access and use;
  2. Investments in data and strengthening Europe’s capabilities and infrastructures for hosting, processing and using data and interoperability;
  3. Empowering individuals, investing in skills and in SMEs; and
  4. Common European data spaces in strategic sectors and domains of public interest.

Each principle involves several concrete policy proposals, more information on which can be found here. While greater regulatory harmonisation has been an EC goal for some time (consider the GDPR), many critics are concerned about the centralisation of Europeans’ personal data. Given the numerous high profile data breaches that have occurred since the implementation of the GDPR, for example the Marriott breach discussed above, one must consider whether the economic benefits of pooling so much data in one accessible location outweigh the potential problems that could arise from a related breach.

Hamburg Data Commissioner fines Facebook

The Hamburg Data Protection Commissioner has fined Facebook’s German subsidiary €51,000 for failing to notify the regulator of the details of a data protection officer (DPO) for its local office.

Whilst this fine is relatively small, and its target was Facebook’s German unit rather than the California-based parent company, it shows that fines can be imposed for straightforward compliance failures. In announcing the fine, the Commissioner stated that “This case should be a clear warning to all other companies: naming a [DPO] and telling the regulator about it are duties…even smaller violations like these can lead to substantial penalties.”

The fine was made under the GDPR, which allows EU data protection authorities to fine companies as much as 4% of global annual turnover for sufficiently serious breaches of the Regulation. Contributing to the relatively small fine was Facebook’s professional and prompt handling of the matter: according to the Hamburg Commissioner, Facebook remedied the violation by properly appointing a DPO as soon as it was informed of the issue.

For those following Facebook, this represents one of many issues that the company has had with European DPAs. Notably, this includes a claim by the Irish DPA concerning the validity of standard contractual clauses relied on by Facebook for data transfers to the US (covered here), as well as a settlement reached last year between the company and the UK ICO in relation to the Cambridge Analytica scandal (covered here).

Canadian Privacy Authorities announce Investigation into Clearview AI

On February 21st, it was announced that the Office of the Privacy Commissioner of Canada (OPC), along with the provincial privacy protection authorities of Quebec, British Columbia and Alberta, will investigate Clearview AI for its use of facial recognition technology.

The investigation stems from numerous media reports raising concerns over the company’s collection and use of personal data without consent. The reports, confirmed by Clearview, describe how the company has scraped publicly available images from the internet in order to train its facial recognition algorithm. The company then licensed the technology to law enforcement agencies and select financial institutions for the purpose of identifying individuals.

The investigation will assess whether the company’s practices are compliant with the Personal Information Protection and Electronic Documents Act (PIPEDA) as well as related provincial legislation.

Provincial regulators have also announced that they will jointly develop guidance for organisations, including law enforcement, regarding the use of facial recognition and other biometric technology.

For more information please contact Partner, James Tumbridge at jtumbridge@vennershipley.co.uk.