See below for the latest Data Blast from our legal team: Substantial fines for French supermarket for GDPR and cookie consent failings; New guidance on the use of live facial recognition technology; Home Depot reaches $17.5M settlement over 2014 data breach; ICO orders government to release pandemic planning report; US senate bill seeks to counter the use of ‘deepfakes’
French supermarket chain Carrefour and its banking branch hit with €3 million in fines for data and cookie breaches
The French Data Protection Authority (the CNIL) has issued fines totalling more than €3 million to the supermarket chain Carrefour France (€2.25 million) and to its banking branch Carrefour Banque (€800,000), for various breaches of the GDPR and of the French Data Protection Act, which implements the EU e-Privacy Directive’s rules on cookies. The failings assessed by the CNIL covered a range of fundamental aspects of compliance:
- Failing to adequately inform data subjects about the purposes of data processing, and the length of time for which personal data would be retained;
- Automatically placing non-essential cookies on users’ browsers without obtaining user consent;
- Failing to delete individuals’ data after an appropriate time, including when accounts had been inactive for 5–10 years;
- Failing to comply with requests not to receive further SMS or email marketing, and failing to delete personal data upon request;
- Systematically requiring proof of identity from individuals seeking to exercise their right to access their personal data;
- Sharing personal data with the group’s loyalty program beyond the scope of the consent sought for such sharing.
The CNIL’s report of its investigations reveals a neglected data protection compliance programme, and is likely to serve as a warning to other large organisations that maintaining compliance requires ongoing investment and monitoring; failing to do so may have costly consequences.
The CNIL held that a systematic requirement for individuals to provide proof of identity when exercising their rights is not appropriate. That finding is consistent with guidance from the UK Information Commissioner’s Office, which states that proof of identity should be required only where there is uncertainty about the requester’s identity and according to their particular circumstances; for example, where there is a need to guard against the heightened risk of revealing sensitive data to someone other than the data subject themselves.
Also of note is that the CNIL confirmed its view that analytics cookies which may also be used for advertising purposes are not essential to the functioning of a website and cannot be deployed without proper consent. The deployment of analytics cookies has been contentious for a number of years, and the long-awaited EU e-Privacy Regulation has in draft iterations sought to exempt some analytics cookies from the requirement for consent. Notwithstanding that possible future change, the CNIL’s finding makes clear that placing analytics cookies without proper consent carries risks.
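By way of illustration only, the rule the CNIL applied — essential cookies may be set freely, while analytics and other non-essential cookies require prior opt-in consent — can be sketched as a simple gating check. The category names and function below are hypothetical, not drawn from the CNIL decision or any particular consent-management library:

```typescript
// Hypothetical sketch of consent-gated cookie placement.
type CookieCategory = "essential" | "analytics" | "advertising";

interface ConsentState {
  analytics: boolean;
  advertising: boolean;
}

// Essential cookies may always be placed; every other category
// requires an explicit, affirmative opt-in from the user.
function mayPlaceCookie(category: CookieCategory, consent: ConsentState): boolean {
  if (category === "essential") {
    return true;
  }
  // Default is "no": absent or false consent blocks the cookie.
  return consent[category] === true;
}
```

The key design point, consistent with the CNIL’s finding, is that the default answer for any non-essential category is “no” until the user has actively consented.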
The CNIL did not issue an order requiring Carrefour to implement changes to its data practices, as the company had already demonstrated that it had made substantial changes toward compliance following the CNIL investigation.
UK Surveillance Camera Commissioner calls for tighter controls on police use of facial recognition
On December 3rd 2020, the UK Surveillance Camera Commissioner, Tony Porter, released a new report, Facing the Camera (the Report), which makes several recommendations regarding police forces’ use of live facial recognition (LFR). The Report addresses many of the implications of the UK Court of Appeal’s ruling earlier this year against South Wales Police over its use of LFR (previously covered here).
While Mr. Porter acknowledged the potential positive uses of LFR in policing, he noted that it can be intrusive and that the present legal framework surrounding its use is opaque. Accordingly, the Report’s recommendations aim to provide guidance as to what constitutes good practice for the use of LFR.
The Report recommends that the Home Office, National Police Chiefs’ Council (NPCC) and Association of Police and Crime Commissioners (APCC) work together on a national LFR procurement strategy and standard, as well as a method of assessing LFR technology, in order to ensure accountability for police conduct. This should be paired with the development of approval structures for police use of LFR, with consideration given to watch lists, system probability thresholds, parameters of deployment and review processes. It also suggests that the role of decision makers regarding LFR’s deployment be better defined, and that the Home Office update the Surveillance Camera Code of Conduct.
The Report also makes several recommendations to police forces, including the development of ‘ethical oversight’ of LFR decision making and operational conduct; such oversight could involve a police force’s own ethics committee, or a local body tasked with scrutinizing police activity.
The Report also suggests that the NPCC develop several performance indicators for LFR operations, as well as a consistent national terminology for LFR systems, so that both the police and the public can derive meaningful analysis, comparison and understanding.
In a statement accompanying the Report, Mr. Porter commented ‘[t]he guidance I’ve issued today will help forces who want to use LFR identify how to do so in accordance with the current legal framework. Where there is a proportionate need to deploy intrusive technology, it is right that the police have the guidance to do that.’
The full Report can be found here.
Home Depot settles with majority of US states over 2014 data breach
On November 24th 2020, it was announced that The Home Depot Inc. (Home Depot) reached a $17.5 million settlement with the Attorneys General of most US states over a 2014 data breach. The payment is to be apportioned between the 46 participating states, as well as the District of Columbia.
The breach occurred when hackers accessed Home Depot’s network using a third-party vendor’s username and password, and installed malware on the company’s self-checkout system, which allowed them to retrieve customers’ payment card information between April and September 2014. Home Depot disclosed the breach in September 2014, but by then details of roughly 55 million payment cards had been compromised, some of which were then used to carry out fraudulent transactions.
In addition to paying the participating states the $17.5 million settlement, Home Depot agreed to enact several new IT security measures, including:
- Providing privacy and security awareness training to all employees who either access Home Depot’s network, or who process customer information in the United States;
- Implementing new safeguards regarding password management, two-factor authentication, firewalls, encryption, penetration and intrusion testing and detection, and vendor management; as well as agreeing to undergo a third-party assessment of the new measures;
- Hiring a Chief Information Security Officer to report to the company’s management team on security risks; and
- Ensuring the allocation of appropriate resources to enact and maintain the new IT security measures.
ICO orders release of government report on pandemic preparedness
On September 17th 2020, the UK Information Commissioner’s Office (ICO) issued a Freedom of Information Act decision notice, ordering the UK government to make public an unpublished report on Exercise Cygnus, a 2016 test of the UK’s flu pandemic preparedness.
In May 2020, NHS doctor Moosa Qureshi submitted a freedom of information (FOI) request to the Department of Health and Social Care for England (DHSC), seeking the release of a report into Exercise Cygnus, threatening legal action if it was not released, and eventually brought judicial review proceedings. A judge refused Dr. Qureshi’s request for a judicial review into the DHSC’s refusal to release the report, in part because it was held that a response to his request ought already to have been provided.
The DHSC defended the judicial review on the grounds that the FOI request was the appropriate path for Dr. Qureshi to take. However, the DHSC delayed in responding to Dr. Qureshi’s request, claiming that it needed more time to assess whether the release of the report was in the public interest, as provided for in section 35 of the Freedom of Information Act.
In its decision, the ICO ordered that the report be released, or that the DHSC provide a substantive response detailing its reasons for withholding the report, within 35 days.
In an open letter to DHSC Secretary Matthew Hancock, the UK Information Commissioner, Elizabeth Denham, stated that ‘public authorities should aim to respond fully to all requests within 20 days. In cases where the public interest considerations are exceptionally complex, it may be reasonable to take longer but, in the Commissioner’s view, in no case should the total time exceed 40 working days.’
The DHSC published the report on October 20th 2020, a copy of which can be found here.
US Senate approves bill to address the use of ‘deepfakes’
On November 18th 2020, the US Senate approved bipartisan legislation to fund research into defences against realistic computer-generated media, often called deepfakes.
The bill, named the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), seeks to boost research into the detection and prevention of deepfakes used for the purposes of misinformation, deception and harassment.
Deepfakes are already a problem, as evidenced by the Congressional Research Service’s assessment that foreign intelligence agencies use deepfakes on social media during recruitment efforts.
Specifically, the IOGAN Act would mandate that the US National Science Foundation fund research into ‘manipulated or synthesized content and information authentication,’ particularly as it relates to content produced by AI-based deepfake systems known as Generative Adversarial Networks (GANs). Furthermore, the IOGAN Act would require the National Institute of Standards and Technology (NIST) to create mechanisms for analysing deepfakes.
US technology companies, including Facebook, Amazon and Microsoft, are already undertaking such research, and in January of this year Facebook announced that it would ban ‘misleading and manipulated media’ content, save where it is being used for satire or parody.
The US Defense Advanced Research Projects Agency (DARPA) is also conducting research into the issue by seeking to develop algorithms ‘to automatically assess the integrity of photographs and videos, and provide analysts with information about how the counterfeit content was generated.’
For more information, please contact Partner James Tumbridge at email@example.com.