Amanda Lathia examines the increase in fines under the GDPR in Information Security Buzz

  • February 13, 2020
  • By Amanda Lathia, Associate

This article was originally published in Information Security Buzz.

Reducing The Risk Of ICO Enforcement Notices And Penalties Under The GDPR

Since the GDPR came into force on 25 May 2018, there have been hundreds of thousands of reported breaches, resulting in enforcement action for non-compliance and/or penalties for data protection breaches. With regard to the latter, regulators across Europe have imposed much more severe penalties than previously seen under the Data Protection Act 1998 (DPA 1998), the record being a £183m fine against British Airways for a breach affecting over 500,000 customers’ personal data.

It is not only large, global corporates that are penalised under the GDPR. Smaller organisations and even individuals have been penalised for breaches, for example: estate agents failing to keep tenants’ personal data safe; social workers emailing individuals’ sensitive data to their personal email addresses without authorisation; company directors selling personal data (which can lead to director disqualification for a period of five years); and organisations sending marketing material electronically to customers without their consent. Lack of transparency, the absence of a lawful basis for processing and failure to obtain valid consent all feature in the growing number of complaints to the Information Commissioner’s Office (ICO).

Complaints relating to electronic communication

The DPA 1998 remains in force for the purposes of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (“the PECR”), notwithstanding the introduction of the Data Protection Act 2018. The PECR give effect to the ePrivacy Directive (2002/58/EC). Section 55 of the DPA 1998 prohibits unlawfully obtaining personal data without the consent of the data controller and selling or sharing that data. Many complaints to the ICO relate to unsolicited direct marketing messages. Individuals have stated, via the ICO’s online reporting tool, that where they have not given their consent to receive marketing messages, they find it “concerning and worrying” how a company has managed to get hold of their personal information, e.g. a mobile phone number, and that they do not know what other information the company may hold about them.

Minimising consent breaches

Organisations can minimise consent breaches by taking some or all of the following steps:

  1. Do not pre-tick consent boxes – under the GDPR, the default must always be ‘no’ – and explicitly ask the customer whether they would like to receive marketing information electronically;
  2. Review any data-sharing agreements with third parties and, where you do share data (with the customer’s consent), require the third party to obtain the customer’s consent separately before sending them any marketing material;
  3. Where you receive personal data from a third party, always verify the customer’s consent before sending any marketing material, even if the third party informs you that consent has been obtained; and
  4. Be transparent: where a customer’s data has been shared with you by a third party, always tell the customer where you obtained their data and what you intend to do with it.
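The checks above can be sketched in code. This is a hypothetical illustration only – the `ConsentRecord` structure and field names are assumptions for the sketch, not anything prescribed by the GDPR or the ICO:

```python
# Hypothetical sketch of the consent checks described above.
# The ConsentRecord structure and its field names are illustrative
# assumptions, not a prescribed compliance mechanism.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    opted_in: bool = False           # step 1: default is 'no', never pre-ticked
    source: Optional[str] = None     # step 4: record where the data came from
    verified_directly: bool = False  # step 3: consent confirmed with the customer ourselves

def may_send_marketing(consent: ConsentRecord) -> bool:
    """Allow marketing only on an explicit opt-in; where the data came
    from a third party, require that consent was verified directly rather
    than relying on the third party's assurance (steps 1 and 3)."""
    if not consent.opted_in:
        return False
    if consent.source is not None and not consent.verified_directly:
        return False  # third-party assurance alone is not enough
    return True
```

On this sketch, a record obtained from a data broker fails the check until the organisation has verified the customer’s consent itself, mirroring step 3 of the list.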

Automated decision-making and profiling

Article 22 of the GDPR imposes even stricter conditions to protect individuals from automated decision-making that has legal or similarly significant effects on them. More specifically, solely automated decisions – i.e. decisions made without human input, or where the human review of such decisions is not meaningful – could amount to a breach under Article 22. Examples include an online decision to award a loan, or a recruitment aptitude test that uses pre-programmed algorithms and criteria. Profiling goes further: it collates data on an individual, such as their buying habits or lifestyle, with the aim of predicting their behaviour or making decisions about them.

Organisations using, or intending to use, artificial intelligence (AI) applications and/or profiling should ensure that human reviewers can override automated decisions and are not penalised for doing so. Where the level of human decision-making is low or not meaningful, the risk of a breach of Article 22 is higher. The ICO has published guidance on what level of human involvement in automated decision-making is meaningful. It remains to be seen whether the increased use of AI will result in an increase in fines under the GDPR.
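One way to picture the override point described above is a decision pipeline where a human reviewer’s verdict, when given, always takes precedence over the automated outcome. This is a minimal sketch under assumed names – the loan scenario, the `final_decision` function and the score threshold are illustrative, not drawn from the ICO’s guidance:

```python
# Hypothetical sketch: an automated decision that a human reviewer can
# always override. The loan scenario, function names and threshold are
# illustrative assumptions, not ICO guidance.
from typing import Optional

def automated_loan_decision(credit_score: int, threshold: int = 650) -> bool:
    """The solely automated step: approve if the score clears the threshold."""
    return credit_score >= threshold

def final_decision(credit_score: int,
                   human_override: Optional[bool] = None) -> bool:
    """Where a human reviewer has recorded a verdict, it always takes
    precedence over the automated outcome; otherwise the automated
    decision stands."""
    if human_override is not None:
        return human_override
    return automated_loan_decision(credit_score)
```

The design point is that the reviewer’s input is decisive rather than advisory: a pipeline where the override could itself be discarded would be the kind of non-meaningful human involvement the article warns against.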

