Amanda Lathia examines the increase in fines under the GDPR in Information Security Buzz

  • February 13, 2020
  • By Amanda Lathia

This article was originally published in Information Security Buzz and can be accessed here.

Reducing The Risk Of ICO Enforcement Notices And Penalties Under The GDPR

Since the GDPR came into force on 25 May 2018, there have been hundreds of thousands of reported breaches, resulting in enforcement action for non-compliance and/or penalties for data protection breaches. With regard to the latter, regulators across Europe have imposed far more severe penalties than were seen under the Data Protection Act 1998 (DPA 1998), the record being the ICO's proposed £183m fine against British Airways for losing over 500,000 customers' personal data.

It is not only large, global corporates that are penalised under the GDPR. Smaller organisations and even individuals have been penalised for GDPR breaches, for example: estate agents failing to keep tenants’ personal data safe; social workers emailing individuals’ sensitive data to their personal email addresses without authorisation; company directors selling personal data (which can lead to director disqualification for a period of 5 years); and organisations sending marketing material electronically to customers without their consent. Lack of transparency, not having a lawful basis for processing or failure to obtain valid consent feature in the growing number of complaints to the Information Commissioner’s Office (ICO).

Complaints relating to electronic communication

The DPA 1998 remains in force for the purposes of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (“the PECR”), notwithstanding the introduction of the DPA 2018. The PECR gives effect to the ePrivacy Directive (2002/58/EC). Section 55 of the DPA 1998 prohibits obtaining personal data without the consent of the data controller, and selling or sharing data obtained in that way. Many complaints to the ICO relate to unsolicited direct marketing messages. Individuals have stated, via the ICO’s online reporting tool, that where they have not given their consent to receive marketing messages, they find it “concerning and worrying” how a company has managed to get hold of their personal information, e.g. a mobile phone number. Complainants also find it concerning that they do not know what other information a company may hold about them.

Minimising consent breaches

Organisations can minimise consent breaches by taking the following steps:

  1. Do not pre-tick consent ‘yes’ boxes – under the GDPR, the default must always be ‘no’; explicitly ask the customer whether they would like to receive marketing information electronically;
  2. Review any data sharing agreements with third parties and, where you do share data (with the customer’s consent), require the third party to obtain the customer’s consent separately before sending them any marketing material;
  3. Where you receive personal data from a third party, always verify the customer’s consent before sending any marketing material, even if the third party informs you that consent has been obtained; and
  4. Be transparent: where a third party has shared a customer’s data with you, always tell the customer where you obtained their data and what you intend to do with it.

Automated decision-making and profiling

Article 22 of the GDPR provides even stricter conditions to protect individuals from automated decision-making that has legal or similarly significant effects on them. More specifically, solely automated decisions, i.e. decisions made without human input or where any human review is not meaningful, could amount to a breach of Article 22. Examples include an online decision to award a loan, or a recruitment aptitude test that uses pre-programmed algorithms and criteria. Profiling goes further: it collates data on an individual, such as their buying habits or lifestyle, with the aim of predicting their behaviour or making decisions about them.

Organisations using or intending to use artificial intelligence (AI) applications and/or profiling should ensure that human reviewers can override automated decisions and are not penalised for doing so. Where the level of human decision-making is low or not meaningful, the risk of a breach of Article 22 is higher. The ICO has published guidance on what level of human involvement in automated decision-making is meaningful. It remains to be seen whether the increased use of AI will result in an increase in fines under the GDPR.
