Twitter Fined $150 Million for Using Customer Data Without Consent

Twitter has been ordered to pay $150 million (£119 million) to settle allegations that it used people’s personal data to provide targeted advertising without their consent.

More than 140 million Twitter users were affected by the practice.

Announcing the fine, the FTC (Federal Trade Commission) and US Justice Department said that Twitter will no longer be able to profit from “deceptively collected” data.

“Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” FTC Chair Lina M Khan said.

The tech giant has also been ordered to implement a new privacy programme that will review the information security and data privacy risks of new products and services.

It will also be required to give users ways to verify their accounts without having to hand over their email addresses or phone numbers.

It’s one of the largest penalties ever handed out for a data privacy violation and follows several investigations into Twitter.

Associate Attorney General Vanita Gupta believes that the penalty “reflects the seriousness of the allegations against Twitter, and the substantial new compliance measures to be imposed as a result of today’s proposed settlement will help prevent further misleading tactics that threaten users’ privacy”.

A long time coming

Twitter’s deceptive practices began in May 2013, according to a complaint filed by the US Justice Department. The social media site introduced a rule requiring users to provide a phone number or email address to improve account security.

Users were told that the information would be used to help them recover locked accounts and to enable two-factor authentication.

However, Twitter was also sharing this information with advertisers, who matched the data with information already obtained from other third parties, such as data brokers.

Twitter announced in 2019 that it had “inadvertently” mishandled users’ email addresses and phone numbers for advertising purposes, but said that the issue had since been resolved.

Following a lengthy investigation, the FTC concluded that Twitter had “misrepresent[ed] its privacy and security practices”.

Meanwhile, the additional enforcement actions suggest that Twitter’s data protection practices still fell short of expectations and that further controls were necessary.

Is this a GDPR victory?

Although the GDPR (General Data Protection Regulation) dominates most discussions of data protection and data privacy, it was not involved in this investigation.

The fine was issued by the FTC, which enforces US consumer protection and antitrust law. The agency rivals Europe’s data protection authorities in its disciplinary power, as demonstrated by this investigation and the $5 billion fine levied against Facebook in 2019.

The agency has certainly stepped up its game following the introduction of the GDPR. There has been a greater emphasis on regulating data privacy practices, while the potential for eye-watering fines has been normalised.

Shortly before the GDPR took effect in 2018, the FTC proposed a $148 million fine against Uber after the ride-sharing app suffered a data breach and then attempted to cover it up. The agency subsequently backed down and issued no financial penalty whatsoever.

In the following four years, which were marked by several high-profile GDPR fines, the FTC became stricter. In addition to the $5 billion fine issued against Facebook, the agency penalised Equifax $425 million for an extensive data breach that occurred five years previously.

This change in attitude reflects a wider shift in the way we think about data privacy. Huge penalties for mishandling personal data are not just a quirk of the GDPR; they are a demonstration of the importance of data privacy for all of us.

Organisations cannot expect to mislead customers and get away with it. Twitter, as a behemoth of the tech industry, will escape from this incident relatively unscathed, although people’s eyes will be opened to the risks of extensive data sharing practices.

However, smaller organisations have less room for error. A significant data breach or privacy violation could be catastrophic, so they must be certain that they handle users’ personal data responsibly.

To do that, they must ensure that they conduct DPIAs (data protection impact assessments) whenever they process personal data in high-risk scenarios.

How DPIAs can protect your organisation

Article 35 of the GDPR states that organisations must conduct a DPIA where processing is ‘likely to result in a high risk’ to the rights and freedoms of individuals.

This includes practices such as targeted advertising, which isn’t prohibited under the GDPR but must only be conducted if certain conditions are met.

DPIAs are essential to ensure that appropriate data protection controls are in place. They are a form of risk assessment that analyses how high-risk data processing activities could affect data subjects.
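To make the risk assessment idea concrete, here is a minimal sketch of how an organisation might score a processing activity on a likelihood-times-severity matrix, a common approach in risk assessments. The class names, scales, and threshold below are illustrative assumptions, not values mandated by the GDPR or part of any particular DPIA tool.

```python
# Hypothetical sketch of DPIA-style risk scoring.
# Scales (1-5) and the "high risk" threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ProcessingActivity:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (minimal impact) .. 5 (severe impact on data subjects)


def risk_score(activity: ProcessingActivity) -> int:
    """Simple likelihood x severity matrix."""
    return activity.likelihood * activity.severity


def needs_dpia(activity: ProcessingActivity, threshold: int = 15) -> bool:
    """Flag activities whose score suggests 'high risk' processing."""
    return risk_score(activity) >= threshold


ads = ProcessingActivity(
    name="Targeted advertising using contact data",
    likelihood=4,
    severity=4,
)
print(risk_score(ads))   # prints 16
print(needs_dpia(ads))   # prints True: score exceeds the assumed threshold
```

A real DPIA covers far more than a single score (necessity, proportionality, mitigations, consultation), but a scoring step like this helps decide which activities warrant the full assessment.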

The GDPR doesn’t specify how you should conduct the assessment, so this is where our DPIA Tool helps.

This software guides you through the six steps you must complete to ensure your assessment measures the level of risk involved in data processing activities.

You don’t have to be a GDPR expert to complete the assessment. Our DPIA template shows you a DPIA example and outlines the questions you need to ask and how you can find the answers.

It even provides links to the relevant sections of the Regulation, so you can check why each process is necessary.

Try before you buy with a FREE 30-day trial. Simply add the subscription you require to your basket and proceed to checkout.