How worried should you be by the BioStar 2 breach that leaked 1 million people’s biometric data?

Last week, security researchers released a report claiming that the fingerprints of more than 1 million people, as well as facial recognition information, unencrypted usernames and passwords, and other personal information, had been left exposed in a publicly accessible database.

The information was collected for the security tool BioStar 2, which is used by the likes of the Metropolitan Police, defence contractors and banks to control access to specific parts of secure facilities.

There was, understandably, a huge outcry over the incident, because it compromises the security of those facilities and allows miscreants to use the information for other criminal purposes.

This might include gaining access to devices and facilities that rely on fingerprint scans and, perhaps more concerningly, the creation of ‘deepfakes’. These are video forgeries that piece together a victim’s facial expressions and audio to depict them saying and doing things that never happened.

Deepfakes

A deepfake of Mark Zuckerberg was released earlier this year to demonstrate how convincing the technology is. That video was created using public appearances by Zuckerberg, but if criminal hackers can get their hands on face scans from leaked databases, they could make similar videos of anyone whose information was compromised.

Michela Menting, research director at ABI Research, commented: “The key ingredient in truly credible deepfakes is having a lot of data on the subject, and notably video of a person in any number of different facial expressions. One can imagine that leaks of facial recognition information can help to build better databases.”

Unprotected and unencrypted

The evidence collected by security researchers Noam Rotem and Ran Locar on behalf of vpnMentor initially indicated a potentially disastrous incident.

They discovered that BioStar 2’s database was unprotected and mostly unencrypted, giving them the freedom to look through 27.8 million records.

“We were able to find plain-text passwords of administrator accounts,” Rotem told the Guardian.

“The access allows [us to see that] millions of users are using this system to access different locations and see in real time which user enters which facility or which room in each facility, even.”

“We [were] able to change data and add new users,” he added.

The report also noted that Rotem and Locar were able to access data from co-working organisations in the US and Indonesia, a gym chain in India and Sri Lanka, a medicine supplier in the UK, and a car parking space developer in Finland, among others.
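The plain-text administrator passwords are a telling detail: even where a service must store login credentials, it should keep only a salted, slow hash, so that a copy of the database reveals nothing directly reusable. Purely as an illustration of that principle (not a description of BioStar 2’s actual code; the function names here are invented for the example), a minimal salted-hash scheme in Python might look like this:

import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Derive a salted hash so the plaintext password never needs to be stored.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the hash with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

# Only (salt, digest) is persisted; a leaked database exposes neither the
# password itself nor anything that can simply be typed into a login form.
salt, digest = hash_password("example-admin-password")
assert verify_password("example-admin-password", salt, digest)
assert not verify_password("wrong-guess", salt, digest)

Stolen hashes can still be attacked offline, but a salted, deliberately slow hash makes that expensive; storing the passwords in plain text, as the researchers reported finding, offers no such barrier.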

Biometric data must be kept secure

This leak is more damaging than one containing traditional authentication factors, like passwords or PINs, because those can be changed if they’re compromised. That’s not the case for fingerprints and your face; you can’t realistically change those following a biometric data breach.

Tim Erlin, vice president of product management and strategy at Tripwire, said:

“As an industry, we’ve learned a lot of lessons about how to securely store authentication data over the years. In many cases, we’re still learning and re-learning those lessons.

“Unfortunately, companies can’t send out a reset email for fingerprints. The benefit and disadvantage of biometric data is that it can’t be changed.

“Using multiple factors for authentication helps mitigate these kinds of breaches. As long as I can’t get access to a system or building with only one factor, then the compromise of my password, key card or fingerprint doesn’t result in compromise of the whole system.

“Of course, if these factors are stored or alterable from a single system, then there remains a single point of failure.”
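Erlin’s point about multiple factors can be reduced to a simple rule: no single credential should be enough on its own. As a sketch of that principle only (the names and structure here are illustrative, not how BioStar 2 or any particular access-control product works), an entry decision might look like this in Python:

from dataclasses import dataclass

@dataclass
class AccessAttempt:
    # Results of three independently verified factors.
    password_ok: bool     # something you know
    keycard_ok: bool      # something you have
    fingerprint_ok: bool  # something you are

def grant_access(attempt: AccessAttempt, required_factors: int = 2) -> bool:
    # Open the door only when at least `required_factors` checks pass,
    # so a stolen fingerprint template alone is not enough.
    passed = sum([attempt.password_ok, attempt.keycard_ok, attempt.fingerprint_ok])
    return passed >= required_factors

# A leaked fingerprint on its own fails; fingerprint plus password succeeds.
assert not grant_access(AccessAttempt(password_ok=False, keycard_ok=False, fingerprint_ok=True))
assert grant_access(AccessAttempt(password_ok=True, keycard_ok=False, fingerprint_ok=True))

As Erlin notes, this only helps if the factors really are independent: if they are all stored in, or can be changed from, the same compromised system, an attacker still only has to break one thing.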

Confusion at Tile Mountain

Many of the organisations named in the report have criticised its findings, saying they had either been mistaken for different companies or hadn’t been told about the breach.

Stoke-based Tile Mountain told Bloomberg that it hadn’t been made aware of the leak until the day the report was published.

“It is concerning that no contact was made to inform us that data may have been compromised,” Tile Mountain IT Director Colin Hampson said in a statement.

When researchers identify vulnerabilities like this, they’ll typically contact the affected organisations, giving them the opportunity to fix the problem and, if necessary, meet their data breach notification requirements.

This is more important than ever since the introduction of the GDPR (General Data Protection Regulation), which outlines strict rules on the way organisations must respond to security incidents and introduces severe penalties for those that violate its requirements.

However, Rotem told the BBC that when he and Locar tried to contact Suprema, the firm that hosts BioStar 2, the organisation repeatedly hung up the phone.

The researchers aren’t entirely without blame, though. Their report named Phoenix Medical as a UK-based organisation affected by the leak. However, after fielding queries, a spokesperson for the company said the organisation doesn’t use BioStar 2 and that Rotem and Locar had named the wrong Phoenix Medical.

They later updated the report to identify the correct Phoenix Medical, which is based in Tennessee, but a spokesperson for that organisation said that it had not been informed.

Was personal data even leaked?

The organisations named in the report weren’t its only critics. A few hours after vpnMentor published the report, security researcher Zack Whittaker took to Twitter to question whether the exposed database contained genuine personal data at all.

He added that, although the researchers have a good track record, he couldn’t find any evidence proving whether the database contained real people’s fingerprint information or merely test data.

Whittaker consulted the professional data breach hunter @shadow0ps to test the information, and found that attempts to validate the fingerprint data returned a “bad request” error.

However, Rotem clarified the following day that the exposed records did refer to real biometric data.

That settled the question, but it adds to the catalogue of mistakes and misunderstandings surrounding the incident. Data breaches, particularly those involving biometric data, are incredibly dangerous, and the individuals affected must be made aware as soon as possible.

That’s hard when the organisation responsible for protecting the data refuses to acknowledge a breach and those reading the report can’t tell whose information – if anyone’s – has been affected.