Majority of Orgs Failing to Make Machine Learning Fair, Safe & Balanced
New research from O’Reilly Media has revealed that almost nine out of 10 (86%) businesses are deploying machine learning technologies without considering important questions about data quality, consumer privacy and the reliability of the resulting applications.

The firm conducted its research among 2,000 senior business leaders in the EU, discovering that over half (55%) of EU businesses have not included privacy provisions in their model-building checklist, whilst 53% do not account for compliance and 62% do not include fairness and bias.

Only 14% of those polled accounted for compliance, privacy, fairness and bias in their model-building checklist, and O’Reilly Media warned that neglecting these issues leads to flawed, biased and unethical applications that could also put people’s privacy at risk.

“There is much more to machine learning than just optimizing your business metrics,” said Ben Lorica, chief data scientist at O’Reilly Media and AI London Conference chair. “It’s critical that those developing these transformational applications understand the power they’re harnessing, and how small errors or omissions can lead to major problems down the line.”

Lorica argued that, too often, the task of developing machine learning technology falls to data scientists alone, without input from legal, compliance and privacy experts.

“Since the introduction of the GDPR, businesses should be on heightened alert for anything that could compromise consumer privacy,” he added. “Yet, over half of machine learning projects still fail to take this into account. This is simply storing up trouble for the future.

“Meanwhile, other failings such as bias and fairness will mean that organizations won’t get full value from their ML investment – and could even end up with applications that are fundamentally inaccurate and therefore less than useless.

“The problem with any new technology is that developers and engineers are often focused on its potential for good, rather than worrying about dangers such as privacy. To maintain public trust in these technologies, it’s critical that we address these problems before machine learning applications come online,” Lorica concluded.

Source: Information Security Magazine