Dr Deepak Padmanabhan, a researcher at Queen’s University Belfast, has developed a new algorithm that helps make artificial intelligence (AI) fairer and less biased when processing data.
Dr Padmanabhan has been leading an international project, working with experts at the Indian Institute of Technology Madras to tackle the discrimination problem within clustering algorithms.
Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data. AI sorts through the data, grouping it to form a manageable number of clusters, which are groups of data with common characteristics. It is then much easier for an organisation to analyse manually and either shortlist or reject the entire group.
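To make the clustering step concrete, here is a minimal sketch of k-means, a standard clustering algorithm of the kind described above (this is a generic illustration, not the FairKM method; the data points and function name are invented for the example):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: group points into k clusters of similar items.

    A generic illustration of the clustering step described in the
    article, not the FairKM algorithm itself.
    """
    rng = random.Random(seed)
    # Start from k randomly chosen points as cluster centres.
    centres = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        # Update step: move each centre to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centres[i] = tuple(sum(dim) / len(members)
                                   for dim in zip(*members))
    return clusters, centres
```

Run on two well-separated groups of points, the algorithm recovers them as two clusters; an organisation would then inspect each cluster as a whole rather than every record individually.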
Dr Padmanabhan says: “AI techniques for data processing, known as clustering algorithms, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin. It is important that AI techniques be fair while aiding shortlisting decisions, to ensure that they do not discriminate on such attributes. When a company is faced with a process that involves lots of data, it is impossible to sift through it manually.
Clustering is commonly used in processes such as recruitment, where thousands of applications are submitted. While this cuts down the time spent sifting through large numbers of applications, there is a big catch: the clustering process is often observed to exacerbate workplace discrimination by producing clusters that are highly skewed.” The researcher has now created a method that, for the first time, can achieve fairness across many attributes at once. “Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process.
“In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in human resources. With a fairer process in place, selection committees can focus on other core job-related criteria,” says Dr Padmanabhan.
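The “skew” that fair clustering aims to remove can be made precise. One simple way (an illustrative measure, not FairKM’s actual objective; the function and data here are invented for the example) is to compare each cluster’s make-up on a sensitive attribute against the overall population:

```python
from collections import Counter

def cluster_skew(clusters, attribute):
    """Worst gap, per cluster, between a group's share in that cluster
    and its share in the whole dataset.

    `clusters` is a list of lists of records; `attribute` extracts the
    sensitive value (e.g. gender) from a record. A perfectly balanced
    clustering scores 0.0 everywhere. Illustrative only, not FairKM.
    """
    everyone = [r for c in clusters for r in c]
    overall = Counter(attribute(r) for r in everyone)
    n = len(everyone)
    skews = []
    for c in clusters:
        counts = Counter(attribute(r) for r in c)
        # Largest deviation of any group's in-cluster proportion
        # from its population-wide proportion.
        gap = max(abs(counts[g] / len(c) - overall[g] / n)
                  for g in overall)
        skews.append(gap)
    return skews
```

In a 50/50 population, a cluster containing only one gender scores 0.5; clusters mirroring the population score 0.0. A fair clustering algorithm seeks clusters that remain cohesive on job-related features while keeping such gaps small.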
“FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as proactive policing for crime prevention and the detection of suspicious activities. This, we believe, marks a significant step towards building fair machine learning algorithms that can deal with the demands of our modern democratic society.”
“Employing AI techniques directly on raw data results in biased insights, which influence public policy and could amplify existing disparities. In such scenarios, the uptake of fairer AI methods is critical, especially in the public sector.”
The research, which was conducted at Queen’s University’s Computer Science building, will be presented in Copenhagen in April 2020 at the EDBT 2020 conference, which is renowned for data science research.