With the widespread use of machine learning (ML) algorithms in everyday life, it is essential to study the human aspects of these algorithms. ML algorithms are increasingly used to make critical decisions that influence our day-to-day lives: banks and credit rating agencies assess the default risk of individual customers; government agencies aim to improve public safety, health care, and education; last but not least, advertisers target specific groups to increase sales efficiency. All these ML processes interact with and affect human beings to an extent far beyond that of classic computer programs. It is necessary to study the implications of these processes for humans and vice versa.
The Human Aspects of ML group focuses primarily on two topics. The first concerns ethics and fairness in machine learning, with an emphasis on preventing bias and discrimination in algorithmic decision-making. The second explores ways of building teams of machine learning systems and humans that make better joint decisions.
One of the central research goals in machine learning is to design machines that can make decisions in much the same way humans do. However, machine learning algorithms can only make decisions based on the data they have been fed, and they therefore reproduce biases present in that data in their decision-making. This poses a challenge, particularly in areas where the use of machine learning and artificial intelligence impacts people's lives. For instance, many banks use algorithms to determine their customers' credit scores and thus to make decisions about loan applications. Because these algorithms are trained on historical data that can be biased and incomplete, their recommendations can be unfair or discriminatory. We design ML models that support fair and unbiased decisions.
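As a concrete illustration of the kind of disparity such models try to prevent, the sketch below measures a simple group-fairness statistic, the gap in approval rates between two demographic groups (demographic parity). The data, the group labels, and the flagging threshold mentioned in the comment are illustrative assumptions, not part of our models.

```python
# Hypothetical sketch: auditing loan decisions for demographic parity.
# All data below are made up for illustration.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rates between groups "a" and "b".

    decisions: list of 0/1 loan approvals
    groups:    list of group labels ("a" or "b"), aligned with decisions
    """
    rate = {}
    for g in ("a", "b"):
        approved = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(approved) / len(approved)
    return abs(rate["a"] - rate["b"])

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # a large gap would flag the model for review
```

Demographic parity is only one of several competing fairness criteria; which one is appropriate depends on the application.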
In addition to developing machine learning systems that make decisions based on principles of fairness, our group also explores methods that enable a collaborative synergy between humans and machines when they make joint decisions. For instance, in medical diagnosis, a physician might get help from a machine that analyzes all of a patient's historical data and predicts their medical needs. To build more effective hybrid human-ML systems, we develop meta-algorithms that shape the joint decision-making dynamic between humans and ML models.
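One simple form such a meta-algorithm can take is a deferral rule: the ML model handles cases where it is confident and routes the rest to the human expert. The sketch below is a minimal, hypothetical version of this idea; the confidence threshold and both toy predictors are illustrative assumptions, not our actual method.

```python
# Hypothetical sketch of a confidence-based deferral rule for a
# hybrid human-ML team. Threshold and predictors are made up.

def hybrid_decision(x, model_predict, human_predict, threshold=0.8):
    """Return (decision, decider) for input x.

    model_predict(x) -> (label, confidence in [0, 1])
    human_predict(x) -> label
    """
    label, confidence = model_predict(x)
    if confidence >= threshold:
        return label, "model"
    return human_predict(x), "human"

# Toy predictors: the model is confident only far from the decision boundary.
model = lambda x: (x > 0, 0.9 if abs(x) > 1 else 0.6)
human = lambda x: x >= 0

print(hybrid_decision(2.0, model, human))   # handled by the model
print(hybrid_decision(0.5, model, human))   # deferred to the human
```

Richer variants learn when to defer jointly with the predictor, taking the human's own strengths and error patterns into account.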
Pairwise Fairness for Ordinal Regression
Ordinal regression can be understood as multi-class classification over an ordered label set. For example, consider a hiring scenario where, given a job applicant's features, such as their prior experience or education, we want to predict a label in {bad, okay, good}.
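A pairwise view of fairness in this setting asks: among cross-group pairs of applicants whose true labels differ, does the predictor order the pair correctly equally often regardless of which group the truly better applicant belongs to? The sketch below computes such a per-group pairwise ordering accuracy; the toy data and the label encoding (bad=0, okay=1, good=2) are illustrative assumptions, not the paper's exact criterion.

```python
# Hypothetical sketch of a pairwise criterion for ordinal predictions.
# Labels are encoded bad=0, okay=1, good=2; all data are made up.

def cross_group_pair_accuracy(y_true, y_pred, groups):
    """Fraction of correctly ordered cross-group pairs, keyed by the
    group of the example with the higher true label."""
    stats = {"a": [0, 0], "b": [0, 0]}  # group -> [correct, total]
    n = len(y_true)
    for i in range(n):
        for j in range(n):
            # Only cross-group pairs where i is truly ranked above j.
            if groups[i] == groups[j] or y_true[i] <= y_true[j]:
                continue
            g = groups[i]
            stats[g][1] += 1
            if y_pred[i] > y_pred[j]:
                stats[g][0] += 1
    return {g: c / t for g, (c, t) in stats.items() if t}

y_true = [2, 1, 0, 2, 1, 0]          # good, okay, bad in each group
y_pred = [2, 1, 0, 1, 1, 0]          # the "good" applicant in group b is under-ranked
groups = ["a", "a", "a", "b", "b", "b"]
print(cross_group_pair_accuracy(y_true, y_pred, groups))
```

A large gap between the two groups' scores indicates that better-qualified applicants from one group are systematically ranked below less-qualified applicants from the other.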