Artificial Intelligence Has Become A Tool For Classifying And Ranking People

Last updated: 10-06-2019

Recommending content, powering chatbots, trading stocks, detecting medical conditions, driving cars: these are only a handful of the best-known uses of artificial intelligence. Yet there is one use that, despite sitting on the margins for much of AI's recent history, now threatens to grow significantly in prominence: AI's ability to classify and rank people, separating them according to whether they're 'good' or 'bad' for certain purposes.

At the moment, Western civilization hasn't reached the point where AI-based systems are used en masse to categorize us according to whether we're likely to be 'good' employees, 'good' customers, 'good' dates and 'good' citizens. Nonetheless, all available indicators suggest that we're moving in this direction, and that this is regardless of whether Western nations consciously decide to construct the kinds of social credit system currently being developed by China.

This risk was highlighted at the end of September, when it emerged that an AI-powered system was being used to screen job candidates in the UK for the first time. Developed by the US-based HireVue, it harnesses machine learning to evaluate the facial expressions, language and tone of voice of job applicants, who are filmed via smartphone or laptop and quizzed with an identical set of interview questions. HireVue's platform then filters out the 'best' applicants by comparing the 25,000 pieces of data taken from each applicant's video against those collected from the interviews of existing 'model' employees.

Needless to say, privacy advocates and academics aren't particularly keen on HireVue's "Hiring Intelligence Platform," which is used globally to conduct one million interviews and over 150,000 pre-hire assessments every 90 days. "It is going to favour people who are good at doing interviews on video and any data set will have biases in it which will rule out people who actually would have been great at the job," UCL's Anna Cox told the Daily Telegraph.

HireVue's technology is already here, but there are signs that similar systems may end up being wielded in other areas and sectors, to potentially harmful effect. At the end of September, Sky News reported that Academies Enterprise Trust, one of the biggest academy school chains in England, had begun monitoring the mental health of pupils at its 150 schools using a tool called AS Tracking, which uses artificial intelligence to predict self-harm, eating disorders and drug abuse.

On their own, such systems can be beneficial: the introduction of AS Tracking in one Essex school resulted in a 20% reduction in pupil self-harm. However, there's a genuine concern that once schools introduce AI classification and monitoring of pupils in one area, they'll apply it to others, eventually using it to select the 'best' students and refuse places to the 'worst.' This possibility is especially worrying in the case of private schools and, in the UK, academies, which critics argue have been turning schools into businesses.

"With these types of technologies there is a concern that they are implemented for one reason and later used for other reasons," said Carly Kind, director of the Ada Lovelace Institute, who was speaking to Sky News. "There is the scope for mission creep, where somebody in a school says this would be a great tool to sort children into different classrooms, or decide which students should go on to university and which shouldn't."

This is still a fear rather than a reality, but once AI is shown to improve outcomes (and, in certain cases, profits), basic economic imperatives will push rising numbers of schools to adopt AI-based classification more broadly. The same goes for businesses and organizations in general, as other recent cases have highlighted.

For example, the UK-based Trades Union Congress complained in April about the growing popularity of AI-based staff-monitoring tools such as Isaak, which as of April kept tabs on approximately 130,000 workers in the UK and abroad. Not only does Isaak record employee activity at the micro level (e.g. tracking whether or not they're typing), but it also ranks them according to how collaborative they are, using communication data to determine whether they're "influencers" or "change-makers." On top of this, it can compare employee activity data with sales performance figures, furnishing management with a detailed window into who's a 'good' worker and who isn't.

Similar examples can easily be found elsewhere. Montage is another applicant-filtering tool that utilizes artificial intelligence, and as of last October it was being used by around 100 Fortune 500 companies. A number of Fortune 500 companies also make use of Humanyze, which provides a wearable smart device that tracks employee behavior, conversations and movements, and that combines with data from emails and other digital communications to determine how engaged and productive any given worker is.

And to put this in perspective, a study published last year by Gartner found that analytical or AI-based employee-monitoring is currently being used by over 50% of companies that bring in more than $750m in annual revenues.

No doubt the identification of 'good' and 'bad' employees is already established practice in many of the world's biggest corporations, but it's likely we'll see the classification of people spread into other areas of society. Most ominously, a September report on "The Global Expansion of AI Surveillance," published by the Carnegie Endowment for International Peace, found that at least 75 of 176 countries surveyed are actively using AI technology for surveillance, a share that rises to 51% among "advanced democracies."

This is a startling percentage, and when it emerges that authorities in the United States, for instance, are already using AI to predict where crimes will happen and to maintain a facial recognition database of around half the adult population, the possibility that the use of AI could expand further becomes a very tangible one.

Given that the essential function of AI is to classify masses of data into simplified categories, and given that the vast majority of data comes from people, it's almost inevitable that AI-based tools will be used increasingly to classify people. This may begin as a mostly benign practice, but given the profit-motive and the logic of a competitive global economy, it will be very hard to resist the temptation to use AI in a growing number of domains.

The effect of this might be to increase the fortunes and profits of corporations, as well as to ease the task of governments and authorities in an age of rising political instability, but it's also clear that the widespread use of classificatory AI comes with massive social dangers. Numerous studies have already shown that AI algorithms are only as 'objective' and 'fair' as the data they're fed, and if little is done to change such bias, their deployment in the years to come might only amount to a means of perpetuating unequal and unjust social structures.