In a recent article, WIRED senior writer Tom Simonite talked to Kate Crawford, author of Atlas of AI, to explore the ethical issues facing artificial intelligence and machine learning technologies.
“We’re relying on systems that don’t have the sort of safety rails you would expect for something so influential in everyday life,” notes Crawford. “There are tools actually causing harm that are completely unregulated.”
When people who aren’t in the industry hear me say that artificial intelligence and machine learning can become forces for positive change in society, they ask me to explain why these technologies have been mired in controversy for more than a decade, and why the ethical issues seem to be getting worse rather than better.
Indeed, in recent years several high-profile cases of ML technologies harming marginalized parts of society have captured headlines. Household names like Amazon, Apple, Facebook, and Google have been accused of algorithmic bias with real consequences for the people their products touch. As a result, there is a growing sentiment that systems designed to improve everything from people’s financial lives to their physical well-being have become a threat to populations that are already vulnerable.
To answer these questions, I draw from my own experience and observations in the field. There’s never one answer to such a complex set of issues. Still, one contributor is that, historically, the teams responsible for the systems that make millions of life-changing decisions every second have been largely homogeneous, built with little regard for whether they reflect society as a whole.
In other words, an entire generation of data scientists and engineers in our industry is building systems that affect segments of society they don’t understand.
To be fair, there’s nothing wrong with a company setting out to solve some of the world’s most vexing problems by hiring the best and brightest talent. However, unless there’s a conscious effort to build diversity into the fabric of an organization, the result is a pool of talent in the ML community that goes undiscovered and unnurtured, its skills and experience wasted.
More importantly, teams that lack a diversity of backgrounds, experiences, and perspectives not only perpetuate workplace inequality but also serve as a barrier to solving many of the problems that ML and AI technologies have the potential to address.
Although no single company or team can solve the ethical AI dilemma, the hope for all companies in our industry is that they embrace fairness, transparency, and accountability in their hiring and R&D processes so developments in AI advance positive outcomes for all people and societies.
Some recent developments suggest that there is good reason to believe many companies will adopt these principles. More and more organizations are investing in teams to ensure algorithmic accountability and ethics with an ultimate eye towards improving how their products impact the world.
Yet these steps are just the beginning. We know that building systems free from bias and ethical pitfalls is essential; achieving that goal, however, requires direct and sustained action.
In the world of AI and machine learning, we are quickly learning that data and models can obscure the hard truths of a person’s lived experience. This is particularly true when the models are built by teams that are not representative in terms of race, gender, sexual orientation, or socioeconomic status. If we imagine different outcomes, are willing to pursue them, and start with the people behind the products, we can create a new reality in which ML and AI technologies truly serve all people fairly and without harm.