A Conversation About Ethics and AI

By Ayva Strauss 

With the increasing prominence of artificial intelligence in our everyday lives, it is obvious that some kind of ethical constraint needs to be placed on these algorithms. What is less obvious is how exactly we go about imparting human morality to lines of computer code.  

Data scientist Cathy O’Neil visited Susquehanna University’s Stretansky Concert Hall earlier this month to address this very topic, in a speech titled “Auditing Algorithms.” O’Neil is the chief executive officer of O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) and the author of multiple books on the ethics of data science. The university invited her to address the student body and faculty as a distinguished guest lecturer in the sciences.  

Early on in the speech, O’Neil told the audience that terms like “AI” or “algorithms” are designed to intimidate—to make the general public feel as though the topic at hand is too complicated and best left to the professionals. Thus, the ORCAA CEO started by giving her definition of algorithms and artificial intelligence: computational tools that use past data to make predictions or decisions about what is likely to happen in the future. Simply put, algorithms “look for patterns in the past and then propagate them into the future,” she said. 
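
As a rough illustration of that definition, the short Python sketch below shows how such a tool can simply learn the most common past outcome and repeat it. The loan-approval data, attribute names, and logic here are invented for illustration; this is not code O’Neil presented.

```python
# A minimal, hypothetical sketch of what "look for patterns in the past and
# propagate them into the future" can mean: the model replays whatever
# outcomes dominated the historical data.

from collections import Counter

# Hypothetical past loan decisions, keyed by an applicant attribute.
past_decisions = [
    ("zip_code_A", "approved"),
    ("zip_code_A", "approved"),
    ("zip_code_A", "denied"),
    ("zip_code_B", "denied"),
    ("zip_code_B", "denied"),
]

def train(history):
    """Learn the most common past outcome for each attribute value."""
    outcomes = {}
    for attribute, decision in history:
        outcomes.setdefault(attribute, Counter())[decision] += 1
    return {attr: counts.most_common(1)[0][0] for attr, counts in outcomes.items()}

model = train(past_decisions)

# The "prediction" for a new applicant is just the historical pattern,
# carried forward -- including any bias baked into the past decisions.
print(model.get("zip_code_B"))  # -> "denied"
```
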

O’Neil went on to say that when she was growing up, bureaucratic processes required many more face-to-face interactions. A human being would tell you whether you were approved for a loan or got the job. Today, she explained, artificial intelligence models are making key decisions about these crucial matters, screening job and loan applicants and even helping determine prison sentences. O’Neil said that in some cases this automation has caused significant harm and put specific groups of people at a disadvantage. 

One problem with these algorithms, the ORCAA CEO contended, is that the companies that design them have different goals than the people who use them. She turned to Facebook as an example. “Facebook gets to decide what success for its news algorithm is,” she said. “And I’ll give you a hint as to what that looks like—it’s keeping us on Facebook for as long as possible.” 

O’Neil explained that for Meta Platforms Inc., the parent company of Facebook, more time spent on the app means more profit, which is the company’s definition of success. But that’s not necessarily the users’ definition of success, she went on; the company does not consider that more time on the app might be harmful to users’ general well-being. For the average person, more time spent on Facebook can translate into feelings of shame or hatred for the people around them, according to O’Neil. 

“We have different agendas, and whoever owns the algorithm gets to insert their agenda into it,” she added. 

In other cases, O’Neil said, it is not only conflicting definitions of success but the data itself that can lead to harmful algorithms. She told the story of Sarah Wysocki, a former teacher in the Washington, D.C., public school system. Wysocki’s school district fired her because an algorithm ranked her among the worst-performing teachers in her district, based on her students’ standardized test scores. 

However, O’Neil said, the algorithm that led to Wysocki’s firing did not take into account the fact that her students came to the third grade without knowing how to read or write, and so low standardized test scores were not necessarily reflective of her teaching ability. 

“This is a woman that was fired based on a score nobody could explain to her,” she said. She added that it was mostly teachers in poor neighborhoods of color who, like Wysocki, were being fired on the basis of this algorithm. O’Neil used this instance to demonstrate how artificial intelligence models can perpetuate inequality. 

When the stakes are as high as people’s daily welfare, jobs, or education, it is essential that data scientists come up with a way to ensure that their artificial intelligence models are not causing harm or perpetuating biases, O’Neil said.  

She believes she has found a solution to unfair algorithms—a method she calls the ethical matrix. “And it’s based on one question, which we ask over and over and over again, which is: ‘For whom does this fail?’” she said.  

The ORCAA CEO went on to explain that the ethical matrix is essentially a table, where the rows are everyone who might be affected by the algorithm, and the columns represent the problems an algorithm might pose for them.  

“What you’re going to do is come up with a list of stakeholders and ask them, directly, if possible, ‘What could go wrong for people like you?’” she said. The ethical matrix gives programmers a visual representation of the ethical problems their algorithms might pose for the people they affect.  
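
To make that structure concrete, here is a minimal Python sketch of such a table. The stakeholders, concerns, and risk labels are invented for illustration; O’Neil’s own matrices would be built from real conversations with the people in each row.

```python
# A minimal, hypothetical sketch of the kind of table O'Neil describes:
# rows are stakeholders, columns are the concerns an algorithm might raise,
# and each cell records how badly that combination could fail for that group.

stakeholders = ["loan applicants", "lender", "regulators"]
concerns = ["false denial", "privacy", "disparate impact"]

# Start every cell as an open question: "For whom does this fail?"
ethical_matrix = {
    stakeholder: {concern: "unknown" for concern in concerns}
    for stakeholder in stakeholders
}

# Fill in cells after talking to the people in each row (values are made up).
ethical_matrix["loan applicants"]["false denial"] = "high risk"
ethical_matrix["loan applicants"]["disparate impact"] = "high risk"
ethical_matrix["lender"]["false denial"] = "low risk"

# Print a simple view of the matrix.
header = "".join(f"{c:>18}" for c in concerns)
print(f"{'':>18}{header}")
for stakeholder, row in ethical_matrix.items():
    cells = "".join(f"{row[c]:>18}" for c in concerns)
    print(f"{stakeholder:>18}{cells}")
```
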

O’Neil emphasized that once an algorithm’s authors have this knowledge, there are two options. The obvious one is to find a solution before the algorithm is deployed. The other is to decide that the algorithm is better off abandoned altogether, because using it will only lead to unfairness. 

She closed her speech by highlighting that while technology professionals try to convince the general population that making ethical decisions around algorithms is complicated, in her view, it is rather simple. “This is not a mathematical conversation,” O’Neil said. “This is a conversation about fairness.” 
