Artificial Intelligence, Algorithms, Machine Learning—AI is our friend…right?

Dec. 20, 2019 By Sarah Scheffler, Carlotta M. Arthur
"Only One Business Communication Textbook Covers Artificial Intelligence and Intelligent Technologies" by Bovee and Thill. Licensed under CC BY-NC-SA 2.0

Perhaps nothing in the world of technology today elicits more extreme responses than the topic of artificial intelligence, or “AI”—computer or computational systems that model human thought and behavior. The building blocks of AI are “algorithms”—sets of rules or instructions that a computer follows. Machine learning can be defined as “algorithms that can learn from data without relying on rules-based programming” (McKinsey & Co.). Some see AI as a panacea with the capacity to improve the human condition. For example, proponents of driverless cars point to their potential to significantly reduce traffic fatalities and to facilitate transportation for people living with mobility challenges. AI advocates in the health sector highlight algorithms that can diagnose human diseases as reliably as, or better than, human physicians.

Other respected voices, however, have sounded the alarm about AI’s potential for negative societal impact. In 2014, world-renowned theoretical physicist Dr. Stephen Hawking was quoted as saying, "The development of full artificial intelligence could spell the end of the human race." This perspective was echoed by technology entrepreneur Elon Musk. Although “full” or “artificial general intelligence” (AGI)—computational systems with the same capacity to understand, learn, and behave as humans—is still a long way from being realized, some AI systems are already having a negative societal impact. High-profile examples of biased AI make headlines regularly: algorithms that discriminate against African Americans in potentially life-threatening situations, or against women on credit applications, with damaging economic consequences.

This week, Sarah Scheffler, Clare Boothe Luce Graduate Fellow in Computer Science at Boston University, shares her views on this topic and highlights work that she and her colleagues are doing to study and address bias in AI. We are thrilled that promising young scientists like Sarah are taking on this critical STEM ethics and society issue!

By Carlotta M. Arthur, Ph.D.


“What sphinx of cement and aluminum…ate up their brains and imagination?…Moloch whose mind is pure machinery!” Howl, Allen Ginsberg (1955)

The study of algorithmic fairness arose from the observation that algorithms that are accurate on average can still be consistently wrong on a subgroup of the population. No algorithm is perfect; mistakes will always happen somewhere. This becomes a problem when the error is concentrated on one small chunk of the population, especially if that small group is otherwise disadvantaged or disempowered.
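To make this concrete, here is a minimal sketch in Python. The group sizes and error rates are invented purely for illustration, not drawn from any real system; the point is only how a strong overall accuracy number can hide a subgroup that an algorithm routinely gets wrong:

```python
import random

random.seed(0)

# Hypothetical error rates, chosen only for illustration: a classifier
# that is right 95% of the time on a majority group and 60% of the time
# on a minority group that makes up 10% of the population.
majority = [random.random() < 0.95 for _ in range(9_000)]
minority = [random.random() < 0.60 for _ in range(1_000)]

overall = majority + minority
print(f"overall accuracy:  {sum(overall) / len(overall):.1%}")    # ~91.5%
print(f"majority accuracy: {sum(majority) / len(majority):.1%}")  # ~95%
print(f"minority accuracy: {sum(minority) / len(minority):.1%}")  # ~60%
```

The headline number looks excellent, around 91 percent, yet for the minority group the algorithm is only modestly better than a coin flip.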

I see algorithmic fairness as part of a larger movement that I’ve dubbed “tech environmentalism.” The metaphor is apt: environmentalism is, at least in part, a reaction to externalities imposed on our health and environment by industries that did not directly bear the costs of those consequences. In the 20th century, the response to widespread use of harmful pesticides, elevated lead levels, and pollution was a mix of grassroots activism, policy changes, regulation, diplomacy, and new technology. That response was no panacea, but it resolved many environmental problems so thoroughly that they have since faded into the background.

More importantly, environmentalism significantly shifted how we weigh certain tradeoffs, and we should think similarly about the tradeoffs created by today’s technology. These tradeoffs matter more than ever because of the ability of algorithms to reach through space and time.

Because algorithms can be run again and again at almost no marginal cost, they become widely used. A social media company can only put so many human eyeballs on the posts on its website, but an algorithm can screen them all relatively cheaply, even if the posts are made by people in every country in the world. This is both a good and a bad thing: it enables people to do things that were impossible before the Internet, but it also means that “one size fits all” algorithms proliferate. For example, language creates a barrier for text analysis because the majority of natural language processing methods are designed to analyze (American) English.

Just as importantly, algorithms consume very little “fuel,” so once an automated system is put in place, it remains there until someone turns it off. This affects our security in the form of password systems written in the 1990s that still cap the length of your password. It affects the way we learn about the world through search engines that return results similar to things we’ve clicked before. And it immortalizes our views of social norms and constructions by, for example, offering six radio buttons for race and two radio buttons for gender. Changing these standards requires effort that no company is incentivized to spend.

The solutions are hard, and they’re a work in progress. I am proud to be researching ways to alleviate algorithmic bias, such as programming algorithms to recognize their own shortcomings and ask for human help. You can learn more about this work in my recent article in "The Conversation."
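As one illustration of that idea, here is a toy sketch in Python. It is my own construction of the common “defer below a confidence threshold” pattern, not the actual method from the research described above; the 0.8 cutoff and the scikit-learn model are assumptions made for the example:

```python
# A toy sketch of an algorithm that "asks for human help": any prediction
# whose confidence falls below an assumed threshold is routed to a person
# instead of being decided automatically.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression().fit(X, y)

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for automated decisions

for proba in model.predict_proba(X[:5]):
    confidence = proba.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        print(f"automated decision: class {proba.argmax()} "
              f"({confidence:.0%} confident)")
    else:
        print(f"deferred to human review ({confidence:.0%} confident)")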

I think the same tools we used to build environmentalism can help reduce the negative externalities of technology, and magnify the positive ones. With a mix of activism, regulation, and technology, I’m optimistic about the future.

By Sarah Scheffler


Author Bios

Carlotta M. Arthur, Ph.D.

Carlotta M. Arthur, Ph.D. directs the Clare Boothe Luce (CBL) Program at the Henry Luce Foundation. Since its inception in 1989, CBL has been one of the most significant sources of private support for women in science, mathematics and engineering in higher education in the U.S. Prior to joining Luce in 2012, Carlotta directed the Mellon Mays Undergraduate Fellowship and Diversity Initiatives Programs at the Andrew W. Mellon Foundation. The first African American woman to earn a B.S. in Metallurgical Engineering from Purdue University, she later completed a Ph.D. in Clinical Psychology (Psychophysiology) at SUNY Stony Brook. Carlotta was a member of the inaugural cohort of W.K. Kellogg Scholars in Health Disparities at the Harvard School of Public Health; an Assistant Professor at Meharry Medical College, an HBCU in Nashville, TN; and an Adjunct Assistant Professor at the Dartmouth Geisel School of Medicine. Dr. Arthur is a member of the Society of Women Engineers (SWE) Research Advisory Committee and sits on the American Psychological Association Leadership Institute for Women in Psychology Advisory Committee.

Sarah Scheffler

Clare Boothe Luce Graduate Fellow
Department of Computer Science, Boston University

Sarah Scheffler is a fourth-year PhD student in cryptography studying under Prof. Mayank Varia at Boston University. Her research spans fully technical cryptography and interdisciplinary work in computer science and law. Her cryptographic interests include multi-party computation and private set intersection; her interdisciplinary work covers compelled decryption and algorithmic fairness.


