
Applying AI to Social Issues

December 11, 2018

By Amélie Buc, 2015 Caroline D. Bradley Scholar

If someone had asked me before my ninth grade summer if I knew what Artificial Intelligence was, I would have talked about Ava in the film Ex Machina. Ava, though technically a robot, has a consciousness, morals of her own, and the ability both to manipulate emotion and to act on her own instinct. The film inspires unsettling uncertainty: if AI beings like Ava were to join society, would they be considered human? What role would we play in such a world?

In reality, however, AI is far from being as autonomous as Ava. Though pop culture often depicts AI as robots and superhumans, an AI algorithm can actually be as simple as a 5-line program that scans a massive amount of data, looking for patterns within it. Using these patterns, the program can make predictions about new, related data. For example, Google Photos was trained on millions of labelled photos, so that when given photos it had never seen before, it could categorize them by what they pictured.
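To make that idea concrete, here is a minimal sketch in Python of the pattern: train a model on labelled examples, then have it predict labels for data it has never seen. It assumes the scikit-learn library and uses its built-in digits dataset as a stand-in for labelled photos; this is an illustration of the idea, not how Google Photos actually works.

```python
# A minimal sketch, assuming scikit-learn is installed. The digits dataset
# (small labelled images of handwritten numbers) stands in for labelled photos.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = KNeighborsClassifier()      # classifies by comparing to known, labelled examples
model.fit(X_train, y_train)         # "training": look for patterns in the labelled data
print(model.predict(X_test[:5]))    # predict labels for images the model has never seen
print(model.score(X_test, y_test))  # fraction of unseen images it labels correctly
```

The fit-then-predict split is the whole point of the description above: learn patterns from labelled data, then generalize to new data.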

Although this explanation generalizes certain aspects of AI, what’s significant is that the data used to train AI algorithms is chosen by human programmers. This means that programmers can, even if unconsciously or inadvertently, introduce their racial and other biases into the algorithms they create. Sometimes the effect is innocuous, but often it institutionalizes those biases. For instance, in 2015, Google Photos’ image categorization software mislabeled photos of Black people as gorillas. That is only one infamous incident among many.

One source of the problem is the astounding lack of diversity in the AI industry; at Google, only 2.5% of the workforce is black, only 3.6% is Latinx and only 0.3% is Native American. The lack of true demographic representation in the AI industry is an issue that is under-acknowledged but increasingly pressing as the use of AI technologies propagates across industries and sectors, including law enforcement, healthcare and scientific research.

AI4ALL was founded by world-renowned Stanford professor Fei-Fei Li and then-graduate student Olga Russakovsky with the goal of educating high schoolers from underrepresented groups, including women, about AI and the humanitarian impact AI can have if done right. The organization’s aim is both to diversify the AI workforce and to show the ways in which AI can be applied to social issues.

In 2017, I participated in a two-week AI4ALL program at Stanford University, where close to thirty girls attended lectures by Stanford professors and other leaders in AI; we learned how AI is applied to everything from education to linguistics research to robotics. We were also taught how to program our own AI algorithms addressing a social issue: one group identified cancerous tissue with Computer Vision, a subfield of AI, and my group used another subfield, Natural Language Processing, to categorize Tweets from the time of a natural disaster into topics like “food,” “water,” “other” and “misc.” in order to expedite resource allocation for disaster relief. I learned from the experience that AI can be leveraged as a tool for social good and as a means of addressing community needs in unprecedented ways.
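For a sense of what that looked like in practice, here is a hedged sketch of a tiny text classifier of the same general kind, again assuming scikit-learn; the sample tweets, topic labels and model choice are invented for illustration and are not our actual project code or data.

```python
# A toy tweet-topic classifier, sketched for illustration only. The tweets and
# labels below are made up; a real system would need far more training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "We need drinking water at the shelter",
    "Families need food and hot meals downtown",
    "The bridge on 5th street is flooded",
    "Bottled water drop-off at the high school",
]
labels = ["water", "food", "misc", "water"]

# Turn each tweet into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Categorize a new tweet; with such tiny toy data the output is only illustrative.
print(model.predict(["Does anyone have extra food to donate?"]))
```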

AI4ALL has been one of the most enriching programs I’ve ever been able to participate in, and the alumni community is a group of some of the most talented yet approachable people I’ve ever met. I’ve made some of my closest friends through AI4ALL, and each is an inspiration; one founded a nonprofit STEM education organization that has taught over 500 children through over 70 workshops, and another started the Bay Area’s first all-girl high school hackathon. AI4ALL facilitates such efforts by providing a vast network of resources, support and funding.

In conclusion, I truly hope many members of the CDB community choose to apply to an AI4ALL program this summer. For those who are too old, there are still countless ways to begin understanding how to leverage AI algorithms for social change; AI4ALL is digitizing its curriculum on an OpenLearning platform that will be released in only a few months, and other platforms like Coursera and edX also offer online courses taught by university professors and AI experts.

AI could help us predict wildfires, end racial discrimination in the justice system and understand how to treat chronic illnesses. The industry just needs a diverse workforce – and you – to back it.

Amélie Buc is a 2015 Caroline D. Bradley Scholar. She is currently a junior at Trinity School in New York, New York. To learn more about the CDB Scholarship or apply for the 2019 class, visit the CDB webpage.