5 Questions With Fernanda Eliott

Exploring the risks, potential, and challenges of artificial intelligence

Published: July 12, 2023

By Tim Schmitt

As artificial intelligence (AI) continues to develop and is used in everything from art to medicine, its impact on society, potential for abuse, and possible risks have all become serious topics of discussion. In our search for answers, we turned to Fernanda Eliott, assistant professor of computer science, who builds AI systems using cognitive inspiration and reinforcement learning (RL) techniques. Her overall goal is to develop state-of-the-art, cognitively inspired computational approaches that help explain where meaningful decisions come from, and to enhance human-technology partnerships by developing AI systems that promote inclusion and assist people.

Eliott began her academic career as an undergraduate philosophy student before moving into computer science. Rivaling her interests in philosophy and computer science these days is her intense appreciation of coffee. When she's not teaching or hosting discussions on AI over coffee with anyone who is interested, Eliott can be found roaming the halls of the computer science department in Noyce, mug of coffee and laptop in tow, or exploring coffee shops to engage in fun conversation or creative work.

Assistant Professor of Computer Science Fernanda Eliott. Eliott uses this slide in her classes to demonstrate her love of coffee to students.

Q: How has your study of philosophy affected your views on the development and use of AI?

A: My philosophy background influences how I think about, select, and approach problems. I remember that my philosophy professors at USP (University of São Paulo) emphasized structured text-reading skills, and I am so thankful for that! It was super helpful back then, and later as well, when I shifted to computer science and had to tackle technical content. And, of course, my background in philosophy guides my excitement for asking questions and being comfortable questioning even what feels obvious and set in stone. It certainly plays a role in why I am so passionate about thinking deeply about designing testbeds and assessing my computational frameworks: I get to ask and test many questions, which is so much fun! I do AI, but my philosophy-driven brain engine plays a vital role.

Q: Where do you think the greatest potential lies with the humanities' use of AI?

A: We are constantly creating so much content that keeping track of classical, fundamental references gets harder. It is so sad when people present ideas as if they were brand new when they actually came to light many years ago. Among many other beautiful and priceless contributions, the humanities enable us not to forget our journeys as humans and to ask where we are going and why.

Q: And what is the greatest risk or challenge posed by its use? 

A: One of the greatest challenges posed by careless use of AI is that it unmasks our arrogance and its consequences. Arrogance in presuming we understand the cultural diversity of our planet well enough to control AI and technology's impact on different communities. Or in assuming we understand other biological creatures well enough that whatever we know justifies old or current poor practices instead of consolidating widespread, accessible, and sustainable change. Or in presuming that replacing exposure to different opinions, and perhaps even to the unpleasant, will not impact our ability to creatively engage with others and to strengthen our empathy.

Q: How can we help society be better equipped to face the challenges posed by its use? 

A: An important way is to help the general population develop AI literacy competencies. A significant part of that is acknowledging how anthropomorphism influences the way people interpret AI, interact with it, and share news about it. For example, one of my MAP students drew inspiration from Synthetic Psychology to investigate how we interpret AI. In this research, we examined questions such as: Does a machine's complex behavior necessarily mean complex design? What is the role of anthropomorphism in interpreting AI systems? (Interestingly, this research was inspired by the first-year Tutorial I taught last fall.) Finally, it is essential to foster collaboration and communication across disciplines and cultures to cultivate systems-thinking skills and use them to apply humanity-centered design.

Q: You were named a Scialog Fellow last July. What work are you doing in that area and how is it progressing? 

A: For many years now, I have been investigating the computational modeling of sensations, emotions, feelings, and moral reasoning, as one of my goals is to understand decision-making. Currently, I am studying how intuitive cognition and analytical cognition assist decision-making. I am thrilled to have eleven students conducting research with me this summer. A subset of them is building 3D models as we investigate sense-making and clues of emotions and common sense in scenes that encode abstract flavor (such as subtle jokes or metaphors). An exciting finding so far has been a reflection on how to treat contexts that call for a "presupposed participant": scenes that expand their scope as they incorporate us, outside observers, as if we were part of the overall meaning.
