
Five questions with … Yuhong Liu

A window into how scholarship at SCU connects academic excellence with a commitment to the common good.
November 7, 2025
By Lisa Robinson

Five Questions With … is a series of profiles that invites professors to share insights into their research and its impact. Rooted in the Jesuit tradition of curiosity, reflection, and service to others, this series offers a window into how scholarship at Santa Clara University connects academic excellence with a commitment to the common good.

Yuhong Liu is an Associate Professor in the Department of Computer Science and Engineering at Santa Clara University. A recipient of the 2019 Researcher of the Year Award from the School of Engineering and the 2013 University of Rhode Island Graduate School Excellence in Doctoral Research Award, she has published more than 100 papers in leading journals and conferences, two of which were recognized as Best Papers at IEEE Social Computing 2010 and UMEDIA 2016. Her research focuses on trust, security, and privacy in emerging technologies, including generative AI, the Internet of Things, and blockchain. She serves as an Associate Editor for several major journals, including IEEE Transactions on Services Computing and IEEE Transactions on Circuits and Systems for Video Technology, and is active in professional leadership through the IEEE Computer Society and APSIPA, where she currently chairs the APSIPA US Chapter.

What question or challenge is at the heart of your current work?

At the heart of my work is the question of how to establish and sustain trust among humans, AI, and autonomous systems within the broader field of trustworthy computing. As decision-making increasingly involves both people and intelligent systems, ensuring trust becomes more complex and requires models that can adapt to uncertainty, context, and evolving interactions. My research focuses on developing models that formalize and measure trust, and on applying them across domains to make trust more dynamic, context-aware, and resilient.

While my work extends beyond responsible AI, it is closely connected to it, especially as I explore how humans can effectively trust AI systems, how AI can understand and trust human intentions and values, and how autonomous systems can build and sustain mutual trust with one another. Ultimately, my goal is to advance trustworthy computing by enabling humans and AI to collaborate confidently and transparently in complex, real-world environments.

Why is this issue important for the world to address at this time?

This issue is important now because, with the rise of AI agents and cyber-physical systems such as autonomous vehicles and smart grids, decisions made by algorithms directly affect our daily lives. In the past, trustworthy computing focused on ensuring reliability, integrity, and confidentiality within digital systems. But today, those systems interact with the physical world, collecting data with sensors, making decisions with AI models, and sending control signals to act. This makes trust not only a technical issue but also a human one.

If we don’t understand how these AI models work, they become a black box, creating uncertainty and risk. People may either not trust these systems at all or trust them too much without understanding their limits. To build confidence and resilience, we need transparency and awareness so that trust can be properly established and sustained in the systems that increasingly shape our daily lives and even our safety.

Why have you chosen to dedicate your career to this research?

I began working on trust during my PhD, when my advisor and I studied online review systems such as Amazon and eBay. We were trying to understand how people build credibility in digital spaces and how to determine whether reviews were genuine or manipulated by bad actors. This led us to explore concepts like direct and indirect trust, drawn from sociology, to model how humans form trust through both personal experience and recommendations. I found this fascinating because it connected engineering with human behavior, showing that trust is not just technical, but is social and philosophical too.
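To make that distinction concrete, here is a minimal Python sketch of how direct and indirect trust are often combined in classic reputation systems; the specific functions, priors, and blending weights below are illustrative assumptions, not formulas from Liu's papers.

```python
# Illustrative sketch only: a beta-reputation-style direct trust score plus a
# recommendation-weighted indirect score, blended into an overall estimate.
# These formulas are common in the reputation-systems literature and are not
# taken from Prof. Liu's work.

def direct_trust(positive: int, negative: int) -> float:
    """First-hand trust from an agent's own interaction outcomes."""
    # Laplace-smoothed success rate: an agent with no history starts at 0.5.
    return (positive + 1) / (positive + negative + 2)

def indirect_trust(recommendations: list[tuple[float, float]]) -> float:
    """Second-hand trust aggregated from recommenders.

    Each tuple is (trust in the recommender, recommender's score for the
    target); more-trusted recommenders get proportionally more weight.
    """
    total_weight = sum(weight for weight, _ in recommendations)
    if total_weight == 0:
        return 0.5  # no usable recommendations: fall back to a neutral prior
    return sum(weight * score for weight, score in recommendations) / total_weight

def overall_trust(direct: float, indirect: float, alpha: float = 0.7) -> float:
    """Blend the two sources; alpha > 0.5 favors personal experience."""
    return alpha * direct + (1 - alpha) * indirect

# Example: 8 good and 2 bad personal interactions, plus two recommenders.
d = direct_trust(8, 2)                        # 9/12 = 0.75
i = indirect_trust([(0.9, 0.8), (0.4, 0.3)])  # 0.84/1.3 ≈ 0.646
print(round(overall_trust(d, i), 3))          # 0.7*0.75 + 0.3*0.646 ≈ 0.719
```

The blending weight captures the intuition Liu describes: trust formed through personal experience and trust formed through recommendations are distinct signals, and a robust model must weigh both.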

Over time, I expanded this research into broader domains, including social media and cyber-physical systems such as smart grids and electric vehicles. These systems now rely on AI to make decisions that directly affect our lives, such as when and how a vehicle should charge or how energy use is coordinated across networks. Each system has different goals, data, and users, which makes coordination and mutual trust essential. I have dedicated my career to this work because I believe trust is a bridge connecting technology, human values, and societal needs, and it is key to ensuring that intelligent systems work safely and responsibly with us.

How have your students impacted your research?

My students have had a great impact on my research. I work with students at all levels, including undergraduates, master's students, and Ph.D. students, and they all have different perspectives. Some focus more on hands-on, practical issues, while others bring in new information and papers on the latest models and technologies. Working with them keeps me learning all the time, and I truly feel lucky to work with such talented students. They give me the momentum and motivation to keep moving forward.

Many of my students also participate directly in research and publish their findings. For example, one undergraduate started in my data structures class in her first year and later came to me with an interest in conducting research. We worked together, and she received the Clare Boothe Luce Scholar Award that supported her studies and conference travel. We published a paper together at a conference, and she is now pursuing her graduate studies. Another undergraduate worked with me through a collaboration with eBay. He became the first author on a conference paper, presented it at eBay’s AI Week, and later won second place in an NVIDIA Hackathon. Seeing my students grow and succeed in their research is one of the best parts of my work.

Book cover of “Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams”

What is a book in your field that you think everyone should read?

Two books I recommend are “The Alignment Problem: Machine Learning and Human Values” by Brian Christian and “Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams.” Both explore how humans and AI can work together in a trustworthy way.

The first book explores how AI systems can be designed to align with human values, which is an important part of responsible AI. It addresses challenges such as bias in data and decision-making, and how we can ensure AI models reflect fairness and accountability. The second book focuses on trust and collaboration between humans and machines, which is a central theme of my research. As AI becomes more integrated into our lives, it is not only about how machines align with us, but also how we understand and respond to their influence on our decisions and behavior.
