Building an Ethical AI Framework in Social Sciences: Spotlight on FSU’s Hailey Kuang, PhD

An assistant professor of Measurement and Statistics at the Anne Spencer Daves College of Education, Health, and Human Sciences, Dr. Hailey Kuang has training in both computer science and psychometrics. Her experience finding ways to integrate these two fields has carried over into her current work with artificial intelligence (AI), as she and her colleagues explore ethical ways to incorporate AI use in the social sciences.

Kuang’s work with AI began in 2018 through her PhD research with the Virtual Learning Lab at the University of Florida and a summer internship at the American Institutes for Research. Across multiple projects, she focused on analyzing log data from online learning and assessment environments to better understand learners’ behavior. In the process, she noticed that many traditional assessment models were insufficient for capturing the complexity of learners’ behaviors and responses, especially in tech-enhanced learning contexts. At the same time, her coursework and research in machine learning showed the potential of AI methods to model those complex patterns.

In her work at FSU, Kuang focuses on adaptive assessment and personalized learning to improve instructional practices and support student achievement. She develops psychometric models that integrate facets of AI like machine learning, deep learning, and natural language processing to support fair and valid interpretations of learners’ performances.

What excites Kuang most about AI in higher education is its potential to better support teaching, learning, and decision-making at scale. She believes AI tools can expand research capacity by automating repetitive tasks that once required extensive manual labor, such as coding interview transcripts, analyzing classroom videos, or developing instructional materials. Reducing that time and effort allows faculty to focus on more meaningful engagement with students, gain richer insights into student learning, and provide more timely and personalized feedback.

Kuang argues strongly in favor of a “human-in-the-loop” framework, where AI assists with front-end, lower-level tasks like preliminary coding and feedback generation, while researchers and educators focus on higher-level work such as theory building, instructional design, mentoring, and policy interpretation.

She points to the concept of fairness in AI as a reason for continued human involvement. Fairness is often treated as a technical issue that can be addressed solely at the algorithmic level, but bias can arise throughout the AI lifecycle. Even when an AI system satisfies technical fairness criteria, it may still produce inequitable outcomes if the underlying data reflects long-standing social inequalities or if results are interpreted without appropriate social or historical context.

“Human oversight is crucial for ensuring the validity, rigor, and ethical use of AI in social sciences,” Kuang said. “Researchers and educators remain responsible for interpreting results, validating outputs, and addressing issues of bias, equity, privacy, and scalability while preserving human-centered, socially responsible inquiry.”

In her Machine Learning for Social Science course, Kuang integrates concepts of validity, rigor, and ethics. While students use real-world social science data and tools like RapidMiner to apply a variety of supervised and unsupervised machine learning techniques, they also develop an understanding of model interpretation, validation, fairness, bias, and ethical considerations. Kuang hopes that students walk away with a basic understanding of different types of AI tools (e.g., generative versus non-generative), how they work, and practical skills for using AI in academic and professional contexts.

These ideas carry over to her collaboration with Dr. Secil Caskurlu, an assistant professor of Instructional Systems and Learning Technologies at Anne’s College. They are working to develop an AI literacy framework centered on three components: knowledge of AI concepts, skills for using AI tools, and the ability to critically evaluate AI outputs and limitations.

“FSU faculty and staff can support this mission by creating opportunities for meaningful interaction with AI tools in teaching, research, and daily work, which also encourages reflection on how and why these tools are used,” she said. “This enables AI literacy initiatives to be shaped by real-world use cases.”

Kuang encourages FSU to continue supporting cross-disciplinary conversations and pilot initiatives that will allow researchers to explore AI-related ideas and collaborations.