Date of Award
January 2025
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Computer Science
First Advisor
Jielun Zhang
Abstract
As artificial intelligence (AI) systems increasingly shape how humans interact with digital environments, the need for transparency, security, and robustness in intelligent decision-making has become critical. This thesis explores how explainable and secure AI techniques can be integrated into modern human-computer interaction (HCI) systems to enhance trust, resilience, and alignment with human operators.
We present three related studies, each addressing a distinct challenge in the design of human-centered AI. First, we apply explainable AI (XAI) methods, specifically Local Interpretable Model-Agnostic Explanations (LIME), to deep learning (DL)-based CAPTCHA solvers. By interpreting model attention patterns, we uncover exploitable weaknesses in text CAPTCHA designs and propose improvements aimed at making human verification systems more transparent.
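A minimal sketch of this probing step, assuming a Keras-style solver: `captcha_model` and `char_image` are hypothetical stand-ins for the trained solver and a single character crop, and the LIME parameters shown are illustrative defaults rather than the thesis's exact configuration.

```python
# Sketch: probing a CAPTCHA-character classifier with LIME's image explainer.
# `captcha_model` and `char_image` are hypothetical placeholders; any model
# exposing batch probability output over HxWx3 float arrays works here.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Batch of HxWx3 arrays -> class-probability matrix."""
    return captcha_model.predict(images)  # assumed Keras-style model

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    char_image,       # single character crop, HxWx3 in [0, 1]
    predict_fn,
    top_labels=1,     # explain only the predicted character class
    hide_color=0,     # perturb by blanking superpixels to black
    num_samples=1000, # perturbed samples used to fit the local surrogate
)

# Superpixels that most support the prediction; high-weight regions lying
# outside the glyph strokes suggest the solver keys on background artifacts
# that a CAPTCHA designer could randomize away.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)  # boundary overlay for visual inspection
```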
Second, we introduce a unified framework for evaluating machine learning (ML) robustness under structured data-poisoning attacks. We assess model degradation across traditional classifiers, deep neural networks, Bayesian hybrids, and large language models (LLMs), using attacks such as label flipping, data corruption, and adversarial insertion. By incorporating LIME into our evaluation process, we move beyond accuracy scores to uncover attribution drift and internal failure patterns that are vital for building resilient AI systems.
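The sketch below illustrates the label-flipping attack and a naive attribution-drift check on synthetic data; the dataset, 20% flip rate, and summed-weight drift score are illustrative assumptions, not the exact experimental protocol of the study.

```python
# Sketch: label-flipping poisoning plus a LIME attribution-drift check.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Label flipping: invert the labels of a random fraction of training rows.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_pois = y_tr.copy()
y_pois[flip] = 1 - y_pois[flip]

clean = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
poisoned = RandomForestClassifier(random_state=0).fit(X_tr, y_pois)
print("accuracy drop:", clean.score(X_te, y_te) - poisoned.score(X_te, y_te))

# Attribution drift: LIME feature weights for the same test point under both
# models; large weight shifts expose internal failure even when accuracy
# barely moves.
explainer = LimeTabularExplainer(X_tr, mode="classification")

def lime_weights(model):
    exp = explainer.explain_instance(X_te[0], model.predict_proba, num_features=10)
    return dict(exp.as_map()[1])  # {feature index: weight} for class 1

w_clean, w_pois = lime_weights(clean), lime_weights(poisoned)
# Naive drift score (illustrative): total absolute change in feature weights.
drift = sum(abs(w_clean.get(i, 0.0) - w_pois.get(i, 0.0))
            for i in set(w_clean) | set(w_pois))
print("attribution drift:", drift)
```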
Third, we propose a justification generation system powered by LLMs for real-time automation. Using the Tennessee Eastman Process (TEP) dataset, we fine-tune a compact instruction-tuned model (FLAN-T5) to produce natural language explanations from structured sensor data. The results show that lightweight LLMs can be embedded into operator dashboards to deliver interpretable reasoning, enhance traceability, and support oversight in safety-sensitive settings.
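A minimal inference sketch of this pipeline, assuming a Hugging Face Transformers checkpoint: the checkpoint path, sensor names, and prompt template below are hypothetical placeholders, since the thesis's exact serialization format is not given here.

```python
# Sketch: serializing TEP-style sensor readings into a prompt for a
# fine-tuned FLAN-T5 justifier and generating a natural-language explanation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "path/to/flan-t5-small-tep-finetuned"  # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Hypothetical sensor state; real TEP features would be used in practice.
readings = {"reactor_pressure_kPa": 2705.0, "reactor_temp_C": 122.9, "feed_A_flow": 0.251}
prompt = (
    "Explain the control action for the following Tennessee Eastman Process "
    "sensor state: "
    + "; ".join(f"{name}={value}" for name, value in readings.items())
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A compact seq2seq model like this can run alongside a dashboard process, which is what makes the embedded, real-time justification scenario plausible.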
Together, these studies outline a framework for building AI systems that are not only capable but also transparent, secure, and human-aligned. This work advances the field of human-centered AI by emphasizing interpretability and robustness as foundational elements in the future of interactive intelligent systems.
Recommended Citation
Udoidiok, Ifiok, "Advancing Human-Computer Interaction Systems Through Explainable And Secure AI Integration" (2025). Theses and Dissertations. 7546.
https://commons.und.edu/theses/7546