Publications

For the full paper list (including pre-prints), please see Google Scholar.

2024

  1. PhD Thesis
    Effects of Machine-Learned Logic Theories on Human Comprehension in Machine-Human Teaching
    Lun Ai
    2024
    In my PhD, I aimed to bring accountability to the effects of human-AI interactions. The majority of AI systems remain frustratingly opaque to human comprehension. Explainable Artificial Intelligence seeks to bridge this gap by exploring AI systems that explain their inner structures, decisions, and functions to humans. Yet a critical challenge persists: without clear definitions and experimental procedures, there is no established framework for assessing how well people can comprehend AI. To address this issue, I developed a framework to evaluate human comprehension of AI explanations in the context of Inductive Logic Programming.

2023

  1. AAAI SSS23
    Human comprehensible active learning of genome-scale metabolic network
    L. Ai, S.-S. Liang, W.-Z. Dai, and 3 more authors
    AAAI 2023 Spring Symposium on Computational Approaches to Scientific Discovery, 2023
    I developed a new AI framework called ILP-iML1515 that can efficiently learn new gene functions and navigate large metabolic networks. This research was funded by the AI for Engineering Biology (AI-4-EB) project under BBSRC and led by Prof. Stephen Muggleton in the Department of Computing and Prof. Geoff Baldwin in the Department of Life Sciences at ICL.
  2. MLJ
    Explanatory machine learning for sequential human teaching
    L. Ai, J. Langer, S. H. Muggleton, and 1 more author
    Machine Learning, 2023
    I extended our framework to examine the effects of AI explanations (learned via Inductive Logic Programming) on human comprehension in sequential machine-human interactions. Our analysis demonstrated the potential of AI explanations to facilitate human discovery of computational algorithms and optimised problem-solving strategies.

2021

  1. MLJ
    Beneficial and harmful explanatory machine learning
    L. Ai, S. H. Muggleton, C. Hocquette, and 2 more authors
    Machine Learning, 2021
    I contributed to Explainable Artificial Intelligence by formulating our new framework to assess human comprehension of AI explanations (learned via Inductive Logic Programming) and by providing evidence of quantified changes in human comprehension.