Nick Byrd, Ph.D. @ByrdNick@nerdculture.de · last week

Do people learn more from #AI decision assistants? This experiment found no significant improvement in learning during or after three forms of AI-assisted decision-making, compared to human-only decision-making. https://doi.org/10.48550/arXiv.2602.10222 #edu #teaching #cogSci #eduTech #compSci

4 media

Image alt text: The four experimental conditions.

Image alt text: The two learning metrics.

Image alt text: "As shown in Figure 13, there is no significant difference across treatments for both the learning during intervention (F(3, 398) = 1.046, p = 0.372) and learning after intervention (F(3, 398) = 1.193, p = 0.312)."

Image alt text: "Figure 13: Comparisons on participants' learning (a) while receiving the AI assistance intervention (b) after receiving the AI assistance intervention. Values for the Human-only treatment are computed based on the normalized change of decision accuracy between corresponding sessions of tasks (learning during intervention: tasks 6–15 vs. tasks 1–5, learning after intervention: tasks 16–20 vs. tasks 1–5); they provide a baseline for organic learning happened due to repetitive task completion without AI assistance interventions. Error bars represent 95% confidence intervals of the mean values."

arXiv.org: Understanding the Effects of AI-Assisted Critical Thinking on Human-AI Decision Making

Despite the growing prevalence of human-AI decision making, the human-AI team's decision performance often remains suboptimal, partially due to insufficient examination of humans' own reasoning. In this paper, we explore designing AI systems that directly analyze humans' decision rationales and encourage critical reflection on their own decisions. We introduce the AI-Assisted Critical Thinking (AACT) framework, which leverages a domain-specific AI model's counterfactual analysis of human decisions to help decision-makers identify potential flaws in their decision arguments and support their correction. Through a case study on house price prediction, we find that AACT outperforms traditional AI-based decision support in reducing over-reliance on AI, though it also triggers higher cognitive load. Subgroup analysis reveals AACT can be particularly beneficial for some decision-makers, such as those very familiar with AI technologies. We conclude by discussing the practical implications of our findings, use cases and design choices of AACT, and considerations for using AI to facilitate critical thinking.
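The reported tests are one-way ANOVAs comparing a learning score across the four treatments, with F(3, 398) implying four groups and 402 scores in total. A minimal sketch of that style of comparison, using synthetic placeholder data (not the study's data; group labels are illustrative assumptions):

```python
# Sketch of a one-way ANOVA across four treatment groups, in the style
# of the paper's Figure 13 comparison. All numbers below are synthetic
# placeholders, NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "learning during intervention" scores; group sizes are
# chosen so that df match the reported F(3, 398): 402 participants total.
groups = {
    "Human-only": rng.normal(0.05, 0.2, 100),
    "AI-assisted (form 1)": rng.normal(0.06, 0.2, 100),
    "AI-assisted (form 2)": rng.normal(0.04, 0.2, 101),
    "AI-assisted (form 3)": rng.normal(0.07, 0.2, 101),
}

f_stat, p_value = stats.f_oneway(*groups.values())
# df_between = k - 1 = 3; df_within = N - k = 402 - 4 = 398
print(f"F(3, 398) = {f_stat:.3f}, p = {p_value:.3f}")
```

A p-value above 0.05 here, as in the paper's reported results, would mean the null hypothesis of equal mean learning across treatments is not rejected.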