Employing Large Language Models to Enhance K-12 Students' Programming Debugging Skills, Computational Thinking, and Self-Efficacy
  • Shu-Jie Chen, East China Normal University (Corresponding Author: [email protected])
  • Chuang-Qi Chen

Abstract

Programming education is gaining attention at the K-12 level, and in the digital era computational thinking is regarded as a key skill. In the process of debugging programs, students not only fix code errors but also exercise and cultivate computational thinking. However, K-12 learners often lack confidence in debugging because they lack foundational knowledge and have difficulty obtaining effective feedback in the debugging environment. The emergence of large language models (LLMs) offers a new pathway for training novice programmers in debugging. This study applied these models to programming debugging and explored how they can support students' debugging skills, computational thinking, and self-efficacy. The research reveals that, through interaction with these models, students can solve programming problems more quickly and strengthen their computational thinking and problem-solving abilities in practice. More importantly, this type of interaction increased students' confidence in their own programming abilities and enhanced their persistence and motivation in the face of challenges. This study offers educators new perspectives, demonstrates the potential of large language models in programming instruction, and provides a valuable reference for future educational practice.
Submitted to Advance: 04 Jan 2024
Published in Advance: 01 Apr 2024