A coaching bot for students learning coding, computational machine learning, and AI

Prof. Raj Rao Nadakuditi is developing a generative AI coaching bot that provides feedback to strengthen self-regulated learning skills.

Prof. Raj Rao Nadakuditi is developing a new kind of generative AI coaching bot that is designed to strengthen students’ critical thinking and self-regulated learning skills.

“The goal is to develop a bot that will guide students to understand the underlying conceptual framework of a problem in ways that let them self-solve other conceptually similar problems, rather than just tutoring them to solve the specific problem that led them to seek help in the first place,” Nadakuditi said.

Generative AI refers to a class of algorithms that can generate content such as text, images, and equations. From opening new possibilities in creative content industries to improving personalized medical plans for patients, generative AI has already made an impact on society. Many people are already experimenting with how it can enhance learning environments by acting as a personalized tutor for students.

However, by simply providing an answer to the specific query, a generative AI tutor misses an opportunity to help students build their metacognitive skills. Metacognition is an awareness of how one thinks, which is essential for developing strong critical thinking and problem-solving skills. These skills are particularly important in computational machine learning and AI, as practitioners are expected to leverage algorithms, statistical models, and computational techniques to uncover patterns, identify trends, and automate decision-making processes.

Instead, generative AI tutors could help sharpen students’ metacognitive skills by directing them to examine their thinking process. In this way, students can identify how they arrived at a wrong answer and adjust their process accordingly.

“This will be one of the first state-of-the-art generative AI applications that explicitly grounds metacognition in a computing education context,” Nadakuditi said. “We want to empower the next generation of scientists and engineers to transform raw domain-specific data into actionable knowledge, which fuels innovation and drives computationally aided discovery.”

Research has shown that timely, informative feedback can have a positive impact on students’ learning, which could make chatbots a promising tool in education. A survey of 3,017 high school and college students by Intelligent (2023) found that 85 percent of students preferred ChatGPT to studying with a tutor, especially for science and mathematics courses.

The danger is that chatbots are not infallible. They can provide plausible yet ultimately incorrect responses, a phenomenon known as “hallucinations.” To catch hallucinations, students may need to verify a chatbot’s work and explanations. However, there is no evidence that this kind of verification results in students mastering a conceptual understanding of computational machine learning.

By embedding an explicit method of prompting metacognitive practices in students, Nadakuditi believes the chatbot could replicate an effective personalized learning technique. The method consists of multiple steps that follow the coaching approach of “asking more, telling less,” which Nadakuditi has adapted to the context of coaching technical concepts; a short code sketch of the full loop follows the steps below:

Step 1: The learner is given a problem to solve and is then asked to describe how they arrived at a solution, so they can evaluate their own problem-solving process. The generative AI model then identifies conceptual misunderstandings that may lead to incorrect methods or answers.

Step 2: The model produces a similar example that illustrates the concepts to the learner and describes step-by-step how the problem was solved.

Step 3: The model generates a new problem, and the learner is asked to reflect on their thinking process as they solve it.

Step 4: The model generates yet another example containing an error similar to the one the learner was stuck on, but this time the learner has to debug it to demonstrate their understanding of the concept.

Step 5: The learner revisits the original problem and reflects on how they approach it now, in the hope that, armed with a new understanding of the concepts, they can solve it themselves.
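
To make the loop concrete, here is a minimal Python sketch of how these five steps could be strung together. It is an illustrative outline only: the generate() stub, the prompts, and the example problem are hypothetical placeholders, not the interface or content of Nadakuditi’s system, and a real implementation would call an actual generative AI model and track the learner’s responses over time.

# A minimal sketch of the five-step "asking more, telling less" coaching loop.
# The generate() stub stands in for any text-generation backend; it is a
# hypothetical placeholder, not the project's actual model interface.

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative AI model."""
    return f"[model response to: {prompt[:60]}...]"

def coach(problem: str) -> None:
    # Step 1: collect the learner's attempt and reasoning, then prompt the
    # model to flag conceptual misunderstandings behind any errors.
    attempt = input(f"Problem: {problem}\nYour solution and reasoning: ")
    diagnosis = generate(
        f"Identify conceptual misunderstandings in this attempt: {attempt}"
    )
    print("Coach's diagnosis:", diagnosis)

    # Step 2: request a similar worked example, solved step by step.
    worked_example = generate(
        f"Produce a problem similar to '{problem}' and solve it step by step, "
        f"addressing this misconception: {diagnosis}"
    )
    print("Worked example:", worked_example)

    # Step 3: generate a fresh practice problem; the learner is asked to
    # reflect on their thinking process while solving it.
    practice = generate(f"Generate a new problem on the same concept as: {problem}")
    input(f"Try this one, describing your thinking as you go: {practice}\n> ")

    # Step 4: generate a solution that contains a similar error for the
    # learner to find and fix.
    buggy = generate(
        f"Write a solution to a problem like '{problem}' that contains an "
        f"error similar to: {diagnosis}"
    )
    input(f"Find and fix the error in this solution: {buggy}\n> ")

    # Step 5: the learner returns to the original problem and solves it
    # themselves with their new understanding.
    input(f"Now revisit the original problem and solve it yourself: {problem}\n> ")

if __name__ == "__main__":
    coach("Compute the rank of a 3x3 matrix whose rows are linearly dependent.")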

“A system with this transparent educational grounding can serve as a model for similar efforts within computational fields and beyond,” Nadakuditi said.

Nadakuditi’s team includes a teenage alum of his Continuum online coding course, as well as David Reeping, an assistant professor at the University of Cincinnati who was formerly a postdoctoral researcher at U-M working with Prof. Cindy Finelli. The research received funding from the Michigan Institute for Computational Discovery and Engineering, a unit within the Office of the Vice President for Research, and a seed grant from the ECE Chair.