Date of Award

May 2020

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

School of Computing

Committee Member

Eileen T Kraemer

Committee Member

Lisa Benson

Committee Member

Kelly Caine

Committee Member

Murali Sitaraman

Abstract

Computer science (CS) is a popular but often challenging major for undergraduates. As the importance of computing in the US and world economies continues to grow, the demand for successful CS majors grows accordingly. However, retention rates are low, particularly for under-represented groups such as women and racial minorities. Computing education researchers have begun to investigate causes and explore interventions to improve the success of CS students, from K-12 through higher education. In the undergraduate CS context, for example, student difficulties with pointers, functions, loops, and control flow have been observed. We and others have used student responses to multiple-choice questions aimed at identifying misconceptions, engaged in retrospective examination of code samples and design artifacts, and conducted interviews in an attempt to understand the nature of these problems. Interventions to address these problems often apply evidence-based active learning techniques in CS classrooms as a way to engage students and improve learning.

In this work, I employ a human-centered approach, one in which the focus of data collection is on students' thought processes as evidenced in their speech and writing. I seek to determine what students are thinking not only through what can be surmised in retrospect from the artifacts they create, but also to gain insight into their thoughts as they engage in the design, implementation, and analysis of those artifacts and as they reflect on those processes and artifacts shortly after. For my dissertation work, I have conducted four studies:

1. a conceptual assessment survey asking students to "Please explain your reasoning" after each answer to code tracing/execution questions, followed by task-based interviews with a smaller, different group of students;

2. a "coding in the wild" think-aloud study that recorded the screen and audio of students as they implemented a simple program and explained their thought process;

3. interview analyses of student design diagrams/documentation in a software engineering course, asking students to explain their designs and comparing what they believed they had designed with what their submitted documentation actually shows.

These first three studies were formative, leading to key insights including the benefits students can gain from feedback, students' tendencies to avoid complexity when programming or when encountering concepts they do not fully grasp, the nature of student struggles with the planning stages of problem solving, and the fragile understanding that some students form of key CS concepts.

I leverage the benefits of feedback, using guided prompts based on the misconceptions uncovered in my formative studies, in a final, evaluative study. This study evaluates the learning gains from a guided feedback intervention for introductory programming concepts and compares those gains, along with the associated effort and resource costs, across two forms of feedback. The first is an active learning technique I developed and deem misconception-based feedback (MBF), in which peers working in pairs use prompts based on misconceptions to guide their discussion of a recently completed coding assignment. The second is a human autograder (HAG) group acting as a control. HAG simulates typical autograders, supplying test cases and correct solutions, but uses a human stand-in for a computer.

In both conditions, one student uses provided prompts to guide the discussion, and the other student responds to the prompts and interacts with their code accordingly. I captured screen and audio recordings of these discussions. Participants completed conceptual pre-tests and post-tests that asked them to explain their reasoning. I hypothesized that the MBF intervention would offer a valuable way to increase learning, address misconceptions, and engage students, that it would be feasible in CS courses of any size, and that it would have benefits over the HAG intervention. Results show that for questions involving parameter passing, specifically pass-by-reference versus pass-by-value semantics and particularly with pointers, there were significant improvements in learning outcomes for the MBF group but not the HAG group.
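To illustrate the concept behind that result (this sketch is illustrative only and is not drawn from the study materials), the misconception concerns the difference in C between passing a copy of a value and passing a pointer to the caller's variable:

    #include <stdio.h>

    /* Pass by value: the function receives a copy, so the caller's
       variable is unchanged. A common misconception is that the
       caller's variable is modified. */
    void set_to_ten_by_value(int x) {
        x = 10;
    }

    /* Simulated pass by reference: the function receives a pointer to
       the caller's variable and can modify it through that pointer. */
    void set_to_ten_by_pointer(int *x) {
        *x = 10;
    }

    int main(void) {
        int a = 1, b = 1;
        set_to_ten_by_value(a);    /* a is still 1 */
        set_to_ten_by_pointer(&b); /* b is now 10 */
        printf("a = %d, b = %d\n", a, b);
        return 0;
    }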
