Description
This study examines the effectiveness of written corrective feedback from ChatGPT and peer feedback in improving academic writing among English-major postgraduates at a university. While previous studies have explored AI-generated feedback or peer feedback separately, limited research has compared their effects in EFL academic writing contexts, particularly in Vietnam. The study involved 20 postgraduate students from two separate classes in the same MA English Language program. A quasi-experimental design was employed, in which one class revised their writing using peer feedback while the other used ChatGPT feedback. The revised essays were assessed by a blinded lecturer with a doctoral-level qualification, using an analytic scoring rubric developed by the faculty for research writing. An explanatory sequential mixed-methods approach was adopted: quantitative data from writing scores and survey responses were analyzed using descriptive statistics and independent-samples t-tests to identify differences in writing performance, followed by qualitative data from open-ended responses to provide further insight into learners’ experiences and perceptions. The results indicate that both types of feedback contributed to improvements in students’ academic writing. Peer feedback supported logical coherence, critical analysis, and audience awareness, whereas ChatGPT feedback was more effective in enhancing word choice, sentence structure, and grammatical accuracy. The ChatGPT group achieved significantly higher mean scores than the peer feedback group (p = .033). These findings suggest that ChatGPT feedback provides consistent, immediate, and accessible support for writing revision, whereas the effectiveness of peer feedback depends on learners’ linguistic proficiency and ability to give feedback.
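As a rough illustration of the quantitative analysis step described above (not the authors' code), an independent-samples t-test comparing the two groups' revised-essay scores could be run as in the following sketch; the score lists are hypothetical placeholders, not study data.

```python
# Minimal sketch of an independent-samples t-test comparing essay scores
# between the peer feedback and ChatGPT feedback groups.
# The scores below are hypothetical placeholders, not the study's data.
from scipy import stats

peer_feedback_scores = [6.5, 7.0, 6.0, 7.5, 6.5, 7.0, 6.0, 6.5, 7.0, 6.5]      # hypothetical
chatgpt_feedback_scores = [7.5, 8.0, 7.0, 7.5, 8.0, 7.0, 7.5, 8.0, 7.5, 7.0]   # hypothetical

# Compare group means; a p-value below .05 would indicate a statistically
# significant difference, as reported for the ChatGPT group in this study.
t_stat, p_value = stats.ttest_ind(chatgpt_feedback_scores, peer_feedback_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```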