
5–6 Aug 2025
HUFLIT University & HEW Center
Asia/Ho_Chi_Minh timezone

Comparative Analysis of AI vs. Human Feedback in Addressing Logical Fallacies in Argumentative Writing of Vietnamese Pre-service Teachers

Not scheduled
20m
Main Conference Hall (HUFLIT University & HEW Center)

806 Le Quang Dao, Mỹ Hoà 3 Hamlet, Tan Xuan Commune, Hóc Môn District, Hồ Chí Minh City, Vietnam
Harnessing AI in TESOL: Opportunities and Challenges

Speakers

Ms Nguyễn Thị Kim Ngân (Hanoi National University of Education)
Phạm Vũ Lê Mai (Hanoi National University of Education)

Description

Abstract
Logical fallacies are a common yet often overlooked problem in argumentative writing, significantly affecting learners' coherence, task achievement, and overall writing scores, particularly in high-stakes assessments such as the IELTS. This study investigates the comparative effectiveness of artificial intelligence (AI)-generated feedback (specifically, ChatGPT) and human feedback in identifying and addressing logical fallacies in argumentative essays written by pre-service teachers at a pedagogical university in Vietnam. Employing a mixed-methods approach, the research integrates quantitative and qualitative data from 50 B2/B2+ level students enrolled in a Reading-Writing 5 course, who are required to reach C1 level by the end of the course. Participants were randomly assigned to either an AI-generated feedback group or a human feedback group. Pre- and post-test assessments using mock IELTS Task 2 writing tests provided quantitative data, while qualitative insights were collected via participant questionnaires. Results indicated that both groups made progress, but the human feedback group showed slightly greater improvement in Task Achievement and in Coherence and Cohesion, the criteria most closely related to logical reasoning. Questionnaire responses revealed that AI-generated feedback identified logical fallacies quickly and often provided revised versions; however, the revisions were sometimes formulaic and difficult to interpret, limiting their effectiveness in resolving complex logical issues. In contrast, human feedback was clearer, more interactive, and more motivating for learners. These findings suggest that while AI tools can support surface-level identification of fallacies, human input remains essential for nuanced guidance. The study highlights the potential of a blended feedback model and calls for further development of AI tools to better support higher-order reasoning skills in academic writing.
Keywords: AI-Generated Feedback; Logical Fallacies; Human Feedback; Argumentative Writing; Pre-service Teachers; IELTS Writing
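
Note: the abstract does not describe how the ChatGPT feedback was elicited. The snippet below is a purely illustrative sketch, not the authors' procedure, of one way fallacy-focused feedback on an IELTS Task 2 essay could be requested through the OpenAI chat completions API; the model name, prompt wording, and the fallacy_feedback helper are hypothetical.

    # Illustrative only: one possible way to request fallacy-focused feedback
    # from a ChatGPT model via the OpenAI Python SDK. The model choice, prompt
    # wording, and helper name are assumptions, not details reported in the study.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def fallacy_feedback(essay_text: str) -> str:
        """Ask the model to flag and explain logical fallacies in a Task 2 essay."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; the abstract only says "ChatGPT"
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are a writing tutor. Identify any logical fallacies "
                        "in the essay, name each fallacy, quote the offending "
                        "sentence, and suggest a clearer revision."
                    ),
                },
                {"role": "user", "content": essay_text},
            ],
        )
        return response.choices[0].message.content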

Primary author

Phạm Vũ Lê Mai (Hanoi National University of Education)

Co-author

Ms Nguyễn Thị Kim Ngân (Hanoi National University of Education)

Presentation materials

There are no materials yet.