Abstract: Feedback can be defined by Irons (2008) as “any information, process or activity which affords or accelerates student learning based on comments relating to either formative or summative assessment activities” (p. 7). The current study aims to synthesize quantitative research studies to further explore the impact of feedback on academic achievement. Results indicated the overall summary effect to be moderate and statistically significant (Hedges’ g = .40), thus lending support to the notion that feedback, considered a best practice, positively influences academic achievement. Moderator results suggested that teacher-provided and content-specific feedback at the K-12 level positively impacted student performance in the academic discipline. However, further research is warranted to explore the construct.
Keywords: Feedback, academic achievement, meta-analysis
Summary (translated from Chinese) (Nalline S. Baliram & Jeffrey J. Youde: A Meta-Analytic Synthesis: Examining the Academic Impact of Feedback on Student Achievement): Feedback is defined by Irons (2008) as “any information, process or activity which affords or accelerates student learning based on comments relating to either formative or summative assessment activities” (p. 7). This study aims to synthesize quantitative research in order to further explore the impact of feedback on academic achievement. Results showed the overall effect to be moderate and statistically significant (Hedges’ g = .40), supporting the notion that feedback, as a best practice, can positively influence student achievement. The moderator results indicated that teacher-provided, content-specific feedback at the K-12 level had a positive impact on student performance in the academic discipline. However, further research is needed for verification.
Feedback, a component of formative assessment, is an important aspect of the current classroom learning environment. The Washington State Teacher Performance Assessment (TPA) requires every pre-certification teacher candidate to demonstrate evidence of classroom use of student reflection stemming from teacher or peer feedback. This implies that feedback strategies have the potential to enhance instruction and improve student learning. In this study, the investigators examine quantitative research studies on the impact on student achievement when feedback is integrated into the learning environment. More specifically, the investigators use a meta-analytic approach to examine the effectiveness of feedback as a classroom strategy. By collecting related quantitative studies and combining their findings into a calculated effect size, the overall impact of the classroom use of feedback can be determined.
Feedback is understood here as a crucial type of formative assessment that can help learners understand what they need to do to improve their learning as well as what was done well (Brookhart, 2008). Effective feedback should provide students with sufficient information on what to do next and should therefore enhance learning and academic achievement. Irons (2008) defined feedback as “any information, process or activity which affords or accelerates student learning based on comments relating to either formative or summative assessment activities” (p. 7). According to Brookhart (2008), effective feedback should be clear, age-appropriate, content specific, timely, and of high quality. John Hattie (2012) theorized feedback to be among the most powerful strategies that enhance achievement with an overall effect size of .79.
The impacts of feedback may depend on the nature of the feedback, since feedback for learning can take many forms. Feedback can be given collectively to a class, to a group of students, or to a single individual. Evaluative feedback provided by a teacher can be delivered in the form of grades and non-specific comments such as praise or criticism. Feedback has the potential to affect students’ sense of themselves and where they stand in relation to learning (Guskey & Marzano, 2003). However, according to Brookhart (2008), feedback is not always helpful. It may leave students feeling either good or bad about themselves, “without any sense of what is inspiring their feelings except the external symbol of their success or lack of it” (Guskey & Marzano, 2003, p. 90).
Descriptive feedback, when directly linked to learning, allows students to make explicit connections between their thinking and other possibilities that they should consider (ibid.). Descriptive feedback addresses misconceptions and gaps in understanding, and provides a way to suggest the next steps a student should take. Irons (2008) emphasized that feedback must be clearly used for the sake of improving learning.
As noted by Irons (2008), feedback may not be appropriately utilized because, according to Hounsell (1987), students do not always use feedback for improvement. For example, a student may not have been explicitly coached on how to utilize feedback effectively, the feedback given may not have contributed to student learning, or a student may be extrinsically motivated by grades or marks. Furthermore, students may not be allowed the opportunity to enter into dialogue or discourse about their feedback (Irons, 2008). According to Holmes and Smith (2003), feedback may emphasize a power relationship between teachers and students, especially if the teacher provides all the feedback without opportunity for dialogue. Kluger and DeNisi (1996) conducted a meta-analysis on feedback and found that in 50 of 131 well-designed studies, giving feedback actually made academic performance worse.
Quality of Feedback
Since the purpose of feedback is to enhance student learning and content understanding, what might differentiate effective feedback from ineffective feedback? One might argue that effective feedback focuses on the task, the process, and self-regulation. It is descriptive and will include positive feedback (praise) along with constructive criticism (Brookhart, 2008). However, teachers must be aware of their students’ abilities, learning needs, and interests when deciding how and what feedback to give (ibid.). To be useful to students, feedback must be relevant to the students’ reflection and learning process (Black & Wiliam, 1998; Guskey & Marzano, 2003). Additionally, feedback should be corrective in that it should allow students to troubleshoot their own performance or area in which they are struggling. Effective feedback can be individual or collective as long as it promotes deeper reflection and understanding of the content at hand. Brookhart (2008) proposed giving feedback in small steps to help students assimilate the information.
Timeliness of Feedback
Providing feedback in a timely manner enables the students to understand it and incorporate it in their learning (Brookhart, 2008). Some would argue that feedback needs to be provided within minutes of completing a task (Cowan, 2003). This may not be a realistic scenario in most larger classrooms, as it would most likely happen only during small group discussions, individual activities, and tutorials. Nevertheless, Brookhart (2008) emphasized that feedback needs to come while students are still mindful of the topic, assignment, or performance in question. In other words, feedback should come while students still have time to correct their errors. If feedback is given that is no longer relevant to current or future content, its effectiveness may be diminished.
Purpose of the Study
The purpose of this study is to determine the impact of teacher feedback on student academic achievement via quantitative research synthesis, or meta-analysis. This meta-analysis examined feedback given to students in grades K-12 and in higher education (HE) settings and included teacher-to-student and student-to-student feedback. The research question for this study is the following: Does feedback have an effect on student academic achievement? The investigators hypothesized that there would be a statistically significant difference in academic achievement for students who received feedback, when compared to those who did not receive feedback.
Review of Relevant Research Studies on Feedback
Research on feedback strategies dates back several decades. Although studies have suggested that feedback improves academic performance, many of these studies suffer from major limitations. For example, Butler and Nisan (1986) used a mixed-methods study to test the effects of different feedback conditions on performance and motivation. Although their findings suggested statistically significant positive results, there were several factors to consider. The sample consisted of sixth-graders. Generalizing the findings of this study beyond this age group is problematic, since students in various grade levels react differently to feedback received (Brookhart, 2008). Additionally, there was a time constraint involved: the students were given only a few minutes to review their feedback before moving forward to the next assessment. This time constraint may have impacted the validity of the findings.
In a study conducted by Siewert (2011), the researcher sought to determine whether fifth-graders with learning disabilities would be motivated to complete assignments when written feedback was provided within 24 hours (p. 20). The results of the study suggested that effective feedback given to students in a timely manner positively impacted student learning as well as their confidence in developing the ability to understand content knowledge. However, there were several issues with the methodology used in the study design. First, a small sample size (n = 22) was utilized. Second, only four out of the 22 students sampled required special education services, two students were identified as gifted, and the remaining 16 students were part of the general education program. Third, during the study, several students were frequently pulled out of the classroom for various reasons.
Núñez, Suárez, Rosário, Vallejo, Cerezo, and Valle (2015) examined the relationship between teacher feedback on homework and academic achievement. The sample included 454 students in grades 5 to 12 from three schools in northern Spain. The study sought to determine how teacher feedback impacted homework completion, the amount of time students spent on homework, and homework management leading to academic achievement. The findings suggested a positive and significant correlation between student perception of teacher feedback on homework and the quality and amount of homework the students completed. Moreover, the quality and amount of homework completed positively and significantly predicted academic achievement. According to student perceptions from the study, homework feedback from the teachers decreased significantly as grade levels increased.
High-quality studies involving feedback as a component of formative assessment suggest that when students are able to regulate their own progress by recognizing where the gaps between their desired goal and current knowledge may lie, feedback allows them to work toward obtaining the goal (Sadler, 1989). In a study conducted by Bangert-Drowns, Kulik, Kulik, & Morgan (1991), teacher-provided feedback on tests and homework was helpful to lower-achieving students because the comments focused on errors and included specific suggestions for improvement. With such feedback, students felt encouraged to focus their attention thoughtfully on the task rather than simply being concerned with getting the right answer.
In the current study, the investigators will conduct a meta-analysis that examines the impact of feedback on academic achievement in both K-12 and higher education settings. The study differentiates who provided the feedback to whom (teacher-to-student feedback versus student-to-student feedback) and identifies the types of feedback provided (content-specific feedback, praise and objective feedback). A central goal of this study is to further advance the body of knowledge regarding effective ways to provide students with feedback to improve student achievement and learning.
Meta-analysis is a form of research synthesis where an investigator searches for, collects, and synthesizes quantitative research on a topic. By synthesizing the experimental research on the impact that feedback has on student achievement, broader conclusions can be drawn. According to Rosenthal and DiMatteo (2001), a well-designed and executed meta-analysis can provide insight into the impact that a treatment has on a sampled population. Specifically, the present study seeks to quantify and calculate an overall effect size for a collection of related empirical research studies on several types of teacher-to-student feedback and student-to-student feedback in K-12 and higher education settings.
One advantage of conducting a meta-analysis is that it samples a much larger population than could be included in an individual experiment (Field & Gillett, 2010). Second, the inclusion of both published and unpublished research may yield a fuller picture of the impacts of a treatment or intervention, thus minimizing publication bias. Third, the traditional literature review may be biased in favor of studies that support a specific theoretical position or outlook, while the meta-analytic approach is likely to provide a less-biased view (Rosenthal & DiMatteo, 2001). Overall, the diverse range of studies included in a meta-analysis may provide cross-validation.
Meta-analyses do have their criticisms. The post-positivist or constructivist theoretician might criticize the reductive nature of quantitative research overall, especially when applied to schools. John Creswell, a proponent of mixed-methods research designs, argues that knowledge gained via experimental studies divorced from real-world contexts may lack applicability to real-world situations, such as a typical school classroom. If an experiment randomly assigns subjects to treatment and control groups, such a study lacks ecological validity since one would be unlikely to encounter a similar situation in a real-life context. Thus, the usefulness of knowledge gained from experimental studies is likely overstated when applied to classroom settings (Creswell, 2003).
While acknowledging these criticisms, such drawbacks can be minimized if one conducts a meta-analysis with robust design and implementation. According to Field and Gillett (2010), a properly conducted meta-analytic process has six steps: 1. Conduct a literature search; 2. Choose and apply search and inclusion criteria; 3. Calculate effect sizes for each included study; 4. Calculate meta-analysis effect size; 5. Do additional analysis; and 6. Write up the results (Field & Gillett, 2010, p. 666). The current study’s methodology follows this six-step process.
The investigators conducted an extensive search of the empirical literature examining the construct feedback. This literature included studies on teacher-to-student feedback and student-to-student feedback in both K-12 and higher education. These studies measured the impact of feedback on academic achievement, where student academic achievement was identified as the dependent variable. To locate these studies, the investigators carried out computer searches of three electronic databases: ERIC, Education Source, and PsycINFO. Search terms used included “Feedback” and “Academic Achievement” or “Academic Performance” or “Academic Success”. These criteria produced approximately 3000 results. Next, the researchers included additional parameters to narrow down the results. These parameters included peer-reviewed quantitative studies published in academic journals from 1960 to present, which narrowed the field to 419 studies for consideration. The researchers scrutinized each study to determine its suitability for inclusion in this meta-analysis. Additionally, the researchers sought to locate additional relevant studies by reviewing the reference lists of these and other studies.
Search and Inclusion Criteria
From the initial pool of 419 studies, a screening determined which ones were appropriate to include in this meta-analysis. The investigators limited the included studies to experimental and quasi-experimental studies that identified a comparison or control group and that compared students who received feedback to those who did not. Each study was required to report quantitative measurement that explained how feedback impacted academic achievement. Furthermore, studies had to report quantitative data, including mean and standard deviation for both the experimental and control/comparison groups, as well as group sample sizes. After screening for these requirements, the initial pool of 419 studies was reduced to eight studies. From these eight studies, the researchers were able to extract 26 viable sets of data for comparative analysis. Table 1 lists the data sets drawn from the selected studies.
Table 1: Data Sets
| Author (Year) | Data Set | Control Group | Experimental Group |
|---|---|---|---|
| Koenig et al. (2016) | A | No Feedback – Assessment 1 | Performance Feedback – Assessment 1 |
| Koenig et al. (2016) | B | No Feedback – Assessment 4 | Performance Feedback – Assessment 4 |
| Koenig et al. (2016) | C | No Feedback – Assessment 7 | Performance Feedback – Assessment 7 |
| Labuhn et al. (2010) | A | No Feedback – Standard 2 | Individual Feedback – Standard 2 |
| Labuhn et al. (2010) | B | No Feedback – Standard 2 | Social Comparison Feedback – Standard 2 |
| Labuhn et al. (2010) | C | No Feedback – Standard 1 | Individual Feedback – Standard 1 |
| Labuhn et al. (2010) | D | No Feedback – Standard 1 | Social Comparison Feedback – Standard 1 |
| Labuhn et al. (2010) | E | No Feedback – Standard 0 | Individual Feedback – Standard 0 |
| Labuhn et al. (2010) | F | No Feedback – Standard 0 | Social Comparison Feedback – Standard 0 |
| Butler & Nisan (1986) | A | No Feedback – Session 3 | Comments – Session 3 |
| Butler & Nisan (1986) | B | No Feedback – Session 3 | Grades – Session 3 |
| Hwang et al. (2016) | A | No Feedback | Feedback |
| Adiguzel et al. (2016) | A | Comparison – Text | Text & Video Feedback |
| Butler (1987) | A | No Feedback – High Level | Comments – High Level |
| Butler (1987) | B | No Feedback – High Level | Praise – High Level |
| Butler (1987) | C | No Feedback – High Level | Grades – High Level |
| Butler (1987) | D | No Feedback – Low Level | Comments – Low Level |
| Butler (1987) | E | No Feedback – Low Level | Praise – Low Level |
| Butler (1987) | F | No Feedback – Low Level | Grades – Low Level |
| Newman et al. (1974) | A | No Feedback | Immediate Feedback – Test |
| Newman et al. (1974) | B | No Feedback | One Day Delay – Test |
| Newman et al. (1974) | C | No Feedback | Seven Day Delay – Test |
| Newman et al. (1974) | D | No Feedback | Immediate Feedback – Retest |
| Newman et al. (1974) | E | No Feedback | One Day Delay – Retest |
| Newman et al. (1974) | F | No Feedback | Seven Day Delay – Retest |
| Paige (1966) | A | No Feedback | Feedback |
Calculating Effect Size
An effect size is a “standardized measure of the magnitude of observed effect” and reports an intervention’s impact in terms of standard deviation units (Field & Gillett, 2010, p. 668). This standardized measure allows different studies that may have measured different variables to be compared. Common effect measures include Cohen’s d, Glass’ delta (Δ), and Hedges’ g. Standardized effect sizes are calculated by dividing the difference in means by the pooled standard deviation of each condition. To measure a group difference, the mean difference is divided by the combined standard deviation, which yields the effect size (Ferguson, 2009).
These three measures of effect size have slight differences. For example, Cohen’s d uses a pooled standard deviation of the experimental and control groups. Since both groups are given equal weight in the Cohen’s d formula, differences in group sizes may skew the standard deviation, and thus the effect size. Cohen’s d also has the potential to overestimate the calculated effect size in small samples (Borenstein et al., 2009). To address differences in standard deviation between control and experimental groups, a researcher can use Glass’ Δ. Glass’ Δ uses the standard deviation of the control group only, since the control group’s standard deviation would likely be closer to that of the entire population than the experimental group’s (Ferguson, 2009).
An overestimation bias in small samples can be addressed by using Hedges’ g, which yields a less-biased estimate by using a pooled and weighted standard deviation (Borenstein et al., 2009, p. 27). All 26 data sets included in this meta-analysis reported measures of group differences, including mean, standard deviation, and sample size for treatment and control groups. In Table 2, the investigators calculated and reported the effect size for each data set using all three measures. However, only Hedges’ g was used for meta-analysis since this measure should yield a less-biased estimate.
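The three effect-size measures described above can be sketched in a few lines of Python. The group means, standard deviations, and sample sizes used in the example below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Pooled SD weights each group's variance by its degrees of freedom.
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def glass_delta(m1, m2, s_control):
    # Standardizes the mean difference by the control group's SD only.
    return (m1 - m2) / s_control

def hedges_g(m1, s1, n1, m2, s2, n2):
    # Hedges' g applies the small-sample correction factor J to Cohen's d,
    # shrinking the estimate slightly when total n is small.
    d = cohens_d(m1, s1, n1, m2, s2, n2)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j
```

For equal-sized groups with equal standard deviations, d and Δ coincide, while g is always slightly smaller than d because the correction factor J is below 1.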
Table 2: Effect sizes of data set
| Author (Year) | Data Set | Cohen’s d | Glass’ Δ | Hedges’ g |
|---|---|---|---|---|
| Koenig et al. (2016) | A | -0.15 | -0.14 | -0.15 |
| Koenig et al. (2016) | B | 0.61 | 0.65 | 0.60 |
| Koenig et al. (2016) | C | 0.67 | 0.78 | 0.66 |
| Labuhn et al. (2010) | A | -0.11 | -0.12 | -0.11 |
| Labuhn et al. (2010) | B | -0.15 | -0.17 | -0.15 |
| Labuhn et al. (2010) | C | -0.25 | -0.21 | -0.24 |
| Labuhn et al. (2010) | D | 0.06 | 0.05 | 0.06 |
| Labuhn et al. (2010) | E | 0.37 | 0.79 | 0.35 |
| Labuhn et al. (2010) | F | 0.98 | 2.22 | 0.93 |
| Butler & Nisan (1986) | A | 1.75 | 2.18 | 1.74 |
| Butler & Nisan (1986) | B | 0.24 | 0.25 | 0.24 |
| Hwang et al. (2016) | A | 0.67 | 0.68 | 0.66 |
| Adiguzel et al. (2016) | A | -0.13 | -0.21 | -0.11 |
| Newman et al. (1974) | A | -0.06 | -0.05 | -0.06 |
| Newman et al. (1974) | B | -0.20 | -0.18 | -0.19 |
| Newman et al. (1974) | C | 0.37 | 0.30 | 0.34 |
| Newman et al. (1974) | D | -0.15 | -0.13 | -0.15 |
| Newman et al. (1974) | E | -0.22 | -0.23 | -0.22 |
| Newman et al. (1974) | F | 0.43 | 0.37 | 0.40 |
Calculate Meta-Analysis Effect Size
Once a common effect size was calculated for each of the selected studies, the investigators calculated a combined meta-analysis effect size across all studies. Before this calculation, the investigators had to choose to view the results through the lens of either a fixed-effects model or a random-effects model. The investigators made this determination based on populations, sampling, study characteristics, and the overall conclusions they hoped to draw (Borenstein et al., 2009). A fixed-effects model is appropriate when similar research designs are used in the included studies; it assumes that all studies represent a population with a fixed effect size. Thus, any differences in effect sizes can be attributed to sampling error (Field & Gillett, 2010). Since the fixed-effects model generates a weighted average of effect-size estimates, each individual participant is considered to be the unit of analysis.
A random-effects model, in contrast, considers each study to be the unit of analysis, as not all studies have similar treatments, and not all are drawn from similar populations. Any differences observed in a random-effects model can be attributed to variations between included studies, as well as sampling error (Field & Gillett, 2010). In educational studies, these differences might include grade level, student socio-economic status, and teacher expertise.
In the fixed-effects model, included studies with larger sample sizes have a larger impact on the overall mean effect calculation, as these studies are assigned higher weights. A random-effects model also assigns weights proportionately, but across a much narrower range: studies with larger sample sizes are given relatively less weight, and no individual study dominates the overall summary effect (Borenstein et al., 2009). When drawing overall conclusions, a random-effects model allows broader conclusions to be drawn, as generalizing the effect size beyond the sampled populations is possible. Any inferences one might draw from a fixed-effects model are limited to the studies, and the populations, included in the analysis (Field & Gillett, 2010).
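The weighting contrast between the two models can be made concrete with a minimal sketch of the DerSimonian–Laird random-effects estimator, a common choice for this computation. The paper does not state which estimator Comprehensive Meta-Analysis uses internally, so this Python sketch is illustrative only:

```python
def random_effects_summary(effects, variances):
    """DerSimonian-Laird random-effects mean of per-study effect sizes."""
    # Fixed-effect (inverse-variance) weights and weighted mean.
    w_fixed = [1 / v for v in variances]
    sum_w = sum(w_fixed)
    mean_fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum_w
    # Q statistic measures observed between-study dispersion.
    q = sum(w * (e - mean_fixed) ** 2 for w, e in zip(w_fixed, effects))
    df = len(effects) - 1
    c = sum_w - sum(w**2 for w in w_fixed) / sum_w
    # tau^2: estimated between-study variance, floored at zero.
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to every study's variance, which
    # narrows the spread of weights relative to the fixed-effects model.
    w_rand = [1 / (v + tau2) for v in variances]
    mean_rand = sum(w * e for w, e in zip(w_rand, effects)) / sum(w_rand)
    return mean_rand, tau2
```

When the studies are homogeneous, tau² estimates to zero and the two models agree; as between-study variation grows, tau² grows and the weights flatten toward equality across studies.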
In the present study, the investigators chose a random-effects model to calculate the overall effect size. Although the selected studies share a common basic design (an experimental or treatment group receiving feedback compared to a control group that did not), between-study variations in research design make it unlikely that the studies could be considered functionally equivalent. It is more likely that differences among the studies affected their results; in other words, real differences in effect sizes exist across studies that are not based solely on sampling error. Therefore, a common effect size should not be assumed, and a random-effects model is justified. The use of a random-effects model also better allows generalizations to be drawn beyond the populations included, which may be useful for policy recommendations (Borenstein et al., 2009, pp. 83-84). The investigators used Comprehensive Meta-Analysis, Version 3, to analyze the effects of feedback on academic achievement across the 26 included data sets.
In addition to calculating the overall effect size of the 26 data sets, the investigators sought to explore effects of three moderator variables, including student grade level, provider of feedback and type of feedback. Student grade level was divided into two categories, K-12 and higher education. Provider of feedback was divided into two categories, teacher-to-student feedback and student-to-student feedback. Type of feedback was divided into three categories including content-specific, praise, and objective. Table 3 illustrates how each data set was categorized according to the moderator variables.
Table 3: Moderator Variables
| Author (Year) | Data Set | Student Grade Level | Provider of Feedback | Type of Feedback |
|---|---|---|---|---|
| Koenig et al. (2016) | A | K-12 | Teacher | Content Specific |
| Koenig et al. (2016) | B | K-12 | Teacher | Content Specific |
| Koenig et al. (2016) | C | K-12 | Teacher | Content Specific |
| Labuhn et al. (2010) | A | K-12 | Teacher | Objective |
| Labuhn et al. (2010) | B | K-12 | Teacher | Objective |
| Labuhn et al. (2010) | C | K-12 | Teacher | Objective |
| Labuhn et al. (2010) | D | K-12 | Teacher | Objective |
| Labuhn et al. (2010) | E | K-12 | Teacher | Objective |
| Labuhn et al. (2010) | F | K-12 | Teacher | Objective |
| Butler & Nisan (1986) | A | K-12 | Teacher | Content Specific |
| Butler & Nisan (1986) | B | K-12 | Teacher | Objective |
| Hwang et al. (2016) | A | K-12 | Peer | Objective |
| Adiguzel et al. (2016) | A | Higher Education | Peer | Content Specific |
| Butler (1987) | A | K-12 | Teacher | Content Specific |
| Butler (1987) | D | K-12 | Teacher | Content Specific |
| Newman et al. (1974) | A | Higher Education | Teacher | Objective |
| Newman et al. (1974) | B | Higher Education | Teacher | Objective |
| Newman et al. (1974) | C | Higher Education | Teacher | Objective |
| Newman et al. (1974) | D | Higher Education | Teacher | Objective |
| Newman et al. (1974) | E | Higher Education | Teacher | Objective |
| Newman et al. (1974) | F | Higher Education | Teacher | Objective |
| Paige (1966) | A | K-12 | Teacher | Content Specific |
According to Koenig et al. (2016), the content-specific feedback was based on the students’ performance and provided in both visual and oral formats. The researchers noted that “the visual presentation was in the form of a feedback page that was inserted into the writing packet. The oral presentation was completed by the experimenter who reviewed the information presented on the feedback page” (p. 282). The study by Adiguzel et al. (2016) was conducted at a Turkish university with freshman pre-service teachers in elementary and Turkish education programs. The students provided content-specific feedback to one another in the form of text and video. The content-specific feedback in the study conducted by Paige (1966) consisted of immediate feedback on the students’ work, including the opportunity for students to view the correctly worked-out problem.
The feedback provided to the treatment groups in the study conducted by Labuhn et al. (2010) was identified as objective feedback. In addition to providing a score, the experimenter told the students how many points several of the other students had scored; this was identified as social comparison feedback. Hwang et al. (2016) used student-to-student feedback: the students used an assessment rubric as a guide when providing a score to their peers. Newman et al. (1974) also used objective feedback in their study. Each test item was projected on the screen with the correct answer after students electronically answered the questions using clickers.
Butler and Nisan (1986) used both objective and content-specific feedback. In their study, objective feedback was in the form of a score, while the content-specific feedback was written and related to the task at hand. Similarly, Butler’s (1987) study used content-specific feedback and objective feedback along with praise. The content-specific feedback took the form of written comments, one sentence relating specifically to the student’s performance on each task. The praise provided to the students consisted of the phrase “very good”. Finally, a numerical score ranging from 40 to 99 was provided; this was considered a form of objective feedback.
Summary Overall Effect
The inclusion of all studies yielded a summary overall effect of Hedges’ g = 0.40. Tests of statistical significance indicate support for rejection of the null hypothesis (p = .003). Meta-analysis results were further analyzed for differences according to moderators, which included student grade level, provider of feedback, and types of feedback. The results from these analyses follow.
Student Grade Level
The studies included in this meta-analysis were divided into two grade level categories, kindergarten through high school (K-12) and college/university or higher education (HE). The summary effects for each level were analyzed individually. The meta-analysis for K-12 studies (n = 19) indicated a summary overall effect of Hedges’ g = .55. Tests of statistical significance indicate support for the rejection of the null hypothesis (p = .001). The meta-analysis for college/university or higher education studies (n = 7) indicated a summary overall effect of Hedges’ g = -.01, with statistical non-significance indicated (p = .911).
Provider of Feedback
The studies included in this meta-analysis were divided into two categories according to who provided the feedback and included teacher-to-student feedback or student-to-student feedback. The summary effects for each category of feedback provider were analyzed individually. The meta-analysis for teacher-provided feedback (n = 24) indicated a summary overall effect of Hedges’ g = .41. Tests of statistical significance indicated support for the rejection of the null hypothesis (p = .004). The meta-analysis for peer-provided feedback (n = 2) indicated a summary overall effect of Hedges’ g = .32 with statistical non-significance indicated (p = .395).
Type of Feedback
The studies included in this meta-analysis were divided into three categories according to the type of feedback provided: content-specific feedback, praise, or objective feedback (objective feedback included a numerical score, a letter grade, or an indication of whether the student response was right or wrong). The summary effects for each feedback type were analyzed individually. The meta-analysis for content-specific feedback (n = 8) indicated a summary overall effect of Hedges’ g = .91. Tests of statistical significance indicated support for the rejection of the null hypothesis (p = .003). The meta-analysis for praise feedback (n = 2) indicated a summary overall effect of Hedges’ g = .42. Tests of statistical significance indicated support for the rejection of the null hypothesis (p = .033). The meta-analysis for objective feedback (n = 16) indicated a summary overall effect of Hedges’ g = .13 with statistical non-significance indicated (p = .144).
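Each subgroup summary effect is a pooled average of study-level effect sizes. The paper does not state which pooling model was used, so the following is a minimal sketch of standard inverse-variance (fixed-effect) pooling with hypothetical study values, not the authors' actual computation:

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled effect size with a z test.

    effects: per-study Hedges' g values; variances: their sampling variances.
    Returns (pooled g, standard error, two-sided p-value).
    """
    weights = [1 / v for v in variances]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    z = pooled / se
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return pooled, se, p

# Hypothetical subgroup of three studies
g_pooled, se, p = pool_fixed_effect([0.55, 0.30, 0.45], [0.04, 0.02, 0.05])
```

A random-effects model (e.g., DerSimonian-Laird) would add a between-study variance component to each weight; with heterogeneous studies, as is typical in education research, that model is usually preferred.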
Table 4: Effect Sizes
| Moderator Variable | Category | n | Effect Size (Hedges’ g) | p-value |
|---|---|---|---|---|
| Student Grade Level | K-12 | 19 | .55 | .001* |
| Provider of Feedback | Teacher-to-student | 24 | .41 | .004* |
| Type of Feedback | Content-specific | 8 | .91 | .003* |

Note. * significance at the .05 level.
Summary Overall Effect
Results of this meta-analysis of quantitative research studies on the impact of feedback on academic achievement indicated an overall moderate and statistically significant effect size (Hedges’ g = .40, p = .003). According to Marzano’s model, this is equivalent to a 17-point percentile gain, suggesting that students receiving feedback will, on average, most likely outperform 67% of a comparable student sample receiving no feedback (Marzano Research, 2015). The results thus support the hypothesis that feedback positively impacts student achievement. However, with only 26 data sets drawn from eight studies, additional exploration of the impact of feedback at all levels of education is warranted.
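The percentile-gain conversion follows from the standard normal CDF: a student at the control-group mean (50th percentile) is expected to move to the percentile Φ(g). A minimal sketch, assuming normally distributed scores (the direct CDF value of roughly 16 points is in the neighborhood of the 17-point figure cited, which comes from a published lookup table):

```python
import math

def percentile_gain(g):
    """Percentile-point gain implied by an effect size, assuming normality.

    A student at the control-group mean (50th percentile) is expected to
    move to the percentile given by the normal CDF of g.
    """
    phi = 0.5 * (1 + math.erf(g / math.sqrt(2)))  # standard normal CDF
    return 100 * (phi - 0.5)

print(round(percentile_gain(0.40)))  # prints 16
```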
Student grade level. A moderate, positive, and statistically significant effect (Hedges’ g = .55) was calculated for the use of feedback at the K-12 level. Results suggest that students at the K-12 level may show positive impacts in academic achievement when provided with feedback. At the higher education level, the calculated effect size was statistically non-significant (p = .911) and thus inconclusive.
Provider of feedback. A moderate, positive, and statistically significant effect (Hedges’ g = .41) was calculated for the use of teacher-to-student feedback. Results suggest that students may show positive impacts in academic achievement when provided with teacher-to-student feedback. Student-to-student feedback results were statistically non-significant (p = .395) and thus inconclusive.
Type of feedback. A strong, positive, and statistically significant effect (Hedges’ g = .91) was calculated for the use of content-specific feedback. Results suggest that students may show positive impacts in academic achievement when provided with content-specific feedback. A moderate, positive, and statistically significant effect (Hedges’ g = .42) was indicated for the use of praise feedback. However, the small sample size (n = 2) gives these investigators pause when drawing further conclusions. Objective feedback results were both statistically non-significant (p = .14) and weak (Hedges’ g = .13).
The summary overall effect and the moderator effects in this meta-analysis suggest that feedback can have a positive impact on student achievement. Grade-level analysis suggests that students at the K-12 level may benefit the most from feedback. Feedback appears most effective when provided to a student by a teacher rather than by another student, although the peer-feedback sample was small. Content-specific feedback seems to provide the most positive impact on academic achievement.
These findings align with Brookhart’s (2008) recommendations for effective feedback (clear, age-appropriate, content-specific, timely, high quality). A teacher is likely the party best equipped to provide content-specific feedback that improves student understanding and thus achievement: a teacher’s mastery of the subject matter makes the feedback they provide deeper and more useful than what a student could offer a peer. However, the current structure of the school system and the time constraints of a typical school day may limit how much time a teacher can devote to individual teacher-to-student feedback.
Several areas of future research are suggested by this meta-analysis. Overall, more studies examining the impact of feedback on student achievement are needed. Studies examining the impact of student-to-student feedback are particularly needed; it would be especially useful to determine how to better equip students to provide feedback to their peers. Additionally, more quantitative research studies on feedback in college and university settings are called for to address a research gap identified by the current study.
- Adiguzel, T., Varank, I., Erkoç, M. F., & Buyukimdat, M. K. (2017). Examining a Web-Based Peer Feedback System in an Introductory Computer Literacy Course. In EURASIA Journal of Mathematics, Science & Technology Education, 13(1), pp. 237–251.
- Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. In Review of Educational Research, 61(2), pp. 213-238. URL: https://doi.org/10.3102/00346543061002213
- Black, P. & Wiliam, D. (1998). Assessment and classroom learning. In Assessment in Education, 5, pp. 7-74. URL: http://dx.doi.org/10.1080/0969595980050102
- Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis, 2nd ed. New York, NY: Russell Sage Foundation, pp. 221–235.
- Brookhart, S. (2008). How to give effective feedback to your students. Alexandria, VA: Association for Supervision and Curriculum Development.
- Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. In Journal of Educational Psychology, 79(4), pp. 474–482. URL: https://doi.org/10.1037/0022-0663.79.4.474
- Butler, R. & Nisan, M. (1986). Effects of no feedback, task-related comments, and grades on intrinsic motivation and performance. In Journal of Educational Psychology, 78, pp. 210-216.
- Cowan, J. (2003). Assessment for learning: Giving timely and effective feedback. In Exchange, 4, pp. 21–22.
- Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. (2nd ed.). Thousand Oaks: Sage.
- Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. In Professional Psychology: Research and Practice, 40(5), pp. 532–538. URL: https://doi.org/10.1037/a0015808
- Field, A. & Gillett, R. (2010). How to do a meta-analysis. In British Journal of Mathematical & Statistical Psychology, 63(3), pp. 665-694. URL: https://doi.org/10.1348/000711010X502733
- Guskey, T. R. & Marzano, R. J. (2003). Assessment as learning: Using classroom assessment to maximize student learning. Thousand Oaks, CA: Corwin Press, Inc.
- Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. London: Routledge.
- Holmes, L. & Smith, L. (2003). Student evaluations of faculty grading methods. In Journal of Education for Business, 78(6), pp. 318-323. URL: https://doi.org/10.1080/08832320309598620
- Hounsell, D. (1987). Essay writing and the quality of feedback. In J. T. E. Richardson, M. W. Eysenck, & D. W. Piper (Eds.), Student Learning: Research in Education and Cognitive Psychology. Milton Keynes: Open University Press.
- Hwang, G-J., Tu, N-T., & Wang, X-W. (2018). Creating Interactive E-Books through Learning by Design: The Impacts of Guided Peer-Feedback on Students’ Learning Achievements and Project Outcomes in Science Courses. In Journal of Educational Technology & Society, 21(1), pp. 25–36.
- Irons, A. (2008). Enhancing learning through formative assessment and feedback. New York, NY: Routledge.
- Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. In Psychological Bulletin, 119, pp. 254-284. URL: http://dx.doi.org/10.1037/0033-2909.119.2.254
- Koenig, E. A., Eckert, T. L., & Hier, B. O. (2016). Using Performance Feedback and Goal Setting to Improve Elementary Students’ Writing Fluency: A Randomized Controlled Trial. In School Psychology Review, 45(3), pp. 275–295. URL: https://doi.org/10.17105/SPR45-3.275-295
- Labuhn, A. S., Zimmerman, B. J., & Hasselhorn, M. (2010). Enhancing students’ self-regulation and mathematics performance: the influence of feedback and self-evaluative standards. In Metacognition & Learning, 5(2), pp. 173–194. URL: https://doi.org/10.1007/s11409-010-9056-2
- Marzano, R. J. (2009). Designing and Teaching Learning Goals and Objectives. Bloomington: Marzano Research Laboratory.
- Newman, M. I. (1974). Delay of information feedback in an applied setting: effects on initially learned and unlearned items. In Journal of Experimental Education, 42, pp. 55–59. URL: https://doi.org/10.1080/00220973.1974.11011494
- Nunez, J. C., Suarez, N., Rosario, P., Vallejo, G., Cerezo, R., & Valle, A. (2015). Teacher’s feedback on homework, homework-related behaviors, and academic achievement. In The Journal of Educational Research, 108, pp. 204-216. URL: https://doi.org/10.1080/00220671.2013.878298
- Paige, D. D. (1966). Learning while testing. In The Journal of Educational Research, 59(6), pp. 276–277. URL: https://doi.org/10.1080/00220671.1966.10883355
- Rosenthal, R., & DiMatteo, M. R. (2001). Meta-analysis: Recent developments in quantitative methods for literature reviews. In Annual Review of Psychology, 52(1), p. 59. URL: https://doi.org/10.1146/annurev.psych.52.1.59
- Sadler, D. R. (1998). Formative assessment: Revisiting the territory. In Assessment in Education: Principles, Policy & Practice (5), pp. 77–84. URL: https://doi.org/10.1080/0969595980050104
- Siewert, L. (2011). The effects of written teacher feedback on the academic achievement of fifth-grade students with learning challenges. In Preventing School Failure, 55, pp. 17-27. URL: http://dx.doi.org/10.1080/10459880903286771
About the Authors
Dr. Nalline S. Baliram: Assistant Professor of Teacher Education, Seattle Pacific University (Seattle, USA). E-mail: firstname.lastname@example.org
Dr. Jeffrey J. Youde: Adjunct Professor of Teacher Education, Seattle Pacific University (Seattle, USA). E-mail: email@example.com