Group self-assessment is desirable and acceptable in collaborative learning
IJsbrand Kramer, INSERM 1026 - BioTis, University of Bordeaux, PO Box
45, 146 rue Léo Saignat, 33076 Bordeaux, France
Email: ijsbrandkramer@gmail.com
Key words: collaborative learning, ground rules, peer assessment,
rating, secondary schools, higher education, university
Running title: acceptability of group self-assessment
Mobile: 0033 (0)6 2131 8220
Summary
Collaborative learning is considered to be an effective learning method.
In principle, it promotes two key conditions for effective learning,
namely good (social) control of metacognition and internal regulation of
learning behaviour. Both require healthy social interaction in addition
to good cognitive engagement. This social interaction and cognitive
engagement cannot be imposed on groups, but can be fostered, among other
factors, by individual accountability. Group self-assessment can help to
achieve this. We have developed a group self-assessment procedure and
shown in a previous study that it steers students towards internal
regulation of behaviour (autonomous motivation). As we use the procedure
in many collaborative projects, the question is whether students accept
it and whether it has the intended effect on the group work experience.
An acceptance survey was developed for this purpose. Students’ responses
indicate that they find the self-assessment procedure convenient, that
the assessments are fair, that it improves the collaborative experience
and strengthens group beliefs.
Introduction
This paper describes a study, using a newly developed acceptance
survey, of how university students experience group self-assessment in
collaborative learning projects. Collaborative learning is considered to
be an effective learning method, provided that the group respects
appropriate internal dynamics (Johnson, Johnson & Smith, 1998; Springer,
Stanne & Donovan, 1999). A number of factors must be satisfied, such as
positive interdependence, individual accountability, face-to-face
promotive interaction, employment of social skills and group processing.
By collaborative we refer here to “the use of a self-contained task and
the focus on joint activity with the aim of creating shared
understanding” (Tolmie et al., 2010, p. 177). Indeed, the strength of
collaborative learning lies in
engaging in the co-construction of meaning, a process enabled by
transactive dialogue, also referred to as shared cognition (Blatchford
et al., 2003, 2006; Garrison & Akyol, 2015; Tolmie et al., 2010; Van den
Bossche et al., 2006). The term transactive implies a developmentally effective
dialogue. In fact, transactive dialogue promotes metacognition, i.e.
self-monitoring of learning with the help of others
(Flavell, 1979; Garrison & Akyol, 2015, Table 1, p. 69; Hadwin &
Oshige, 2011). Of course, the social regulation of learning in groups
goes beyond the individual characteristics of self-monitoring activities
and implies a dimension of social skills
(Iiskala et al., 2011). A lack of social skills, or the failure to use them,
hinders the process of transactive dialogue. Without all this, students
are better off working alone.
Besides its positive influence on metacognition, collaborative work can
also improve the quality of motivation among group members. This
concerns the sense of autonomy (sense of agency), competence (shared)
and relatedness that, according to Self-Determination Theory, are
important drivers of high-volition engagement in learning tasks
(internalized regulation) (Ryan & Deci, 2017, chapter 4). We find
internalized regulation essential for enjoyable and
productive education, as described in our previous studies
(Kramer et al., 2017; Kramer et al., 2022).
Three important conditions, among others, for a constructive transactive
dialogue are cognitive engagement, psychological safety
(Van den Bossche
et al., 2006) and the feeling that individual contributions are
recognized in the collective product
(Johnson, Johnson & Smith, 1998; Slavin, 1996). It
is therefore important to allow group members to assess their
collaborative engagement, with the possibility of linking this
assessment to an individual (project) score. In this way, individual
accountability and group processing are encouraged; two factors of the
internal dynamics mentioned above. In addition, if one of the
members does not collaborate despite all precautions, groups can be
comforted by a differentiating individual score (instilling a sense of
fairness).
We have developed an online group self-assessment procedure to meet
these conditions
(Kramer et al., 2022). It is in many ways similar to other
category-based group assessment procedures described previously
(Brown, 1995; Conway, 1993; Freeman & McKenzie, 2002 (SPARK); Ohland et
al., 2012 (CATME)), but with an important difference: students set their
own ground rules (Kramer et al., 2022; Kramer, 2024). This approach is
based on the arguments that groups should have
maximum autonomy (Self-Determination Theory)
(Ryan & Deci,
2000) and that groups that make forward-looking agreements about how
they will work together have been shown to be more focused and motivated
to make adjustments in group functioning
(DeChurch & Haas,
2008). As a third argument, making decisions about the standards of
performance and rating the quality of the performance in relation to
these standards strengthens learning
(Boud &
Falchikov, 2006). Besides all this, group self-assessment also
provides an opportunity, when applied during the collaborative task, for
instructor/leader intervention in the event of inappropriate group
functioning.
Research question
Assessing one’s peers and oneself requires some commitment, and although
there are theoretically important learning benefits
(Boud &
Falchikov, 2006), it seems unwise to expose students to this type of
activity involuntarily, as this would reduce the usefulness of
assessment (Van der
Vleuten, 1996). The research question is therefore whether students
find the process acceptable and whether they see the benefits that
experts envisaged. To this end, an acceptance survey was designed to
explore whether it was good to be assessed, whether it was fair to be
assessed by peers, whether the process (setting up the ground rules and
online voting) was easy to carry out, whether it changed group beliefs
and whether it influenced group work. The target group consists of
higher education students (18-23 years of age) involved in a
collaborative learning project of sufficient size and duration (at least
two weeks full-time).
Methods
Participants
Characteristics
Participants were university students at two levels: first and third
year. The first-year students (n = 88) were in preparatory classes for
entry to life sciences engineering schools (the “Grandes Écoles”) at a
French university. Their mean age was 18.98 years (SD = 0.46) and 70%
were female. These students participated in a collaborative science writing
blog project
(Kramer &
Kusurkar, 2017). The third-year students (n = 83) were predominantly
Dutch medical students and participated in a collaborative science
writing blog project at a Dutch or French university. Their mean age was
22.1 years (SD = 0.88) and 81% were female.
The self-assessment procedure
The self-assessment procedure comprises four stages. In the first stage,
after a brief introduction, the students define 5 to 7 ground rules for
productive collaboration. Students have been shown to be quite capable
of this (Kramer, 2024), and by letting them do it themselves there is
minimal interference from teachers (and a high degree of autonomy for
the group). About
halfway through the project, the groups carry out an initial
self-assessment. In the third stage, the teacher discusses the voting
results with the groups and offers help in case of conflict. The fourth
stage consists of a second group self-assessment, and the result is used
to calculate an individual project score (see also
https://groupworking.net/5-setting-the-ground-rules/).
The supervising teacher feeds the group composition and the 5-7 survey
questions proposed by the students into a software application. Access
to the application on a specific date is controlled by logins and
passwords. Students can vote on their smartphones. They vote for other
group members and for themselves. The individual score is calculated by
dividing individual ground-rule compliance by the average group
compliance. The resulting coefficient is then multiplied by the project
score. The application also provides information about the coherence of
the assessment, i.e. the extent to which one’s own view is consistent
with that of others. We reasoned that a realistic self-assessment is a
good measure of the extent to which someone is aware of his or her
functioning in the group
(Kramer et
al., 2022).
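The score calculation described above can be sketched as follows. This is a minimal illustration under the stated rule (individual compliance divided by mean group compliance, multiplied by the project score); the function name and example values are hypothetical, not taken from the actual application.

```python
# Hypothetical sketch of the individual-score calculation described above.
# Values and names are illustrative, not taken from the actual application.

def individual_score(own_compliance, group_compliances, project_score):
    """Weight the group project score by relative ground-rule compliance."""
    group_mean = sum(group_compliances) / len(group_compliances)
    coefficient = own_compliance / group_mean  # >1 for above-average compliance
    return coefficient * project_score

# Example: a member with average compliance 6.0, in a group whose mean
# compliance is 5.0, receives 1.2 times the project score.
print(individual_score(6.0, [6.0, 5.0, 4.0], 15.0))
```

A member whose compliance matches the group mean thus receives exactly the project score, while deviations shift the score up or down proportionally.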
Measures
We measured the acceptability of the self-assessment procedure, as well
as its impact on group working and group beliefs, using a 22-item
“acceptance survey” divided into 5 scales. The scales were
“convenience of voting procedure”, “principle of group assessment”,
“fairness of self-assessment (peer assessment)”, “impact on group
working” and “impact on group beliefs”. References that underpin the
items of the impact scales are: DeChurch & Haas, 2008; Karau & Williams,
1993; Kramer et al., 2022; Van den Bossche et al., 2006.
The survey was administered at the end of the collaborative projects,
during the project closure session in class. Students had online access
(Google “Forms”) and rated each item on a 7-point Likert scale. The
anchor points were: strongly disagree - disagree - somewhat disagree -
neutral - somewhat agree - agree - strongly agree. A number of
statements were negatively worded; their scores were reverse-coded to
positive for Figure 2 (to make the graph more compact). The internal
consistency of the scales was sufficient, with the exception of fairness
of peer assessment. Not all scales have the same number of participants.
A number of items were revised during a pilot period, which resulted in
the current survey.
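The reverse-coding of negatively worded items mentioned above can be sketched as follows; this is a standard mirroring around the scale midpoint, shown here as an assumed illustration rather than the study's actual processing script.

```python
# Illustrative reverse-coding of a negatively worded item on a 1-7
# Likert scale: agreement with a negative statement (7) maps to
# disagreement with its positive counterpart (1), and vice versa.

def reverse_code(score, scale_min=1, scale_max=7):
    """Mirror a Likert score around the midpoint of the scale."""
    return scale_min + scale_max - score

print([reverse_code(s) for s in [1, 4, 7]])  # [7, 4, 1]
```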
Data analysis
Consistency of the student replies to the different questions within
each scale was analysed using the Cronbach alpha reliability coefficient
and a Pearson r correlation analysis. Measures were made with the
DATAtab online statistics calculator (DATAtab, 2024). The students’
responses are presented in a horizontal bar chart to show the
distribution of opinions for each question.
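For readers unfamiliar with the reliability measure, the Cronbach alpha computation can be sketched as below. This is a minimal illustrative implementation of the standard formula (the study itself used the DATAtab calculator); the input is assumed to be rows of respondents by columns of items within one scale.

```python
# Minimal sketch of Cronbach's alpha for one survey scale.
# rows: list of respondent score lists (rows = respondents, columns = items).

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items in the scale
    n = len(rows)     # number of respondents

    def variance(values):
        """Sample variance (n - 1 denominator)."""
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    # Variance of each item across respondents, and of the summed scale.
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every respondent gives identical answers
# on both items) yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Alpha approaches 1 when items within a scale vary together across respondents, which is what "acceptable to very good consistency" refers to in the Results.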
Results
Reliability of the survey
The consistency and degree of correlation of the survey scales reveal
that all the scales, with the exception of “fairness of peer
assessment”, have acceptable to very good consistency (Cronbach values
in Figure 2). The Pearson correlation heatmap confirms this, with the
different scales being easily identifiable; there are clearly
correlating boxes. Again, fairness is the exception (Figure 2).