Assessing Student Participation at School: Developing a Multidimensional Scale

International Journal of Student Voice

A peer-reviewed, independent, open-access journal

Pennsylvania State University

Volume 5 | IJSV | September 2019

Assessing Student Participation at School: Developing a Multidimensional Scale

Donnah L. Anderson, Charles Sturt University, Australia

Anne P. Graham, Southern Cross University, Australia

Nigel P. Thomas, University of Central Lancashire, UK

Citation: Anderson, D. L., Graham, A. P., & Thomas, N. P. (2019). Assessing student participation at school: Developing a multidimensional scale. International Journal of Student Voice, 5(1).

Abstract: In the past few years there has been a growing interest in student participation at school, and in whether participation is connected with student wellbeing or with academic success. One problem when studying student participation is that it seems to mean different things to different people. For some people it is just about students attending school and going to lessons. For others it is about students making decisions about things that matter to them, or being part of “student voice” activities at school. Another problem is that we do not have good ways to measure how well schools are doing at student participation, with tools that take account of the different ways that students can participate. This article reports how a new tool has been created to measure student participation. The new tool is called the Student Participation Scale. It was created in New South Wales (NSW), Australia. The researchers read books and articles on student participation. They also talked to school staff and students to find out what student participation meant to them, and they asked them about what questions should go into the tool. Once they created the Student Participation Scale, the researchers tested it on 1,435 secondary school students. The Scale asks 38 questions to measure six types or “elements” of student participation:

  1. Students working together with peers and school staff,
  2. Students having a voice about schooling,
  3. Students having a say with influential people at school,
  4. Students having influence on decisions made at school,
  5. Students having a voice about school activities outside of the classroom, and
  6. Students having choice.

These elements of student participation were the same for boys and girls, for different grade or year groups, for students who spoke English as a second language, for students from an Indigenous background, and for students with a disability. The Scale was also consistent and valid. That is, it measured what the researchers said it would measure. The Student Participation Scale is easy and free for schools to use. It can be used to measure which elements of participation are happening most, and which ones schools might try to improve. There is also a guidebook that has instructions and tips for using the Scale in schools.

Keywords: student voice, student participation, survey.

Online Discussion Questions:

  • The authors identified six important elements of participation. Do you think they identified the right elements? 
  • The Student Participation Scale has 38 questions (see column 1 of Table 2). Could the wording of any of these questions be improved?
  • Discuss whether you could be confident in the results from the Student Participation Scale, given the process involved in developing it and the items it measures.
  • How could the Student Participation Scale be used for further research and/or to help improve practice in schools?

Introduction

In recent years there has been an increasing international focus in policy, practice, and research on student participation at school. Several factors are driving this interest, not least of which is a children’s rights agenda, specifically informed by Article 12 of the United Nations Convention on the Rights of the Child (UNCRC; United Nations, 1989). Article 12 states that children have a right to voice their opinions in decisions that affect them and to have their opinions given due weight in such decisions. Many contemporary developments within education policy and practice align with such an emphasis, including personalized learning and the rise of student-centered pedagogies (Whitty & Wisby, 2007). Additionally, there is a growing international evidence base that links student participation at school with improved outcomes regarding life skills, self-esteem, social status, democratic skills, citizenship, student-adult relationships, school ethos (Holdsworth, 2000; Mager & Nowak, 2012), student health and wellbeing (de Roiste, Kelly, Molcho, Gavin, & Nic Gabhainn, 2012), and agency, belonging, and competence (Mitra, 2004). Paradoxically, while there has been strong interest in improving student participation in schools, understandings of what student participation involves; what its constituent elements are; and how schools should measure, assess, and monitor their progress in relation to student participation have been less clearly articulated.

Student participation has been defined in various ways in scholarly and applied literature and can mean anything from mere attendance at school (just “being there”), to “taking part” in classroom and extra-curricular activities, to “having a say” about topics that concern the individual young person (Holdsworth, 2000, pp. 354-355). Such definitional ambiguity has meant that any clear conceptualization of student participation has been elusive and, in practical terms, has inhibited the operationalization of the construct for measurement or assessment purposes.

This article reports on one component of a large mixed-methods research project focused on understanding student participation and wellbeing at school. In particular, it reports on the development of a reliable and valid scale, the Student Participation Scale (SPS), which measures elements of student participation at school.

Measuring Children’s and Young People’s Participation

Having a framework and tools for assessing and measuring the impact of participation on children and young people themselves; on institutions; and on policies, services, and communities is critically important (Crowley & Skeels, 2009). Such evidence is necessary for progressing implementation of Article 12 of the UNCRC (Skeels & Thomas, 2007). In their systematic review of the effects of student participation in decision making at school, Mager and Nowak (2012) concluded that more comprehensive, high-quality research is needed. We agree with this argument. However, to measure the impact of participation on various outcomes using quantitative designs, such as longitudinal and control group designs (Kirby & Bryson, 2002), the first step is to create a valid and reliable scale to measure student participation. As shown in the review that follows, any such scale needs to include multiple items measuring the elements of participation.

Models of Children’s and Young People’s Participation

There have been numerous conceptual models of children’s and young people’s participation discussed in the literature. Despite their different emphases, many models have in common a structure which either encompasses multiple spaces in which authentic participation takes place or posits multiple constituent elements of participation.

For example, Holdsworth (2000) argues that student participation occurs through two major spaces: (1) in school governance, for example school councils, committees or boards, and student representative councils; and (2) in curriculum, for example classroom learning partnerships and student participation strategies or projects. Mannion, Sowerby and I’Anson (2015) reported four spaces where student participation occurs: (1) the formal curriculum, (2) the extended curriculum or extra curriculum, (3) decision-making groups, and (4) informal contact among peers and adults.

Other models posit multiple elements of participation structured in various hierarchies of participation (e.g., Hart, 1997; Holdsworth, 2000; Lundy, 2007; Mitra, 2005; Shier, 2001; see also Thomas, 2007). As shown in Figure 1, these models include Mitra’s (2005) three elements of the “pyramid of voice,” Lundy’s (2007) four elements of participation, Shier’s (2001) five pathways to participation, Holdsworth’s (2000) six-rung student participation ladder, and Hart’s (1997) eight rungs of participation. Common to all five models are notions of young people having opportunities for voice, being listened to and heard, having their views influence decisions, and working collaboratively and sharing leadership or power with adults. As demonstrated by these examples, many of the models of children’s and young people’s participation support conceptual definitions of a complex construct with multiple components. Thus, these models, if used to inform operational definitions and the structure of tools with which to measure participation, support multiple item and multifactorial measurement scales.


Figure 1. Hierarchical models of the elements of children and young people’s participation.

Resonating with these models, the Student Voice Rubric (Sussman, 2012) uses six areas of student voice and 17 elements of participation to form a matrix aimed at supporting the implementation of student voice in New York schools. While this innovative tool is useful for understanding, identifying, and monitoring the places and qualities of student voice activities in schools, its tick-box format produces a nominal measurement only (i.e., the student voice indicator is either present or not) and does not produce a quantitative scale, which limits its use for research and practice.

Existing Quantitative Scales to Measure Participation

Recently, a range of quantitative scales have been developed which are aimed at measuring children’s and young people’s participation at school or in decision making more broadly. Some scales, with unknown reliability and validity, have been developed for program evaluation and professional development purposes (e.g., Feinstein & O’Kane, 2005; Welsh Government, 2011; Wu, Weiss, Kornbluh, & Roddy, 2014; Youth and Adults Transforming Schools Together, n.d.). Other scales and specific items to measure participation have been developed for research focused on investigating the association between participation and various outcome variables. For example, de Roiste et al. (2012) used a self-report survey of 10,334 students aged 10-17 years in Ireland to investigate the association between student participation, health, and wellbeing. To measure student participation, survey respondents rated the extent to which they agreed/disagreed with three items: “In our school students take part in making the rules”; “I am encouraged to express my own views in my class(es)”; and “Students get involved in organising school events.” These three items were each analyzed independently. According to the domain sampling model of scale construction, complex, multifaceted, and abstract constructs such as student participation are best measured using multiple items for each facet (Kaplan & Saccuzzo, 2013). Using only a single item to measure each of de Roiste et al.’s three aspects of participation limits the content validity of the analyses. Furthermore, as demonstrated above in the overview of models of participation, a wider and systematically derived range of spaces and elements of participation would be beneficial to achieve comprehensive content validity.

The Child and Adolescent Participation in Decision Making Questionnaire (CAP-DMQ; O’Hare, Santin, Winter, & McGuinness, 2016) is a 10-item self-report scale developed in Ireland that provides a generalized measure of children and young people’s participation in decision making. The scale items were mapped against Lundy’s (2007) four elements of participation (space, voice, audience, and influence), and the scale was found to have good reliability and validity and to be invariant across age and gender. Although the brief length of this scale has the benefits of being simple and quick to administer, the 10 items formed a single factor, which limits measurement of participation to an overall composite score and generalized construct. A slightly longer scale that remains simple and still relatively quick to administer would allow multiple items to be developed for each of the theorized spaces and/or elements of participation. Such a scale would enable researchers to reliably and validly investigate the role played by various elements of participation in their association with a variety of outcome variables, such as student wellbeing, academic performance, and engagement with school.

The Current Study

The present article reports development of the Student Participation Scale (SPS), which aims to reliably and validly measure the elements of student participation in school settings using multiple items for each element and producing continuous scores. The results reported here are one aspect of a large research project: Improving Wellbeing through Student Participation at School. The project was supported by an Australian Research Council Linkage grant, and had partners from education and government organizations (New South Wales Department of Education, Lismore Catholic Schools Office, and the New South Wales Advocate for Children and Young People). The research team was assisted by an expert advisory group of 15 members—four representatives from the partner organizations, two school principals, two teachers, and seven secondary school students. The involvement of students in guiding the research is ethically and methodologically significant, as it endeavors to utilize their expertise while reflexively engaging with the strengths and complexities of implementing student participation in a meaningful and authentic way.

The research project used a mixed-methods approach. The study explored how student participation is currently understood, practiced, and experienced in government and non-government schools in NSW, Australia. The first stage of the project analyzed policy surrounding student participation, and the second used qualitative methods consisting of interviews with policy makers, school principals, and teachers, as well as focus groups with Year 7-10 students, to explore how student participation was understood and practiced. The third and fourth stages are the focus of this article. The aim of Stage 3 was the development of a psychometrically sound quantitative scale to measure student participation, which was then developed further and used in the final stage. Stage 4 tested the relationships between student participation, recognition, and wellbeing at school. The present article briefly describes the processes of Stage 3 and reports the final structure, reliability, and validity of the SPS from a large sample gathered from an online survey in Stage 4.

Development of the Student Participation Scale

The SPS was developed using dual methods. First, deductive methods were used to create an initial bank of survey items to measure student participation. Then empirical methods were employed in Stages 3 and 4 of the study to test and refine the psychometric properties of the scale. The deductive methods involved drawing on existing theories and models of participation, including those positing spaces of participation, particularly Mannion et al. (2015), and those presenting multiple elements of participation, particularly Hart (1997), Holdsworth (2000), Lundy (2007), Mitra (2005), and Shier (2001). Further, the qualitative data from the interviews and focus groups informed the structure and content of items in the scale. Specifically, the qualitative stage found that students experienced participation at school in four key ways: having voice, having influence, having choice, and working together. It also identified five spaces for participation at school: in the classroom, school activities outside of class, formal participatory opportunities, student-teacher relationships, and educational policy development. These models and research findings directly informed construction of a matrix of student participation, which consisted of a 4 x 5 cell table with the four key elements from Stage 2 of the study across the horizontal axis and five key arenas or spaces for student participation down the vertical axis. This matrix was used as a scaffold for the research team and project advisory group to create the initial bank of 57 survey items.
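To illustrate how such a matrix scaffolds item writing, the sketch below enumerates the twenty cells of the 4 x 5 matrix in Python. The element and space labels come from the study itself; the dictionary structure and the single example draft item (echoing item 12 in Table 2) are placeholders for illustration only.

```python
from itertools import product

# Four key elements of participation from Stage 2 (horizontal axis of the matrix).
elements = ["having voice", "having influence", "having choice", "working together"]

# Five spaces for participation from Stage 2 (vertical axis of the matrix).
spaces = [
    "in the classroom",
    "school activities outside of class",
    "formal participatory opportunities",
    "student-teacher relationships",
    "educational policy development",
]

# Each of the 20 cells is a prompt for drafting one or more candidate survey items.
matrix = {cell: [] for cell in product(elements, spaces)}

# Hypothetical example: one draft item for the (having voice, in the classroom) cell.
matrix[("having voice", "in the classroom")].append(
    "At school, I usually get to say what I think about: What I learn"
)

for (element, space), drafts in matrix.items():
    print(f"{element} / {space}: {len(drafts)} draft item(s)")
```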

The initial 57-item scale was piloted for feedback on its content, administration, phrasing, and formatting with two samples: (1) a school-based sample of 61 students, evenly distributed across Years 7-10, from a Catholic school and a government school in NSW; and (2) the 12 members of the NSW Advocate for Children and Young People’s Youth Advisory Council, aged between 12 and 24 years. The pilot study involved administration of the scale to the pilot samples, followed by an invitation to provide written feedback and group discussion. In response to the pilot feedback, five scale items were edited for clarity and two items were omitted due to redundancy, leaving 55 items.

In Stage 3 of the study, the structure of the SPS was then tested and refined on two separate samples from seven schools (three Catholic schools and four state schools) in regional and metropolitan NSW, ranging in size from 129 to 1,200 students. Principal Components Analysis (PCA) on sample 1 (N = 253, age range 11-17 years, M = 13.40, SD = 1.22) and sample 2 (N = 283, age range 11-17 years, M = 13.81, SD = 1.22) supported a 40-item scale with six components of participation: working together, having voice about schooling, having a say with influential people at school, having voice about activities outside the classroom, having influence on decisions made at school, and having choice at school. During Stage 3, the phrasing of items was also adjusted based on both statistical analyses (i.e., PCA, confirmatory factor analyses, reliability analyses, and validity analyses) and rigorous discussion within the research team and project advisory group. In the second sample of Stage 3, the SPS appeared to have good internal consistency within each element of participation, sound content and construct validity, and an invariant structure across demographic groups (gender; grade, or year, at school; Australian Aboriginal and Torres Strait Islander [Australian Indigenous] status; Cultural and Linguistic Diversity [CALD] status; and disability status). At this point the 40-item scale was ready to be administered in Stage 4 of the study, with further analysis conducted on a larger sample. The results from Stage 4 are reported below.

Method

Participants

Purposive sampling was used to recruit a diverse range of schools to take part in the project. Potential schools were identified via the My Schools website (https://www.myschool.edu.au/), which provides details of each Australian school’s demographic and academic performance characteristics. Diversity was sought in terms of school size, socioeconomic status, geographic and cultural characteristics, whether schools were single sex or co-educational, and also schools taking differing approaches to student participation. Some schools identified as “lighthouse schools” for their leadership in the area of student participation were also invited. The Stage 4 sample was recruited from 16 secondary schools (nine Catholic schools and seven government schools) from regional and metropolitan NSW, ranging in size from 379 to 1,065 students. In total, 1,481 participants started the survey, and 1,435 completed it. Participant ages ranged from 11 to 17 years with a median age of 14 years (M = 13.88, SD = 1.26). Table 1 reports the frequencies and percentages of the participants in demographic categories.

Table 1: Frequencies and Percentage of the Sample in Demographic Categories

    N (% of sample)
Gender  
  Male 624 (43.5)
  Female 742 (51.7)
  I describe my gender in a different way 47 (3.3)
  I’d rather not say right now 22 (1.5)
Year  
  7 455 (31.7)
  8 418 (29.1)
  9 276 (19.2)
  10 286 (19.9)
CALD status  
  English only 1205 (84.0)
  English + other language 186 (12.9)
  Other language only 13 (0.9)
  I’d rather not say right now 31 (2.2)
Australian Indigenous status  
  Aboriginal 123 (8.6)
  Torres Strait Islander 15 (1.0)
  Both Aboriginal and Torres Strait Islander 11 (0.8)
  Neither Aboriginal nor Torres Strait Islander 1225 (85.4)
  I’d rather not say right now 61 (4.3)
  Missing data 0 (0)
Disability status  
  Has a disability 97 (6.8)
  Does not have a disability 1098 (76.5)
  Not sure 195 (13.6)
  I’d rather not say right now 45 (3.1)

Materials

Participants in the online survey responded to demographic items first, followed by measures of wellbeing at school, recognition at school, the student participation scale, and validity items. The wellbeing and recognition items will be reported in a separate article.

Demographic items. Demographic items asked about gender (male/female/I describe my gender in another way/I’d rather not say right now); age (11-17 years); year at school (7/8/9/10); Indigenous status (Australian Aboriginal/Torres Strait Islander/Both/Neither/I’d rather not say right now); language spoken at home [CALD] (English only/English and another language/Another language only/I’d rather not say right now); disability status (Yes/No/Not sure/I’d rather not say right now); “Does your school have a Student Representative Council (SRC)?” (Yes/No/Not sure); and “If yes, are you a member of your school’s SRC?” (Yes/No).

Student Participation Scale. The SPS consisted of 40 items measuring six elements of participation: working together (9 items), voice about schooling (9 items), having a say with influential people (7 items), voice about activities outside the classroom (3 items), having influence (7 items), and having choice (5 items). Table 2 shows all 40 items, which were responded to using 5-point Likert scales, where 1 indicated “strongly disagree,” 2 indicated “disagree,” 3 indicated “neither agree nor disagree,” 4 indicated “agree,” and 5 indicated “strongly agree.” The factor structure, reliability, and validity of the SPS are reported in the Results section.

Validity items. To test the convergent validity of the SPS, engagement with school was measured using a 19-item validated scale by Fredricks, Blumenfeld, Friedel, and Paris (2005). Fredricks et al.’s scale has three subscales: behavioral engagement (five items, e.g., “I follow the rules at school”), emotional engagement (six items, e.g., “I like being at school”), and cognitive engagement (eight items, e.g., “I check my school work for mistakes”). Fredricks et al. reported good internal consistency for the behavioral (α = .72 – .77), emotional (α = .83 – .86), and cognitive (α = .82) sub-scales.

Procedure

After obtaining ethics approval from the university and all relevant school systems, school principals were telephoned by relevant research partners from either the government or Catholic school system. If principals verbally agreed that their school could take part in the study, an email invitation was then formally issued. The principal or their designated personnel recruited teachers to facilitate administration of the survey in their classes. Facilitating teachers received an instruction page which introduced the project and survey process as well as provided the link to the survey. Teachers gave each student in their class opt-out parent and student consent forms and information letters. Only students who did not return any opt-out form took part in the study. All participation was voluntary, anonymous, and confidential. Students completed the online survey in a classroom setting where privacy was emphasized. Submission of survey responses was deemed consent. On average, the survey took participants around 12 minutes to complete. Participating schools were sent a summary of key results.

Results

Quantitative data were analyzed using IBM SPSS Statistics 20 and AMOS Version 20.

The Structure of the SPS

Exploratory Factor Analysis (EFA) using the Maximum Likelihood method was conducted on a randomly selected half (N = 717) of the responses to the 40 items of the SPS. The second random half of the sample was used in Confirmatory Factor Analysis (CFA; see below). The data were factorable, with several strong inter-correlations; the KMO was .97, and Bartlett’s test of sphericity was significant, p < .001. EFA revealed the presence of six factors with eigenvalues exceeding 1. Cattell’s (1966) scree plot revealed four potential breaks: at one, two, five, and six factors. The Monte Carlo Parallel Analysis procedure (Watkins, 2000) supported five factors. Various extractions were attempted, including two, five, and six extracted factors, with direct oblimin rotation procedures employed. The pattern matrix was clearest to interpret when six factors were extracted. Loadings above .40 were retained; there were no cross-loadings, and all but two items loaded above .40 (see Table 2). The six factors, which together explained 61.11% of total variance after extraction, were labeled as follows (with the percentage of variance each explained):

  1. Working Together (9 items): 44.51%
  2. Voice about Schooling (9 items): 5.53%
  3. Having a Say with Influential People (5 items): 3.24%
  4. Having Influence (7 items): 2.64%
  5. Voice about Activities (3 items): 2.45%
  6. Having Choice (5 items): 2.74%

The two items that did not load above .40 (see Note, Table 2) were not retained. Cronbach’s alpha showed that the internal consistency of the six factors was excellent (see Table 2). Table 3 shows that the correlations between the six factors ranged from moderate to strong, supporting the use of an oblique rotation method.
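For readers who want to run a comparable analysis on their own data, the sketch below shows equivalent factorability checks, maximum likelihood extraction, and direct oblimin rotation in Python using the open-source factor_analyzer package. This is an illustrative substitute rather than the authors’ SPSS procedure, and the DataFrame name and CSV file are assumptions.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical layout: one row per student, one column per SPS item.
sps_items = pd.read_csv("sps_responses.csv")

# Factorability checks reported above: Bartlett's test of sphericity and KMO.
chi_square, p_value = calculate_bartlett_sphericity(sps_items)
kmo_per_item, kmo_overall = calculate_kmo(sps_items)
print(f"KMO = {kmo_overall:.2f}; Bartlett chi-square = {chi_square:.2f}, p = {p_value:.3f}")

# Maximum likelihood extraction of six factors with direct oblimin rotation.
efa = FactorAnalyzer(n_factors=6, method="ml", rotation="oblimin")
efa.fit(sps_items)

# Pattern matrix: keep loadings above .40, as in the article.
loadings = pd.DataFrame(efa.loadings_, index=sps_items.columns)
print(loadings.where(loadings.abs() > 0.40).round(2))

# Proportion of variance explained by each factor after extraction.
_, proportional, cumulative = efa.get_factor_variance()
print((proportional * 100).round(2), f"cumulative: {cumulative[-1] * 100:.2f}%")
```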

Table 2: Exploratory Factor Analysis: Cronbach’s Alpha and Item Loadings for Six Factors of the Student Participation Scale (N = 717)

  SPS Item Loading M SD
Factor 1: Working together (α = .91)
1 At my school: Students work with teachers outside of class time to make things happen at school .74 2.99 1.04
2 At my school: Students work together outside of class time to get things changed at school .72 2.89 1.02
3 At my school: Students work with teachers to find a positive way forward .64 3.23 1.02
4 At my school: Students usually make decisions with teachers in meetings .61 2.86 1.07
5 In school activities such as sporting teams, clubs, excursions, camps, fundraising and socials: My classmates and I often make decisions together .61 3.38 0.98
6 In the classroom: My classmates and I often make decisions together about our learning .56 3.17 1.07
7 In school activities such as sporting teams, clubs, excursions, camps, fundraising and socials: Students sometimes contribute to the wider community (businesses, organisations, other schools etc…) .55 3.26 0.96
8 In school activities such as sporting teams, clubs, excursions, camps, fundraising and socials: My teachers and I often make decisions together .52 2.94 1.01
9 In the classroom: My teachers and I often make decisions together about my learning .52 3.01 1.03
Factor 2: Voice about schooling (α = .95)  
10 At school, I usually get to say what I think about: How my work is assessed -.88 2.82 1.14
11 At school, I usually get to say what I think about: How I am taught -.84 2.84 1.12
12 At school, I usually get to say what I think about: What I learn -.83 2.89 1.12
13 At school, I usually get to say what I think about: Classroom rules -.82 2.90 1.15
14 At school, I usually get to say what I think about: Homework -.79 2.72 1.15
15 At school, I usually get to say what I think about: How students are disciplined -.73 2.64 1.13
16 At school, I usually get to say what I think about: How the school supports students -.67 2.90 1.11
17 At school, I usually get to say what I think about: What happens in home rooms or roll call groups -.65 2.97 1.13
18 At school, I usually get to say what I think about: How the classroom space is organised -.63 2.67 1.09
Factor 3: Having a say with influential people (α = .89)
19 At school, I get the chance to say what I think: To the Deputy -.95 2.97 1.10
20 At school, I get the chance to say what I think: To the Principal -.86 2.86 1.14
21 At school, I get the chance to say what I think: To the SRC and/or student leaders -.59 3.13 1.09
22 At school, I get the chance to say what I think: In year group or house meetings -.58 3.07 1.08
23 At school, I get the chance to say what I think: To my teachers outside of class time (such as in the playground, or in the teacher’s office) -.45 3.37 1.06
Factor 4: Voice about activities (α = .92)  
24 In school activities, such as sporting teams, clubs, excursions, camps, fundraising and socials, I usually get to say what I think about: How the activities are organised -.89 2.91 1.05
25 In school activities, such as sporting teams, clubs, excursions, camps, fundraising and socials, I usually get to say what I think about: Which activities are offered -.76 3.04 1.06
26 In school activities, such as sporting teams, clubs, excursions, camps, fundraising and socials, I usually get to say what I think about: How often the activities happen -.76 2.84 1.07
Factor 5: Having influence (α = .92)  
27 Most of the time in the classroom: My opinion is considered by teachers -.89 3.28 1.00
28 Most of the time in the classroom: My opinion is listened to by teachers -.85 3.36 1.05
29 Most of the time in the classroom: The teachers tell me how my opinion was used -.77 3.02 1.05
30 Most of the time in the classroom: My opinion makes a difference and things change -.69 2.87 1.05
31 Most of the time in school activities, such as sporting teams, clubs, excursions, camps, fundraising, and socials: The teachers tell me how my opinion was used -.48 2.90 1.02
32 At my school: Staff take students’ opinions seriously -.46 3.06 1.06
33 At my school: Staff take notice of what students say to them -.45 3.28 1.03
Factor 6: Having choice (α = .81)  
34 At my school I usually get a lot of choice about: How much I get involved in school activities (such as sports, camps, socials, plays…) .71 3.55 1.08
35 At my school I usually get a lot of choice about: The type of school activities I do (such as sports, camps, socials, plays…) .69 3.42 1.13
36 At my school I usually get a lot of choice about: Who I sit near .53 3.44 1.07
37 At my school I usually get a lot of choice about: How I present my school work (e.g., as an essay or poster) .43 3.37 1.07
38 At my school I usually get a lot of choice about: How I look .40 2.99 1.24

Note. Items that did not load on any factors above .4 were ‘At my school: The Principal or Deputy takes notice of what I say’ and ‘At my school: My views inform the work of the SRC or school leaders’.
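Because the reliability results above rest on Cronbach’s alpha, a minimal implementation of the statistic is sketched below; the commented usage, with column names for the nine “Working together” items, is hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage, assuming columns item_1 ... item_9 hold the nine
# "Working together" items (the article reports alpha = .91 for this factor):
# print(round(cronbach_alpha(sps_items[[f"item_{i}" for i in range(1, 10)]]), 2))
```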

Table 3: Inter-correlations of the Six Factors of the Student Participation Scale

                                           2      3      4      5      6
1. Working together                      -.58   -.54   -.52   -.66    .45
2. Voice about schooling                         .55    .66    .56   -.38
3. Having a say with influential people                 .50    .56   -.40
4. Voice about activities                                      .45   -.37
5. Having influence                                                  -.45

Note. Column numbers refer to the factor numbers in the first column; 6 = Having choice.

CFA was conducted on the second random half of the Stage 4 sample (N = 717). Model 1 tested the six participation factors and their 38 relevant observed items based on the EFA results reported above. All factors were allowed to covary, based on the correlations between factors in the EFA. The model fit indices showed mixed results (Table 4). The chi-square (χ2) value for Model 1 was large and significant, p < .001, indicating a poor fit of the data to the model. The normed chi-square value (χ2/df) exceeded all guidelines regarding its acceptable value of 2, 3, or 5 (Kline, 2005), also indicating poor fit. The Adjusted GFI (AGFI) indicated a fairly good fit of the data to the model, as values close to 1.00 indicate good fit (Byrne, 2001). The Comparative Fit Index (CFI) was below .95, indicating poor fit (Hu & Bentler, 1999). The Standardized Root Mean Square Residual (SRMR), which is zero when the model reproduces the data perfectly, was lower than the criterion of .10 (Kline, 2005), indicating the model fitted the data well. The Root Mean Square Error of Approximation (RMSEA) value suggested reasonable fit, based on Browne and Cudeck’s (1993) rule of thumb that RMSEA less than or equal to .05 indicates a close approximate fit, values between .05 and .08 represent reasonable error of approximation, and values above .10 suggest poor fit. The narrow 90% confidence interval suggested excellent precision of the RMSEA. In sum, the RMSEA and SRMR supported model fit, while the other indices suggested re-specification would be beneficial.

Table 4: Model Fit Indices for the Measurement of the Six Elements of Student Participation

Model χ2 df p χ2/df AGFI CFI SRMR RMSEA 90% CI for RMSEA
1 4281.34 650 < .001 6.59 .82 .91 .05 .06 [.06, .06]
2 2757.77 641 < .001 4.30 .88 .95 .04 .05 [.05, .05]
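Two of the indices in Table 4 are simple functions of the chi-square statistic, so they can be checked by hand, as sketched below. The normed chi-square for Model 2 reproduces the 4.30 reported above; the RMSEA also depends on the sample size supplied to the formula, which Table 4 does not state, so that line is illustrative only.

```python
import math

def normed_chi_square(chi2: float, df: int) -> float:
    """Chi-square divided by its degrees of freedom."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """Conventional single-group RMSEA: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Model 2 values from Table 4.
chi2_m2, df_m2 = 2757.77, 641
print(round(normed_chi_square(chi2_m2, df_m2), 2))  # 4.30, matching Table 4

# Illustrative only: the result varies with the N used (half sample vs. whole sample).
print(round(rmsea(chi2_m2, df_m2, 717), 3))
```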

Large modification indices (i.e., above 80) for the covariances between error terms were inspected. Because it made substantive sense that nine pairs of error terms would be associated, given item content with very similar meaning, and because each pair of items loaded on the same factor, these error terms were re-specified to covary in Model 2. For example, on the “Having influence” factor, the errors associated with “My opinion is listened to by teachers” and “My opinion is considered by teachers”; “Staff take notice of what students say to them” and “Staff take students’ opinions seriously”; and “The teachers tell me how my opinion was used” and “My opinion makes a difference and things change” were allowed to covary. Similarly, four pairs of errors were allowed to covary on the “Working together” factor, and one pair was allowed to covary on each of the “Having a say with influential people” and “Voice about schooling” factors. Error terms were not allowed to covary across different factors. The Model 2 fit indices are reported in Table 4. The change in chi-square was significant, p < .001; however, the chi-square test for Model 2 remained significant at p < .001, indicating poor fit by that index. In contrast, all the practical indices suggested the model was fitting the data well, and no further modifications were conducted.

Invariance of the Model Across Demographic Groups

The CFA Model 2 was tested on the whole sample (N = 1434) for both configural and metric invariance across several demographic categories (see Table 5). The whole sample was used because some group sizes were too small when using just half the sample. Configural invariance refers to the number of factors and the pattern of factor-indicator relationships being identical across groups. Metric invariance refers to a model in which the factor loadings are equal across groups. As the chi-square test is impacted by large sample sizes (van de Schoot, Lugtig, & Hox, 2012) whereas CFI is independent of sample size, both indices and other fit indices are reported in Table 5. The CFI, normed chi-square (χ2/df), and RMSEA all indicated good model fit, supporting configural invariance. There were no significant changes in the chi-square value when factor loadings were constrained to be equal (p > .05), so metric invariance was also achieved. In sum, the factor structure of the measure of participation was invariant across gender, Australian Indigenous status, CALD status, disability status, and year level.

Table 5: Tests of Demographic Invariance for the Student Participation Scale

Variable Model χ2 df Δχ2 Δdf χ2/df CFI RMSEA 90% CI for RMSEA
Gender Configural 3344.91* 1280   2.61 .94 .03 [.03, .04]
  Metric 3380.58* 1318 35.66 38 2.57 .94 .03 [.03, .04]
Australian Indigenous status Configural 3767.01* 1282   2.94 .94 .04 [.04, .04]
  Metric 3808.86* 1320 41.85 38 2.89 .94 .04 [.04, .04]
CALD status Configural 3709.39* 1282   2.89 .94 .04 [.04, .04]
  Metric 3756.53* 1320 47.14 38 2.85 .94 .04 [.04, .04]
Disability status Configural 3535.62* 1282   2.76 .94 .04 [.04, .04]
  Metric 3594.48* 1320 58.86 38 2.72 .94 .04 [.04, .04]
Year at school Configural 5700.58* 2564   2.22 .92 .03 [.03, .03]
  Metric 5837.68* 2678 137.10 114 2.18 .92 .03 [.03, .03]

Notes. *p < .001. To interpret the change in chi-square (Δχ2) values, the critical value for chi-square with 40 degrees of freedom is 55.76, p < .05, and with 120 degrees of freedom is 146.57, p < .05. All RMSEA tests of close fit were non-significant, p = 1.00.
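The metric invariance conclusion rests on chi-square difference tests between the nested configural and metric models. A minimal sketch of that test in Python follows, using scipy and the gender comparison from Table 5 as the worked example.

```python
from scipy.stats import chi2

def chi_square_difference_p(delta_chi2: float, delta_df: int) -> float:
    """p-value for the change in chi-square between nested models."""
    return chi2.sf(delta_chi2, delta_df)

# Gender comparison from Table 5: delta chi-square = 35.66 with delta df = 38.
p = chi_square_difference_p(35.66, 38)
print(f"p = {p:.2f}")  # well above .05: constraining loadings does not significantly worsen fit
```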

Convergent Validity

Composite scores for the six elements of participation were calculated by averaging the relevant items on each sub-scale. To test the construct validity of the SPS, these mean sub-scale scores were correlated with Fredricks et al.’s (2005) three dimensions of student engagement (see Table 6). All correlations were positive and statistically significant at p < .001, ranging from weak (behavioral engagement) to moderate (cognitive engagement) to strong (emotional engagement). The convergent results for emotional and cognitive engagement support the construct validity of the SPS. Correlations between individual items of the behavioral engagement scale and the SPS variables were then explored (see Table 6, bottom section). Removing the two behavioral engagement items that were not associated with the SPS variables, “I get into trouble at school” (reversed) and “When I am in class I just act as if I’m working” (reversed), increased the strength of the correlations between the behavioral engagement dimension and the SPS factors (see bottom row of Table 6). These results support the construct validity of the SPS.
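Scoring the SPS, as described above, amounts to averaging each element’s items. The sketch below implements that scoring in Python; the column names are hypothetical placeholders, and the item groupings follow Table 2.

```python
import pandas as pd

# Hypothetical column names sps_q1 ... sps_q38; groupings follow Table 2.
SUBSCALES = {
    "working_together": [f"sps_q{i}" for i in range(1, 10)],              # items 1-9
    "voice_about_schooling": [f"sps_q{i}" for i in range(10, 19)],        # items 10-18
    "say_with_influential_people": [f"sps_q{i}" for i in range(19, 24)],  # items 19-23
    "voice_about_activities": [f"sps_q{i}" for i in range(24, 27)],       # items 24-26
    "having_influence": [f"sps_q{i}" for i in range(27, 34)],             # items 27-33
    "having_choice": [f"sps_q{i}" for i in range(34, 39)],                # items 34-38
}

def score_sps(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean composite score per element of participation."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in SUBSCALES.items()}
    )

# Hypothetical usage for convergent validity: correlate an element score with an
# engagement sub-scale mean, e.g.:
# score_sps(df)["having_influence"].corr(df["emotional_engagement"])  # Pearson's r
```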

Table 6: Pearson’s Correlations Between the Mean Scores for the Elements of Participation with Student Engagement Sub-Scales

  Elements of Participation
  Having influence Voice about schooling Having a say with influential people Having choice Working together Voice about activities
Engagement sub-scales (Fredricks et al., 2005)
Behavioural engagement (N = 1433) .28 .18 .21 .27 .25 .12
Emotional engagement (N = 1432) .60 .55 .50 .51 .58 .46
Cognitive engagement (N = 1424) .39 .40 .36 .32 .44 .30
Behavioural engagement items (Fredricks et al., 2005) (N = 1426)
1. I follow the rules at school .35 .24 .28 .32 .33 .17
2. I get into trouble at school (reversed) .07 .00 .01 .07 .00 -.05
3. When I am in class I just act as if I’m working (reversed) .05 .01 .02 .03 .03 -.02
4. I pay attention in class .31 .23 .27 .32 .32 .19
5. I complete my work on time .27 .21 .23 .27 .27 .16
Mean of items 1, 4 and 5 .36 .26 .30 .35 .36 .21

Note. All correlations were statistically significant at p < .001.

Discriminant Validity

A one-way between-groups analysis of variance (ANOVA) showed that SRC members (M = 3.20, SD = 0.71, 95% CI [3.07 – 3.33], n = 110), who were assumed to have greater experience with student participation, scored significantly higher on the mean total participation score than non-SRC members (M = 3.05, SD = 0.70, 95% CI [3.01 – 3.09], n = 1187), F(1, 1295) = 4.89, p = .027, ηp2 = .004. The effect size was very small.
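This comparison can be reproduced with a standard one-way ANOVA; a minimal scipy sketch follows, assuming a DataFrame with a mean total participation score and an SRC membership flag (both column names are hypothetical).

```python
import pandas as pd
from scipy.stats import f_oneway

def compare_src_groups(df: pd.DataFrame):
    """One-way between-groups ANOVA on the mean total participation score."""
    members = df.loc[df["src_member"], "sps_total"]
    non_members = df.loc[~df["src_member"], "sps_total"]
    f_stat, p_value = f_oneway(members, non_members)
    return f_stat, p_value

# Hypothetical usage:
# f_stat, p = compare_src_groups(survey_df)
```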

A multivariate analysis of variance (MANOVA) on the six mean scores for the elements of participation showed a significant but small difference between those who were members of the SRC and those who were not, Wilks’ Λ = .99, F(6, 1290) = 2.39, p = .026, multivariate η2 = .011. Inspection of mean scores indicated that SRC members scored higher on all six elements of participation than non-SRC members (see Figure 2). However, follow-up one-way ANOVAs showed that only “Having a say with influential people” reached statistical significance, with SRC members (M = 3.41, SD = 0.83, 95% CI [3.24 – 3.58]) scoring significantly higher than non-SRC members (M = 3.09, SD = 0.89, 95% CI [3.04 – 3.14]), F(1, 1295) = 12.76, p < .001, ηp2 = .01. The effect size was small.


Figure 2. Mean elements of participation ratings of SRC and Non-SRC members. Error bars represent +/- 1SE.

Discussion

Development of the Student Participation Scale using deductive and empirical methods has produced a reliable and valid 38-item self-report scale to measure student participation at school in both the government and non-government sectors. The SPS was developed in consultation with young people, which is uncommon in quantitative research (see Lundy & McEvoy, 2009). The SPS is easy and quick to administer to individual students, whole classes, or the entire student population of a school. It provides a means for schools to gather students’ views about their experience of participation at school, and a tool with which to measure, monitor, and improve the implementation of student participation.

The factor analysis results support a robust factor structure for the SPS, with six elements of participation accounting for a large proportion (61%) of total variance. The six elements of participation, with working together accounting for by far the largest share of variance, are: working together, having voice about schooling, having a say with influential people at school, having voice about activities outside the classroom, having influence, and having choice. This factor structure, and the loading of each item onto each factor, was shown to be invariant across several demographic categories: gender, year level at school, Australian Indigenous status, CALD status, and disability status. The reliability of the six factors was excellent, as demonstrated by strong internal consistency.

Construct validity of the SPS was supported by convergent validity of the six elements of participation with Fredricks et al.’s (2005) student engagement sub-scales. Of particular note is the strong positive correlation between the participation variables and emotional engagement with school, suggesting that students who reported greater participatory experiences also enjoyed school more, and vice versa. The correlations between the participation variables and cognitive engagement were positive and moderate, which further supports construct validity of the SPS. While the correlations between the participation variables and behavioral engagement were initially weak, omission of two reverse-scored items from the behavioral engagement scale (“I get into trouble at school” and “When I am in class I just act as if I’m working”) increased the correlations to moderate strength for most sub-scales and the mean total participation score. In some schools where participation is practiced routinely, children in “trouble” may be provided with more opportunities for voice than those not in trouble, while in other schools such students may have less voice, leading to the near-zero correlations reported between these items and the SPS. Importantly, student conformity with regard to following rules and student participation are not expected to be highly correlated, as in some instances participation may involve going beyond rules to make a change at school. In sum, the correlation results support the convergent validity of the SPS.

The significant differences between Student Representative Council (SRC) members and non-members on the mean total participation score and on the “having a say with influential people at school” sub-scale support the construct validity of the SPS, providing evidence that the scale can discriminate a group known to have greater experience of student voice with influential people at school. The non-significant differences between SRC and non-SRC members on the other SPS sub-scales are most likely due to a lack of statistical power, given the relatively small number of SRC members compared with non-SRC members. The relatively small mean differences and small effect sizes on all sub-scale scores resonate with a key finding from the qualitative stage of the study: SRCs were heavily criticized by students (both SRC and non-SRC members). The small actual differences in scores therefore most likely indicate not a lack of validity of the SPS, but rather poor, transient, or inconsistent delivery of participation within SRCs.

Content validity of the SPS was supported by the use of deductive methods to create the original item pool. The initial item pool was devised by consulting Article 12 of the UNCRC (United Nations, 1989); established multi-component models of the spaces (Holdsworth, 2000; Mannion et al., 2015) and components (Hart, 1997; Holdsworth, 2000; Lundy, 2007; Mitra, 2005; Shier, 2001) of participation; and the project advisory group, which included school principals, teachers, and students from both state and Catholic school systems. The research team also included four international experts on children and young people’s participation. Drawing on these academic, theoretical, and practical sources guided the content domain of the initial item pool, ensuring it was both systematically derived and comprehensive.

Alignment of the factor analyses results with the models of participation (see Figure 1) further supports the content validity and structure of the SPS. For example, the elements of participation in the SPS refer to voice about schooling and voice about activities, which conceptually aligns with Lundy’s (2007) “voice,” Holdsworth’s (1986) “youth/student voice,” and Shier’s (2001) “children are supported in expressing their views.” The SPS element of “having a say with influential people at school” intersects with Mitra’s (2005) “building leadership capacity” and Holdsworth’s “being listened to seriously and with respect.” The SPS element of “having influence” aligns with Lundy’s “influence,” Holdsworth’s “incorporating youth/student views into action taken by others,” Hart’s (1997) “adult initiated, shared decisions with children” and Shier’s “children’s views are taken into account.” The element of “having choice” resonates as a form of being consulted (Hart) and “children are involved in decision making processes” (Shier). “Working together” coincides with Hart’s “child initiated, shared decisions with adults,” Holdsworth’s “sharing decision making,” Mitra’s “collaborating with adults,” and Shier’s “children share power and responsibility for decision making” aspects of participation. Working together was the largest element of participation in the factor analysis results. These results resonate with the models of participation which place shared decisions with adults toward the higher end of the hierarchies. Working together, both in our data and in the theoretical models of participation, is the “capstone” of authentic student participation.

Implications and Limitations

The SPS provides a multi-dimensional quantitative scale to measure the extent of student participation at school. It will provide researchers with a means to investigate research questions focused on the impact of student participation on various outcomes, such as on children and young people themselves, and on institutions, policies, services, and communities (Crowley & Skeels, 2009; Mager & Nowak, 2012). Such evidence will progress implementation of Article 12 of the UNCRC (Skeels & Thomas, 2007). Given the cross-sectional design used to develop the SPS and the correlational nature of the results, future studies need to use longitudinal and control group designs (see also Kirby & Bryson, 2002) so that the predictive validity of the SPS can be tested. Future research also needs to test the factor structure of the scale in other settings, including other education systems and school cultures. Such studies are important because they will add to the evidence of the psychometric soundness of the SPS in international settings. The factor analysis results support a multi-dimensional model of student participation with six key elements; thus, models or theories of student participation should reflect a multifaceted structure in order to capture student participation comprehensively.

In school settings, the SPS will provide schools with a useful tool to measure, monitor, and improve their implementation of student participation. This takes on increased significance in contemporary educational environments where the potential benefits of participation, including for school effectiveness and improvement and for student wellbeing, are now broadly recognized, but valid and reliable measurement has been lacking. An information pack, which includes instructions on administering, scoring, and interpreting the SPS, will be available at no cost to schools, along with a complementary Good Practice Guide providing practical suggestions, resources, insights from schools, and reflective questions for school leaders, teachers, and students on how to build effective student participation and embed it as an integral part of school culture (see bit.ly/ParticipationStudy).

In sum, the results reported in this article support the SPS as a reliable and valid tool with which to measure multiple dimensions of student participation at school.

References

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.

Byrne, B. M. (2001). Structural equation modelling with AMOS: Basic concepts, applications and programming. Mahwah, NJ: Lawrence Erlbaum.

Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276.

Crowley, A., & Skeels, A. (2009). Getting the measure of children and young people’s participation: An exploration of practice in Wales. In B. Percy-Smith & N. Thomas (Eds.), A handbook of children and young people’s participation: Perspectives from theory and practice (pp. 184-192). London, UK: Routledge.

de Roiste, A., Kelly, C., Molcho, M., Gavin, A., & Nic Gabhainn, S. (2012). Is school participation good for children? Associations with health and wellbeing. Health Education, 112, 88-104. doi:10.1108/09654281211203394

Feinstein, C., & O’Kane, C. (2005). The spider tool: A Self-assessment and planning tool for child led initiatives and organisations. London, UK: International Save the Children Alliance.

Fredricks, J. A., Blumenfeld, P., Friedel, J., & Paris, A. (2005). School engagement. In K. A. Moore & L. H. Lippman (Eds.), What do children need to flourish? Conceptualizing and measuring indicators of positive development (pp. 305-321). New York, NY: Springer.

Hart, R. (1997). Children’s participation: The theory and practice of involving young citizens in community development and environmental care. London, UK: Earthscan.

Holdsworth, R. (1986). Student participation and the participation and equity program (PEP Discussion Paper No. 2). Canberra, ACT: Commonwealth Schools Commission.

Holdsworth, R. (2000). Education in Asia: Schools that create real roles of value for young people. Prospects, 30(3), 349-362. doi:10.1007/BF02754058

Hu, L. T., & Bentler, P. M. (1999). Cut-off criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling: A Multidisciplinary Journal, 6, 1-55. doi:10.1080/10705519909540118

Kaplan, R. M., & Saccuzzo, D. P. (2013). Psychological testing: Principles, applications and issues (8th ed.). Belmont, CA: Wadsworth.

Kirby, P., & Bryson, S. (2002). Measuring the magic. London, UK: Carnegie UK Trust.

Kline, R. B. (2005). Principles and practice of structural equation modelling (2nd ed.). New York, NY: Guilford Press.

Lundy, L. (2007). “Voice” is not enough: Conceptualising Article 12 of the United Nations Convention on the Rights of the Child. British Educational Research Journal, 33, 927-942. doi:10.1080/01411920701657033

Lundy, L., & McEvoy, L. (2009). Developing outcomes for educational services: A children’s rights-based approach. Effective Education, 1, 43-60. doi:10.1080/19415530903044050

Mager, U., & Nowak, P. (2012). Effects of student participation in decision making at school: A systematic review and synthesis of empirical research. Educational Research Review, 7, 38-61. doi:10.1016/j.edurev.2011.11.001

Mannion, G., Sowerby, M., & I’Anson, J. (2015). How young people’s participation in school supports achievement and attainment. Retrieved from Children & Young People’s Commissioner Scotland website: https://www.cypcs.org.uk/

Mitra, D. L. (2004). The significance of students: Can increasing student voice in schools lead to gains in youth development? Teachers College Record, 106, 651-688. doi:10.1111/j.1467-9620.2004.00354.x

Mitra, D. (2005). Increasing student voice and moving toward youth leadership. Prevention Researcher, 13(10), 7-10.

O’Hare, L., Santin, O., Winter, K., & McGuinness, C. (2016). The reliability and validity of a child and adolescent participation in decision-making questionnaire. Child: Care, Health and Development, 42, 692-698. doi:10.1111/cch.12369

Shier, H. (2001). Pathways to participation: Openings, opportunities and obligations. Children & Society, 15, 107-117. doi:10.1002/chi.617

Skeels, A., & Thomas, E. (2007). Participation. In R. Croke & A. Crowley (Eds.), Stop, look and listen. Cardiff, UK: Save the Children.

Sussman, A. (2012). Student voice rubric. Retrieved from What Kids Can Do website: http://www.whatkidscando.org

Thomas, N. (2007). Towards a theory of children’s participation. International Journal of Children’s Rights, 15, 199-218. doi:10.1163/092755607X206489

United Nations. (1989). Convention on the rights of the child. Retrieved from http://www.ohchr.org/

van de Schoot, R., Lugtig, P., & Hox, J. (2012). A checklist for testing measurement invariance. European Journal of Developmental Psychology, 9, 486-492. doi:10.1080/17405629.2012.686740

Watkins, M. W. (2000). Monte Carlo PCA for Parallel Analysis [Computer software]. State College, PA: Ed & Psych Associates.

Welsh Government. (2011). Self-evaluation tool for pupil voice in schools in Wales. Cardiff, UK: Author.

Whitty, G., & Wisby E. (2007). Real decision making? School councils in action (Research Report No. DCSF-RR001). London, UK: Department for Children, Schools and Families, Institute of Education, University of London.

Wu, H.-C., Weiss, J., Kornbluh, M., & Roddy, L. (2014). Youth-adult partnership rubric: A tool for professional development and program evaluation in youth settings. East Lansing: Michigan State University.

Youth and Adults Transforming Schools Together. (n.d.). YATST student and teacher survey questions. Retrieved from What Kids Can Do website: http://www.whatkidscando.org/