Attended the ALTC Assessment Forum today; some very useful ideas on assessment from the invited speaker, Chris Rust, and from a panel featuring various ALTC project people (David Boud, Geoff Crisp and others). What follows are some notes and ideas…
(I also presented a poster at this event, which can be found at http://netcrit.net/writing.)
Chris Rust’s views on assessment at university
Rust began by restating the dominant paradigm: learning is, generally speaking, at the heart of the student experience. He asserted that, in the UK, assessment is a matter of concern for quality agencies and for learning and teaching development bodies, with the evidence coming from student surveys, quality audits and research. Feedback, in particular, stands out as a point of considerable frustration for students: he cited survey evidence indicating positive responses from students about teaching, which contrast with their criticisms of the (lack of) effective feedback.
Rust reflects with “embarrassment” on his 1990s enthusiasm for explicitness – lured into the trap of thinking that, if only assessment could be made very clear and precise (e.g. highly detailed matrices of assessment criteria, levels and so on), it would all be better. He reports on his 1990s work on assessment in Business: detailed instructions on assessment made staff and students happy, and yet students’ learning didn’t improve. He then reports a sudden moment of change: based on social constructivism, it became clear that just as one cannot transmit the ‘knowledge’ of the discipline being taught, so too one cannot transmit the ‘knowledge’ of what/how to assess.
Broadly speaking, his presentation emphasises that there needs to be active student engagement with the criteria by which work is assessed (between the ‘giving’ of the task by a teacher and its ‘doing’ by the student), and then active engagement with the feedback (which is itself based on the criteria). What do we mean by words like ‘active engagement’ (e.g. Rust’s ‘actively engage with criteria’)? The example provided, a workshop approach in which first-year students learn what criteria are and how assignments are graded, suggests that ‘active engagement’ means playing the role of the teacher – as Rust puts it, “getting the students into the mindset of the marker” [which, incidentally, is analogous to thinking carefully about the audience for which one is working in any form of communication or media]. Students who do this task do better. “Clearly, there are many ways to get students involved as markers in the process of assessment”; examples listed include self-assessment, peer feedback, peer assessment and marking exercises.
What this experience tells us is that criteria involve tacit knowledge: tacit knowledge or, as the constructivists would say, the meanings generated through social interaction, can never be made fully explicit. Indeed, the more one seeks ‘explicitness’, the more likely it is that key elements remain concealed.
Audience members then reported experiences and activities, confirming to some extent Rust’s contentions, though participants tended to note the challenges of time and of motivating students to participate. Rust also recounted a story of students not caring very much about doing ‘well’: they simply sought to pass, and thus efforts to use prior feedback or other collaboration to improve the final submission may not work, since most students do not wish to put in effort beyond passing. Further discussion considered the challenges of systematising innovative practices across a department: essentially, to build, implement and sustain a shared culture.
Rust moved on to discuss feedback, reflecting on the failure of the current response to students’ survey dissatisfaction with feedback (that response being, largely, more and faster feedback). He cites Hattie (1987) – feedback is the “most powerful single influence” on learning – as well as other studies. Rust argues against the idea that students are not motivated to get feedback; rather, he asserts, students don’t care about our feedback because it is vague, unhelpful and not understandable. In other words, academics rapidly make students unmotivated to engage with feedback. He stresses the critical importance of the link between the grade and the feedback. Students also get annoyed about the inconsistency of academic responses: different markers respond to, and emphasise, different things.
In response, citing Nicol (2009), Rust argues that the way forward is to make feedback dialogic; indeed, ‘more and faster’ feedback in one direction – from academic to student – will make no difference. A subtlety here: dialogue is more than two people; it’s a conversation among many, especially students discussing with other students.
How can we make this dialogic approach work? First of all, we need to prepare students and educate them about feedback: align expectations, because the purposes of feedback vary and, unless you contextualise the feedback for its purpose, students get confused. Second, students do not always recognise when they are getting feedback: Rust cites the example of tutors giving oral feedback in a laboratory, where students don’t realise the conversation is feedback. (While Rust does not explicate this point, we can see here how survey evidence can be artificial and misleading!) And, to achieve these kinds of shared conventions, workshops which demonstrate to students how to use feedback are necessary.
Rust raises the problem of modularity: feedback on final assignments arrives well after the completion of the unit or module. He suggests that we require students to take the feedback from an earlier unit and apply it in a later one – e.g. a covering sheet for a future assignment stating what changes that student has made.
Reference to Brown and Hirschfield (2008): students need to believe in their own responsibility for learning; we then use assessment formatively to reinforce and support that responsibility. He also cites Angelo – students need the Motive, Opportunity and Means to use feedback. To implement this approach, assignments need to be structured as ‘draft and re-draft’, with a shift in effort so that the major giving of feedback happens mid-semester, with simply a grade at the end. Feedback could be diagnosis and direction towards further support. It is also important to shift informal learning towards discussion after assignments, about them, rather than only in preparation for them. Rust notes: the timeliness of feedback, especially prior to the next assignment; the value of automated quizzes which provide feedback; and the provision of general feedback quickly, with individual feedback later. He cites some school-level evidence that giving feedback together with marks results in students ignoring the feedback, and notes Wiliam’s argument that assessment cannot be both summative and formative at once. Finally, with reference to the Open University’s allocation of 60% of its budget to assessment, he suggests we need to re-allocate resources from ‘teaching’ to ‘assessing’.
Rust’s conclusion is that peer and self-assessment are not just part of the process of achieving ‘graduate attributes’ but should themselves be attributes [are they, to some extent, already there in terms of reflection and lifelong learning?]. Furthermore, staff themselves need to be consistent and clearly collaborative in their assessment practices (Rust, O’Donovan and Price, 2005).
ALTC Panel on Assessment
Boud: “Assessment for longer-term learning”
The first priority is asking: “what effect does assessment have on student learning?” If the answer is negative, then change is required. Validity, reliability and so on are of secondary importance.
He also points out that people need to be careful not to sound behaviourist: asserting that grading is essential to getting students to do things tends to rely on a negative presumption about learners’ motivations. Perhaps there are intrinsic motivations?
He notes the problematic status of authenticity in assessment.
Visit http://assessmentfutures.com for more detailed information, including the development of the Assessment 2020 propositions, such as: assessment will be effective when “it is used to engage students in productive learning”.
Crisp: “Rethinking assessment in a participatory learning environment”
Web 2.0 was discussed as a “collaborative, distributed and cooperative environment”; it’s about students learning not as individuals. Yet assessment is about individual students, working with restricted resources (including time). One aspect: the ethics and effectiveness of the processes of collaborative work may need to be assessed.
Diagnostic assessment builds a relationship with students (and can be quite negative – demonstrating to students that they are lacking, or deficient). Done well, diagnosis helps us to understand and care about what students know, valuing it.
Crisp considers the way that immersive, game-like and scenario-based virtual environments may become highly significant places to learn; so how can assessment be done there? He wants to locate the assessment within the collaborative environment itself.
De La Harpe: The B Factor Project (beliefs about graduate attributes)
Presentation of data from a survey of academics’ views on graduate attributes. The data apparently show more acceptance of the value of graduate attributes than we might, perhaps, believe.
Sanderson and Yeo: Moderation for Fair Assessment for Transnational Learning
Critical issues arise around moderation and the management of assessment across multiple contexts and cultures; the project proposes the usefulness of communities of practice, frameworks based on good practice, and so on.
They found very diverse practices of moderation: there is no common practice, even within a single unit of study. Some individuals are very well tuned in to processes and procedures; others lack that knowledge. Some people understand the tacit corporate knowledge; others don’t. Some people in partnerships are well connected; others are isolated.
The project expands the nature and meaning of the term moderation to include every aspect of quality control; essentially, moderation becomes synonymous with quality control and assurance [which raises the question of what one then calls moderation itself!].