Notes from a one-day event on changing assessment across an institution

I recently attended a one-day event organised by the TeAL project at Middlesex University. It provided an interesting and frank account of the challenges they faced as well as their successes. It was particularly good to hear from senior leaders and teachers, as well as the project team. Many of the issues raised are relevant to this project’s work and to that of the Assessment Working Group. I brought back and shared some of the documents from the event – particularly the flow chart that walks through the change process, and the table that maps things like values, roles and processes against stages and questions.

IOE Learning and Teaching Conference session

Gwyneth gave a presentation about the project at the IOE’s learning and teaching conference. The presentation also involved an update from the institution’s Assessment Working Group. Gwyneth used the opportunity to invite discussion and feedback from staff, which raised several issues for us to think about, including:

  • The issues raised seemed representative of the experiences of most – but not all – of the people in the room. If anything, people expected formative feedback to involve greater use of essays than was reported.
  • It was felt that staff may be more willing to experiment and innovate with formative feedback than with summative feedback.
  • Geography uses a bridging module between PGCE and other Masters-level courses, with a formative presentation and a written summative report.
  • Some areas are already considering feedback on earlier work. In this case, feedback sheets are online and markers are encouraged to read prior feedback before marking new work. However, this programme has no face-to-face feedback, which is why the written feedback needs to be careful and detailed.
  • The same programme also held a standardisation meeting that generated an exemplar, which markers now use as a point of reference to guide practice.
  • In some areas, students are asked in their final summative work to reflect on how they’ve taken feedback on board, and to self-assess the degree to which they’ve done so.
  • Peer feedback was discussed. It was felt that this had worked well in specific programmes (high-quality, specific, supportive feedback was mentioned), although there were issues, including how important it is to help students understand what a good piece of work looks like, and what the limits of trust and confidence in this could be. It was suggested that working towards peer assessment requires students to make progress in understanding, self-assessing and so on, and so could be a good way of working through some of these issues, helping students understand their own work (and what needs to be done to improve it) better. It was also suggested that this might be easier in a PGCE group, because an environment of trust and discussion is built up across the programme; it could be harder to create this online.
  • There were concerns that the flexibility of the current offer – particularly following curriculum review – could undermine the coherence of feedback and development. Consistent contact with a tutor could help here, although there may be an issue with tutors supporting students on modules from other programmes where they lack expertise. Feedback on structural aspects (e.g. academic literacy) might still be possible to support, however.
  • Getting assessment integrated across modules is important but is likely to take years to achieve.
  • Recognising the time needed for marking, assessment and feedback is important – it might be necessary to spend less time on teaching and more on assessment/feedback.
  • It may help to have a conversation about what good feedback looks like at an institutional level.

Notes from an Elluminate seminar with David Nicol

I took part in a JISC-sponsored webinar today, run by David Nicol, entitled “Assessment and feedback: in the hands of the student”.

David’s presentation raised several points that are useful for our project:

  • He discussed the idea that the purpose of feedback might be to develop students’ capacity for evaluative judgement, not just to provide comments on specific pieces of work. He linked evaluative judgement to ideas of critical thinking, and emphasised the use of feedback and knowledge building (rather than just the giving of feedback).
  • The emphasis on “timely/detailed/clear” feedback, driven by national student surveys, was criticised for adopting a “delivery” model of feedback rather than a cognitive one, in which students are expected to decode; evaluate and compare; identify discrepancies; revise and construct knowledge; and transfer this understanding to new areas.
  • Practical strategies for fostering this included responding to comments; sequencing assignments to encourage drafting and re-drafting; overlapping tasks; patchwork texts that need to be ‘stitched’ together; reflection on feedback; and ipsative assessment (Gwyneth was name-checked here).
  • Echoing our own discussions, Simon argued for the importance of separating the grading of work from commenting on it and providing feedback: learning about academic standards isn’t the same as learning how to improve work. This was in the context of self-review by students, but the point may well stand more generally.
  • Peer review is a useful model, since it encourages engagement with comments, prompts revisions and so on. Similarly, offering peer review comments to others fosters learning: it prompts students to reflect on their own work (since that work is used as a point of reference) and encourages engagement with the marking criteria. Important principles here would include maximising the number of reviews undertaken, engaging in dialogue, and relating the feedback offered to students’ own work.

Resources from the session, including the recording, slides and text chat are available from the Design Studio.

Participating in Assessment “Swap Shop”

This Monday, I took part in the JISC assessment technology “swap shop” Elluminate meeting. As well as giving people an update on our project and what we’re up to, I picked up a few points of relevance to us:

  • One project is using a Moodle plugin for managing assessment, developed by ULCC. The project site is http://as.exeter.ac.uk/support/educationenhancementprojects/current_projects/ocme/. Here’s some blurb from their project plan: “The system (which is being developed by our Moodle hosting partner ULCC) will integrate fully with the ELE (Exeter Learning Environment) Moodle installation to allow students to submit coursework through ELE and receive a Turnitin similarity score. Assignments may then be marked on-line using Turnitin GradeMark or Word ‘track changes’. Feedback (which may consist of uploaded files or completed bespoke feedback forms) will be returned to students via ELE. Personal tutors will also be able to view feedback for their tutees.”
  • There’s also quite a simple administrative project about feedback and notification, e.g. the date when feedback to students is due. It just uses the VLE calendar and Outlook. (http://www.jisc.ac.uk/whatwedo/programmes/elearning/assessmentandfeedback/glamorgan.aspx)
  • There seems to be quite a lot of overlap with the Dundee project, interACT, which focuses on medical education but is concerned with feedback processes that look a lot like ipsative ones. They’re working with Blackboard rather than Moodle, however.
  • The LDSE modelling tools are being pushed for the programme as a whole, so we’re a bit ahead of the game in that we’ve already committed to using them.

Programme evaluation workshop

A while back, Gwyneth and I attended a programme-wide workshop on evaluation for the project. I meant to post about this at the time, but I’ve been swamped, so I wanted to make sure I got something up before it slips my mind entirely.

This meeting provided a chance to think about the project’s evaluation in a little more detail. The slides from the event are available, as is a recording of the session. There was a helpful discussion of things like what measures we might have access to that could describe impact, and there was a lot of interest in the possibility of linking assessment practice to workload in some way – concerns about efficiency were widespread, as was the hope of raising quality whilst lowering costs. We plan to do this using the tools developed in the LDSE project, although we’ll have to see how well it works when we try it.