Category Archives: Meeting

Bloomsbury Learning Environment event on assessment

Gwyneth, Holly and Tim presented tools from the project, and Tim gave his first formal presentation on the Moodle reporting system, at:

BLE Seminar: Exploring the use of digital technologies to enhance assessment and feedback 4th June

While students reflecting on feedback is not a new idea, the reporting of feedback from all modules in a programme did not seem to happen elsewhere. Capturing feedback was well received and could be useful for tutors wanting an overview of a student’s progress. The current system is only for staff, and it was suggested that students might want access to the reports – but this would require more Moodle development work.

Slides can be viewed here.


JISC Experts Meeting

On 23 April I attended the JISC Learning and Teaching Practice Experts Group meeting in Birmingham, an opportunity for those involved in JISC projects past and present to get together and share experiences.

There were many contributions, but I was particularly interested in the Assessment and Feedback Strand B project ‘The Evaluation of Assessment Diaries and Grademark’, which Alice Lao spoke about. The survey of 250 students and 18 staff interviews, following four years of implementation, seemed to illustrate many of the issues with how technology does or doesn’t work with established practices.

So, for example, the Assessment Diary software sometimes disrupted existing practice in a good way, by prompting reflection at programme-team level on the phenomenon of ‘assessment bunching’. Conversely, where staff saw it as yet another administrative chore, didn’t complete it fully or on time, and didn’t induct students in how to use the functions for accessing their online feedback, the whole thing fell down.

There is more on this at their project blog, and the final report is available from JISC. My personal conclusion was that humans are the key to all technology!

JISC webinar on organisational change

Today’s webinar consisted of accounts from the Curriculum JISC projects, which are finished or finishing, of how they managed large-scale organisational change. Contributors gave tips on what worked at different stages. Some key issues I identified for the AC project at three stages are:

  • Start of project – resisting pressure to deliver too soon, which I don’t think we have a problem with. Also stakeholder engagement, which we may need to address, including IT, registry and QA.
  • Running the project – there was a discussion about formal reports being very dull, and alternatives such as the use of social media, video, stories of problems and how they were solved, and process mapping. But committees (and JISC) want formal reports, so I am not sure how we can be more creative. Another approach suggested was the ‘submarine’ approach, where not much is publicised during the project until there are results to disseminate.
  • Finishing the project – a long way off for us – there was agreement that it is not the system that is the product, but the environment and what people do differently. We have discussed in our meetings that enabling more discussion across the IOE about the role and purpose of formative feedback would be a useful outcome.

When and where to have more conversations about the project at the IOE will be important for us.

See recording at

IOE Learning and Teaching Conference session

Gwyneth gave a presentation about the project at the IOE’s learning and teaching conference. The presentation also involved an update from the institution’s Assessment Working Group. Gwyneth used the opportunity to invite discussion and feedback from staff, which raised several issues for us to think about, including:

  • The issues raised seemed representative of the experiences of most – but not all – of the people in the room. If anything, people expected greater use of essays than was reported in relation to formative feedback.
  • It was felt that people may feel more able to experiment and innovate with formative feedback than with summative feedback.
  • Geography uses a bridging module between PGCE and other Masters-level courses, with a formative presentation and a written summative report.
  • Some areas are already considering feedback on earlier work. In this case, feedback sheets are online; markers are encouraged to read prior feedback before marking new work. However, in this programme there’s no face-to-face feedback, which is why the written feedback needs to be careful and detailed.
  • The same project also involved a standardisation meeting that generated an exemplar that markers use as a point of reference to guide current practice.
  • In some areas, in the final summative work students are asked to reflect on how they’ve taken feedback on board, and also to self-assess the degree to which they’ve done this.
  • Peer feedback was discussed. It was felt that this had worked well in specific programmes (high-quality, specific, supportive feedback was mentioned), although there were issues, including how important it is to help students understand what a good piece of work is, and what the limits of trust and confidence in this could be. It was suggested that working towards peer assessment requires students to make progress in understanding, self-assessing, etc., and so could be a good way of working through some of these issues, helping people understand their own work (and what needs to be done to it) better. It was suggested that this might be easier in a PGCE group, because an environment of trust and discussion is built up across the programme. It could be harder to create this online.
  • There were concerns that the flexibility of the current offer – particularly following curriculum review – could raise issues for coherence of feedback and development. Consistent contact with a tutor could help in relation to this, although there may be an issue with tutors supporting students on modules from other programmes where they don’t have expertise. Feedback on structural aspects (e.g. academic literacy) might be possible to support, however.
  • Getting assessment integrated across modules is important but is likely to take years to achieve.
  • Recognising the time needed for marking, assessment and feedback is important – it might be necessary to spend less time on teaching and more on assessment/feedback.
  • It may help to have a conversation about what good feedback looks like at an institutional level.

Notes from online seminar about Assessment Efficiency

The University of Hertfordshire presented their Strand A iTEAM project about ‘Integrating Technology-Enhanced Assessment Methods’ as well as the ESCAPE project on 15th March 2012.

Educationally effective and resource-effective assessment is the gold standard of assessment methods, but it is often difficult to achieve. In the postgraduate arena, summative assessment in the form of 5,000-word essays is often the norm, but how can we be sure this is educationally and resource effective?

The ESCAPE project attempts to answer this question, providing a toolkit to ‘calculate’ the time spent on different assessment methods (the calculator spreadsheet was not yet publicly available at the time of the session). This approach is not too dissimilar from the LDSE project’s attempt to quantify the learning design process (and would work well as another component of the LDSE toolkit), and it suffers from similar difficulties: the data input is quite often based on guesswork and might not reflect reality adequately. However, the point was made that the toolkit promotes reflection, which might unearth new insights, and, more importantly, it can be used for rough (or highly accurate, depending on the baseline data quality) comparisons of different assessment types.
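For illustration only – this is not the ESCAPE calculator itself, which was not public at the time – a comparison of this kind boils down to simple arithmetic over fixed setup costs and per-student costs. All function names and figures below are hypothetical inputs of the sort such a spreadsheet might take:

```python
def assessment_hours(students, setup_hours, marking_mins_per_student,
                     feedback_mins_per_student):
    """Rough total staff hours for one assessment item (illustrative only)."""
    per_student_hours = (marking_mins_per_student + feedback_mins_per_student) / 60
    return setup_hours + students * per_student_hours

# Hypothetical comparison: a marked essay vs an auto-marked online quiz
# for a cohort of 60 students.
essay = assessment_hours(students=60, setup_hours=4,
                         marking_mins_per_student=40,
                         feedback_mins_per_student=15)
quiz = assessment_hours(students=60, setup_hours=12,
                        marking_mins_per_student=0,
                        feedback_mins_per_student=0)
print(f"Essay: {essay:.0f} staff hours; online quiz: {quiz:.0f} staff hours")
```

The point such a calculation makes visible is the trade-off between upfront effort (writing good quiz questions) and effort that scales with cohort size (marking essays) – though, as noted above, the inputs are often guesswork.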

The online discussion quickly pointed out potential misuse of time calculations, in that they might favour efficiency over quality. However, the presenters insisted that assessment improvements should be about making thoughtful choices and not going for the cheapest option, and the tools were developed with this goal in mind. Indeed, the presenters acknowledge that the real picture is more complex than a set of numbers, and one should not forget the actual purposes of assessment, including the timing of assessment.

The ESCAPE project has therefore developed a series of assessment patterns to visualise how various forms of assessment and the associated feedback are used within modules, and how they interlink. The presenters emphasised that a one-size-fits-all approach is not really desirable – ideally, any assessment item would inform subsequent assessment items, either within one module or even across multiple modules. Examples of assessment patterns are available from the Effective Assessment in a Digital Age Workshops site (see Session 7).

It is this final part of the presentation that displayed the highest relevance to our Assessment Careers project, as the ESCAPE team recognises the value of feedback that affects learning (and other assessment items) across many modules. ESCAPE’s timeline visualisation approach is something we might want to adopt – apparently the timelines/patterns have been used at several institutions already with great success, even as a part of validation processes. Interestingly, a team at Greenwich University picked up this idea and is currently developing an online tool to visualise assessment in modules in a similar way.

Notes from online seminar about Making Assessment Count project

The project team, led by Gunter Saunders at Westminster, have developed a tool – e-Reflect – for linking feedback, self-review questionnaires and student reflections on these in a learning journal supported by tutorials.
Details are at:

Gwyneth attended an online seminar presented by Gunter and had the following thoughts:

  • The principles behind this tool fit well into our AC framework for this project.
  • Automating such a feedback process made it easy for students to use, but it could be frustrating.
  • The benefits are that it encourages learners to act on feedback
  • It all depends on the quality of feedback from staff and it is not clear whether or not using the system encourages staff to reflect on the feedback they give – there are still QA issues for MAC to address.
  • I wonder if there are less complex ways of achieving something similar
  • Nearly all those participating in the seminar voted that a strength of this system is encouraging feedback dialogue and reflection over a whole programme. This fits again with the AC longitudinal approach.
  • Also, this tool is for undergraduates, and the Y/N questions in the questionnaire might not be so well received by PGs, who might expect something more sophisticated.

MAC is worth looking at for the pilots, even if we do not use the toolkit in its present form.

Participating in Assessment “Swap Shop”

This Monday I took part in the JISC assessment technology “swap shop” Elluminate meeting. As well as providing an update on our project and what we’re up to, a few points of relevance to us came up:

  • There’s use of a Moodle plugin for managing assessment, developed by ULCC. Here’s some blurb from their project plan: “The system (which is being developed by our Moodle hosting partner ULCC) will integrate fully with the ELE (Exeter Learning Environment) Moodle installation to allow students to submit coursework through ELE and receive a Turnitin similarity score. Assignments may then be marked on-line using Turnitin GradeMark or Word ‘track changes’. Feedback (which may consist of uploaded files or completed bespoke feedback forms) will be returned to students via ELE. Personal tutors will also be able to view feedback for their tutees.”
  • There’s also quite a simple administrative project about feedback and notification, e.g. the date when feedback to students is due. It just uses the VLE calendar and Outlook.
  • There seems to be quite a lot of overlap with the Dundee project, interACT, which is focused on medical education but is concerned with feedback processes that look a lot like ipsative processes. They’re working with Blackboard rather than Moodle, however.
  • The LDSE modelling tools are being pushed for the programme as a whole, so we’re a bit ahead of the game in that we’ve already committed to using this.

Second team meeting 12/1/12

This second team meeting was used, firstly, to receive a summary of the views of the steering group. The suggestion of including doctoral students in the project was a good one, especially since there are already plans to provide a way of capturing doctoral supervision sessions and feedback in Moodle. Secondly, the researchers reported on progress with the baseline report. The data has been collected, although there may be some gaps to fill in later, and will be synthesised next week, ready for a draft of the report at the end of January.

The remainder of the meeting focused on developing pilot plans, and the REAP assessment principles were discussed. The three pilot leaders present thought that these would be useful for pilot planning and will discuss them further with the programme teams involved.