This was the final conference for an NTFS-funded project on programme-focused assessment, or PFA. PFA is already practised in the US, e.g. at Alverno College, and there are examples from the project in the UK too.
PFA means assessing programme learning outcomes rather than module learning outcomes, and it involves integrative assessment drawing on more than one module.
The PFA approach complements our Assessment Career approach in that it encourages linking of modules and cumulative work rather than the fragmentation that is often found under modularisation.
Other benefits include the possibility of synoptic assessments that can draw on skills from the whole programme. The EdD thesis would be a good example of this.
We discussed a few issues around implementing PFA, including the possible need for flexible regulations and better working in programme teams.
I think that combining PFA and the Assessment Career approach to feedback would work well and enhance assessment on a large scale.
I presented a paper on ipsative assessment at the HEA event “Self-assessment: strategies and software to stimulate learning” held at the Open University.
The concept of ipsative assessment attracted much interest but was out of context with the rest of the day, which focussed on online or computer-based self-testing. I would query whether self-testing is self-assessment, since it is the academic who designed the test who is doing the assessing via computer software. There was, however, discussion of certainty-based marking, which does require students to self-assess how confident they are about an answer.
Software from NZ, "PeerWise", was mentioned as being innovative in that it allows learners to create their own questions and comment on each other's questions as well as answer them. Again this was for multiple-choice questions, but the idea could be extended to longer-answer questions. I liked the idea of a community of learners discussing assessment questions.
The replay of the event is now available at: http://stadium.open.ac.uk/stadia/preview.php?whichevent=1955&s=1.
Copies of the presentations are available on the HEA’s website at: http://www.heacademy.ac.uk/events/detail/2012/seminars/themes/tw037_ou
I presented the pilot plans and the pilot methodology to the IOE Teaching Committee on May 23rd. The plans were very well received, with words like 'fantastic' used, and Mary commented that it was good to hear such enthusiasm. There was a question about whether the assignment forms would apply to formative or summative assessment or both; Ian's pilot, as he pointed out, has both, with two slightly different forms.
Today's webinar consisted of accounts from the JISC Curriculum projects, which are finished or finishing, on how they managed large-scale organisational change. Contributors gave tips on what worked at different stages. Some key issues I identified for the AC project at three stages are:
- Start of project: resisting pressure to deliver too soon, which I don't think we have a problem with. Also stakeholder engagement, which maybe we need to address, including IT, registry and QA.
- Running the project: there was a discussion about formal reports being very dull, and alternatives such as use of social media, video, stories of problems and how they were solved, and use of process mapping. But committees (and JISC) want formal reports, so I am not sure how we can be more creative. Another approach suggested was the submarine approach, where not much is publicised during the project until there are results to disseminate.
- Finishing the project (a long way off for us): there was agreement that it is not the system that is the product but the environment and what people do differently. We have discussed in our meetings that enabling more discussion across the IOE about the role and purpose of formative feedback would be a useful outcome.
When and where to have more conversations about the project at the IOE will be important for us.
See recording at http://bit.ly/jiscdslschange
This is one of the HEA seminar series on assessment and was well attended with some enthusiasm from participants.
There was a real mix of traditional and innovative assessment approaches presented, including examinations, MCQs, scenario-based assessment, use of research folders and lecturecasts, and these were sometimes combined in the same programme.
Two interesting ideas:
1. Carl Gombrich, Philosophy: use of a lecturecast sent to students in advance, with students posting questions they would like answered online and voting on the most popular questions, which then got answered in the taught session. Although based on transmission (the lecture), the student questions were interpreted as self-formative assessment.
2. Chiara Ambrosio, History of Science: students on the module researched a topic, which could be a continuation of something a previous student had started researching. The aim was eventual publication of the research. A research folder was presented for summative assessment, like a portfolio.
Students also had to read others' projects and were tested on these in an exam, which seemed a bit incongruous to me but seemed to work. So there was a clear link between peer formative assessment and summative assessment. A student joined the presenters and was very enthusiastic about the assessment approach.
See outputs soon at http://www.heacademy.ac.uk/events/detail/2012/seminars/themes/ts049_ucl.
The project team led by Gunter Saunders at Westminster have developed a tool – e-Reflect – for linking feedback, self-review questionnaires and student reflections on these in a learning journal supported by tutorials.
Details are at: https://sites.google.com/a/staff.westminster.ac.uk/mace/home
Gwyneth attended an online seminar presented by Gunter and had the following thoughts:
- The principles behind this tool fit well into our AC framework for this project.
- Automating such a feedback process made it easy for students to use, but the automation could also be frustrating
- The benefits are that it encourages learners to act on feedback
- It all depends on the quality of feedback from staff and it is not clear whether or not using the system encourages staff to reflect on the feedback they give – there are still QA issues for MAC to address.
- I wonder if there are less complex ways of achieving something similar
- Nearly all those participating in the seminar voted that a strength of this system is encouraging feedback dialogue and reflection over a whole programme. This fits again with the AC longitudinal approach.
- Also, this tool is for undergraduates, and the Y/N questions in the questionnaire might not be so well received by PGs, who might expect something more sophisticated.
MAC is worth looking at for the pilots even if we do not use the toolkit in its present form.
A brief visit to the British Museum after lunch.
The CAMEL group for our project, which includes Manchester Metropolitan University, Dundee and Queen's University Belfast as well as the IOE, met with our critical friend Peter Chatterton on 26.1.12.
This second team meeting was used firstly to receive a summary of the views of the steering group. The suggestion of including doctoral students in the project was a good idea, especially since there are already plans to provide a way of capturing doctoral supervision sessions and feedback in Moodle. Secondly, the researchers reported on progress with the baseline report. The data has been collected, although there may be some gaps to fill in later, and will be synthesised next week ready for a draft of the report at the end of January.
The remainder of the meeting focussed on developing pilot plans, and the REAP assessment principles were discussed. The three pilot leaders present thought that these would be useful for pilot planning and will discuss them further with the programme teams involved.