Assessment & Feedback – from reluctance to emotional response

At the recent JISC Assessment & Feedback programme meeting, I ran a session with the Strand B projects in which we revisited some questions we first discussed a year ago. Thus, instead of ‘What do you want to happen as a result of your project’s assessment and feedback innovation?’, we talked about what has happened. And, rather than ‘How will you know the intended outcomes have been achieved?’, we discussed the indicators and evidence that projects have actually gathered over the last year. These questions are particularly relevant given that the Strand B projects are all about Evidence and Evaluation of assessment- and feedback-related innovations.

The questions were really just to get us started, although the Strand B project teams are such a keen group they didn’t need much encouragement! In fact, we had a very open discussion, and what emerged were some of the issues and benefits of evaluating large-scale changes in assessment and feedback using technology, as well as some interesting findings.

All the project teams want to gather a balanced view of the changes being implemented within their institutions, but many had difficulty collecting data from ‘reluctant users’. In other words, individuals who are reluctant to use a given technology can also be difficult to involve in the evaluation process. This is by no means unique to this context, or to evaluation. Indeed, some projects found that reluctant users also tended to be less likely to take up training opportunities, something that might only be picked up later, when difficulties with using the technology arose. This reinforces Ros Smith’s reflections from the programme meeting on the need to open a dialogue with course teams, so that implementing these kinds of changes is as much about working with people and cultures as with technology. Being ready to capture the views of those who are having difficulties, or offering a light-touch evaluation alternative for reluctant users, might help provide a more balanced stakeholder perspective.

For some projects, the evaluation process itself provided the push for lecturers to engage with online assessment and feedback tools. In one case, a lecturer who had previously remarked that ‘my students don’t want me to use this approach’ took part in a focus group and heard directly from students that they did want to use online tools for assessment. Needless to say, the project team were delighted when the lecturer went on to trial the tools.

Effective staff training was also identified as essential, particularly since the way lecturers communicate the use of tools to students influences student uptake and use. This led on to discussions about the importance of training students, and about how evaluation activity can help in understanding how well students interpret feedback. The aim, essentially, is to ensure that students gain the most from the feedback process itself and are not held back by the tools used to support it.

What surprised a number of projects was how the evaluations had picked up strong emotional reactions to assessment and feedback from both students and staff. There is a wider literature that looks at assessment as an ‘emotional practice’ (Steinberg, 2008), underpinned by studies into the links between learning identities, power and social relationships (e.g. Higgins, 2000). While the Strand B projects might not have set out to study emotional reactions, it seems there will be some interesting findings in this area.

The importance of relationships was also reflected in findings of a mismatch between students and lecturers in their perceptions of the intimacy afforded by online and hard-copy assessment and feedback. Staff felt closer to students, and more in a dialogue with them, when marking hard copy; they wanted to sign or add a personal note to a physical piece of paper. Students, by contrast, felt more able to engage in a dialogue online, perhaps because this was felt to be less intimidating.

During the meeting we also discussed the methods and tools projects have been using for their evaluations, but that will be the subject of another blog post.

*Amended from a post on the Inspire Research blog*

Crystallising Evaluation Designs – A Reality Check for Developing Digital Literacies

by Jay Dempster, JISC Evaluation Associate

DL evaluation visuals

The JISC Developing Digital Literacies Programme team has been supporting its institutional projects to design and undertake a holistic evaluation. Projects are thinking critically about the nature, types and value of evidence as early indicators of change, and several now have some useful visual representations of their project’s sphere of influence and evaluation strategy. A summary produced this month is now available from the JISC Design Studio at: http://bit.ly/designstudio-dlevaluation.

Structuring & mapping evaluation

The point is to reach a stage in designing the evaluation where we can clearly articulate and plan an integrated methodology that informs and drives a project towards its intended outcomes. Projects that have achieved a clearly structured evaluation strategy have:

  1. defined the purpose and outputs of each component;
  2. considered their stakeholder interests & involvement at each stage;
  3. identified (often in consultation with stakeholders) some early indicators of progress towards intended outcomes as well as potential measures of impact/success;
  4. selected appropriate methods/timing for gathering and analysing data;
  5. integrated ways to capture unintended/unexpected outcomes;
  6. identified opportunities to act upon emerging findings (e.g. report/consult/revise), as well as to disseminate final outcomes.

Iterative, adaptive methodologies for evaluation are not easy, yet they are a good fit for these kinds of complex change initiatives. Approaches projects are taking in developing digital literacies across institutions include:

What is meant by ‘evidence’?

Building into the evaluation ways to capture evidence, both from explicit, formal data-gathering activities with stakeholders and from informal, reflective practice on the project’s day-to-day activities, can create a continuous development & review cycle that is immensely beneficial in building an evidence base.

However, it can be unclear to projects what is meant by ‘evidence’ in the context of multi-directional interactions and diverse stakeholder interests. Our approach has been, first, to consider who the evidence is aimed at and, second, to clarify its specific value to them.

This is where evaluation can feed into dissemination, and vice versa: both depend on an acute awareness of one’s target audience (direct and indirect beneficiaries/stakeholders) and lead to an appropriate and effective “message to market match”.

In the recent evaluation support webinar for projects, we asked participants to consider the extent to which they could rehearse their ‘evidence interpretation’ *before* collecting it, for instance by exploring:

  1. Who are your different stakeholders and what are they most likely to be interested in?
  2. What questions or concerns might they have?
  3. What form of information/evidence is most likely to suit their needs?

An evaluation reality-check

We prefaced this with an ‘evaluation reality-check’ questionnaire, which proved a useful tool both for projects’ self-reflection and for the support team to capture a snapshot of where projects are with the overall design of their evaluations. What can we learn from these collective strategies, and how useful is the data being collected?

As projects share and discuss their evaluation strategies, we are developing a collective sense of how they are identifying, using and refining their indicators and measures for the development of digital literacies in relation to intended aims. We are also conscious of the need to build in mechanisms for capturing unexpected outcomes.

Through representing evaluation designs visually and reflecting on useful indicators and measures of change, we are seeing greater clarity in how projects are implementing their evaluation plans. Working with the grain of the very processes that projects aim to support in developing digital literacies in their target groups, our intention is that:

What’s up with evaluation for developing digital literacies?

by Jay Dempster, JISC Evaluation Associate

'Unbrella'

Sound evaluation designs for developing digital literacies stem from projects achieving clarity in two aspects: first, having a strong sense of what they are trying to do, for whom (beneficiaries) and in what ways; and second, identifying relevant and valid ways of measuring outcomes related to those aims and activities.

Outcomes may be short-term, tangible outputs and benefits within the project’s funding period; medium-term indicators of impact during and beyond the project lifetime; or shared successes that make a difference over the long term to institutional strategies and practices.

In the context of the JISC Developing Digital Literacies programme, a review of project plans and discussion with project teams reveals aims and outcomes that span various levels and affect many different stakeholders, including:

With such scope and complexity come inevitable challenges for evaluation. As with any change initiative, it has been important for projects to avoid trying to ‘change the world’ and to resist biting off more than they can chew within the funded time frame and resources. Focusing early activities on baselining has been one way of identifying the relevant scope and parameters.

Right now, the synthesis and evaluation support role has been about helping projects to clarify, ratify and stratify the framework they are using for developing digital literacies, as well as to work out the practicalities of baselining methods and tools. Evaluation is supported by baselining, but it does a different job. Support and guidance have centred on not seeing evaluation as necessarily separate and distinct from core project activities.

For many projects, baselining has helped kick start this more integrated approach to evaluation, one that involves key stakeholders in continuous data gathering and reflection. Some of the audits and surveys created for baselining may be reused or repurposed at key stages across the project lifecycle. Project plans have evolved as teams find opportunities to carry out evaluation tasks as part of the project’s development and consultative activities.

Projects will also need to reflect regularly on the effectiveness of their work processes and collaborations to maximise their short and medium term outcomes and bring about their organisational change objectives in the longer term. Projects are encouraged to develop their plans iteratively and transparently on the programme wiki, sharing the various ways in which they have been capturing evidence that is both credible and relevant.

We’ll be running an ‘evaluation’ webinar next month to talk through how projects have approached some of the ideas and challenges emerging from their plans and baseline reports. As we resolve some common challenges of evaluation collaboratively, we’ll be back to blog some more.

Image CC BY-NC-SA ecstaticist