Crystallising Evaluation Designs – A Reality Check for Developing Digital Literacies

by Jay Dempster, JISC Evaluation Associate

DL evaluation visuals

The JISC Developing Digital Literacies Programme team has been supporting its institutional projects to design and undertake a holistic evaluation. Projects are thinking critically about the nature, types and value of evidence as early indicators of change, and several now have useful visual representations of their sphere of influence and evaluation strategy. A summary produced this month is now available from the JISC Design Studio at:

Structuring & mapping evaluation

The point is to reach a stage in designing the evaluation where we can clearly articulate and plan an integrated methodology that informs and drives a project towards its intended outcomes. Projects that have achieved a clearly structured evaluation strategy have:

  1. defined the purpose and outputs of each component;
  2. considered their stakeholder interests & involvement at each stage;
  3. identified (often in consultation with stakeholders) some early indicators of progress towards intended outcomes as well as potential measures of impact/success;
  4. selected appropriate methods/timing for gathering and analysing data;
  5. integrated ways to capture unintended/unexpected outcomes;
  6. identified opportunities to act upon emerging findings (e.g. report/consult/revise), as well as to disseminate final outcomes.

Iterative, adaptive methodologies for evaluation are not easy, yet they are a good fit for these kinds of complex change initiatives. Approaches projects are taking in developing digital literacies across institutions include:

What is meant by ‘evidence’?

Building into the evaluation ways to capture evidence, both from explicit, formal data-gathering activities with stakeholders and from informal reflection on the project’s day-to-day activities, can offer a continuous development and review cycle that is immensely beneficial to building an evidence base.

However, it can be unclear to projects what is meant by ‘evidence’ in the context of multi-directional interactions and diverse stakeholder interests. We have first considered who the evidence is aimed at and, second, clarified its specific value to them.

This is where evaluation can feed into dissemination, and vice versa, both being based upon an acute awareness of one’s target audience (direct and indirect beneficiaries/stakeholders) and leading to an appropriate and effective “message to market match” for dissemination.

In the recent evaluation support webinar for projects, we asked participants to consider the extent to which they can rehearse their ‘evidence interpretation’ BEFORE they collect it, for instance, by exploring:

  1. Who are your different stakeholders and what are they most likely to be interested in?
  2. What questions or concerns might they have?
  3. What form of information/evidence is most likely to suit their needs?

An evaluation reality-check

We prefaced this with an ‘evaluation reality-check’ questionnaire, which proved a useful tool both for projects’ self-reflection and for the support team to capture a snapshot of where projects are with an overall design for their evaluations. What can we learn from these collective strategies, and how useful is the data being collected?

By sharing and discussing their evaluation strategies, projects are helping us develop a collective sense of how they are identifying, using and refining their indicators and measures for the development of digital literacies in relation to intended aims. We are also conscious of the need to build in mechanisms for capturing unexpected outcomes.

Through representing evaluation designs visually and reflecting on useful indicators and measures of change, we are seeing greater clarity in how projects are implementing their evaluation plans. Working with the grain of the very processes they aim to support in developing digital literacies in their target groups, our intention is that:

What’s up with evaluation for developing digital literacies?

by Jay Dempster, JISC Evaluation Associate


Sound evaluation designs for developing digital literacies stem from projects achieving clarity in two aspects: first, having a strong sense of what they are trying to do, for whom (beneficiaries) and in what ways; and second, identifying relevant and valid ways of measuring outcomes related to those aims and activities.

Outcomes may be short-term tangible outputs and benefits within the project’s funding period; medium-term indicators of impact during and beyond the project lifetime; or shared successes that make a difference over the long term to institutional strategies and practices.

In the context of the JISC Developing Digital Literacies programme, a review of project plans and discussion with project teams reveals aims and outcomes that span various levels and affect many different stakeholders, including:

With such scope and complexity come inevitable challenges to evaluation. As with any change initiative, it’s been important for projects to avoid trying to ‘change the world’: to resist biting off more than they can chew within the funded time frame and resources. Focusing early activities on baselining has been one way of identifying the relevant scope and parameters.

Right now, the synthesis and evaluation support role has been about helping projects to clarify, ratify and stratify the framework they are using for developing digital literacies, as well as to identify the practicalities of baselining methods and tools. Evaluation is supported by baselining, but it does a different job. Support and guidance have centred on not seeing evaluation as necessarily separate and distinct from core project activities.

For many projects, baselining has helped kick start this more integrated approach to evaluation, one that involves key stakeholders in continuous data gathering and reflection. Some of the audits and surveys created for baselining may be reused or repurposed at key stages across the project lifecycle. Project plans have evolved as teams find opportunities to carry out evaluation tasks as part of the project’s development and consultative activities.

Projects will also need to reflect regularly on the effectiveness of their work processes and collaborations to maximise their short and medium term outcomes and bring about their organisational change objectives in the longer term. Projects are encouraged to develop their plans iteratively and transparently on the programme wiki, sharing the various ways in which they have been capturing evidence that is both credible and relevant.

We’ll be running an ‘evaluation’ webinar next month to talk through how projects have approached some of the ideas and challenges emerging from their plans and baseline reports. As we resolve some common challenges of evaluation collaboratively, we’ll be back to blog some more.

Image CC BY-NC-SA ecstaticist