Measuring Success in e‑Learning — A Multi‑Dimensional Approach


  • Malcolm Bell
  • Stephen Farrier


measuring, benchmarking, methodology


In 1999 Northumbria University published a strategy document entitled "Towards the web‑enabled University". This prefaced an assessment of need and of available platforms for developing online teaching and learning which, in turn, led in 2001 to the roll‑out and institution‑wide adoption of the Blackboard Virtual Learning Environment (VLE), now referred to as our eLearning Platform (eLP). Within a very few years we had over 90% take‑up by academic staff, and the eLP had become integral to the learning of virtually all our students. What has always been relatively easy to measure is the number of users, frequency of use, number of courses, levels of technological infrastructure, etc. However, with the publication of the Higher Education Funding Council for England (HEFCE) e‑learning strategy in 2005, it became apparent that such quantitative data were not particularly helpful in measuring how the university matched up to the 10‑year aspirations of that document and its measures of success. Consequently, an ongoing exploration was embarked upon to measure where we were and what we should prioritise in order to embed e‑learning as envisaged within the HEFCE strategy. This involved a number of key approaches:

  • The measures were broken down into manageable sizes, creating sixteen measures in all, with descriptors ranging from "full achievement" to "no progress to date" and suggested sources of information to support each description.
  • A series of interviews with key staff was set up, in which they were asked to rank where they felt the university stood against each measure and to indicate what evidence would support their views.
  • An online academic staff survey was developed, inviting staff to explore a number of statements based around the HEFCE criteria and express degrees of agreement. This was followed up by a range of face‑to‑face interviews.
  • An online student survey was developed, in which students were asked to express degrees of agreement with similar statements.
Student responses were followed up with an independent student focus group exploring the issues in greater depth. The outcomes of the three approaches were then combined and an interim report prepared which identified strengths and areas for further development; some of the latter are already being addressed.

Subsequently, the university joined phase 2 of a national exercise benchmarking e‑learning in Higher Education, running from May to December 2007 and supported by the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC). During this exercise we engaged in a deeper exploration against a wider set of criteria, based upon the "Pick & Mix" methodology (Bacsich, 2007). Pick & Mix comprises 20 core criteria plus the option of a number of supplementary criteria. Through these approaches we will be able to set a baseline for where we currently stand, allowing us to revisit the criteria later to measure our progress in those areas we identify for development. This paper shares the methodologies used, identifies key outcomes and reflects upon those outcomes from both an institutional and a sectoral perspective.
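The scheme of measures with ordinal descriptors, ranked independently by several interviewees, can be sketched in code. The snippet below is purely illustrative and is not the instrument used in the study: only the endpoint descriptors ("full achievement" and "no progress to date") come from the paper; the intermediate labels, the five‑point scale, and the use of a median to combine rankings are assumptions for the sake of the example.

```python
# Illustrative sketch only: one way to represent a benchmarking measure
# with ordinal descriptors and combine several interviewees' rankings
# into a baseline reading per measure.
from statistics import median

# Only the first and last labels appear in the paper; the intermediate
# labels and the five-point scale are hypothetical.
DESCRIPTORS = [
    "no progress to date",       # 0
    "early progress",            # 1
    "partial achievement",       # 2
    "substantial achievement",   # 3
    "full achievement",          # 4
]

def baseline(rankings: dict[str, list[int]]) -> dict[str, str]:
    """Map each measure to the median descriptor across interviewees."""
    return {m: DESCRIPTORS[int(median(r))] for m, r in rankings.items()}

# Example: three interviewees rank two (hypothetical) of the sixteen measures.
scores = {
    "staff take-up of the eLP": [4, 3, 4],
    "online student support":   [2, 2, 3],
}
print(baseline(scores))
# {'staff take-up of the eLP': 'full achievement',
#  'online student support': 'partial achievement'}
```

Taking the median rather than the mean keeps the combined result on the ordinal descriptor scale and is robust to a single outlying ranking.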



1 Apr 2008