Archive for category Publications

Learning analytics: where is the evidence?

Our main paper at the LAK conference looked at the state of evidence in the field. Drawing on the work collated in the LACE project Evidence Hub, it seems that there is, as yet, very little clear evidence that learning analytics improve learning or teaching. The paper concludes with a series of suggestions about how we can work as a community to improve the evidence base of the field.

The room was full to overflowing for our talk and for the other two talks in the session on the ethics of learning analytics. If you weren’t able to get in and you want to understand the links between jelly beans, a dead salmon, Bob Dylan, Buffy the Vampire Slayer and learning analytics, I shall share the link to the recorded session as soon as I have it.

Ferguson, Rebecca and Clow, Doug (2017). Where is the evidence? A call to action for learning analytics. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 56–65.

Abstract

Where is the evidence for learning analytics? In particular, where is the evidence that it improves learning in practice? Can we rely on it? Currently, there are vigorous debates about the quality of research evidence in medicine and psychology, with particular issues around statistical good practice, the ‘file drawer effect’, and ways in which incentives for stakeholders in the research process reward the quantity of research produced rather than the quality. In this paper, we present the Learning Analytics Community Exchange (LACE) project’s Evidence Hub, an effort to relate research evidence in learning analytics to four propositions about learning analytics: whether they support learning, support teaching, are deployed widely, and are used ethically. Surprisingly little evidence in this strong, specific sense was found, and very little was negative (7%, N=123), suggesting that learning analytics is not immune from the pressures in other areas. We explore the evidence in one particular area in detail (whether learning analytics improve teaching and learner support in the university sector), and set out some of the weaknesses of the evidence available. We conclude that there is considerable scope for improving the evidence base for learning analytics, and set out some suggestions of ways for various stakeholders to achieve this.

LAK17: Failathon

Monday 13 March was the day of the second LAK Failathon, this time held at the LAK17 conference at Simon Fraser University in Vancouver. This year, we took the theme ‘Beyond Failure’ and the workshop led into a paper later in the conference and then to a crowd-sourced paper on how we can work to avoid failure both on individual projects and across the learning analytics community as a whole.

We also took a consciously international approach, and so workshop leaders included Doug Clow and me from Europe, Mike Sharkey from North America, Cecilia Aguerrebere from South America, Kirsty Kitto from Australia and Yong-Sang Cho from Asia.

Clow, Doug; Ferguson, Rebecca; Kitto, Kirsty; Cho, Yong-Sang; Sharkey, Mike and Aguerrebere, Cecilia (2017). Beyond failure: the 2nd LAK Failathon. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 504–505.

If you can’t access the workshop outline behind the paywall, contact me for a copy.

Abstract

The 2nd LAK Failathon will build on the successful event in 2016 and extend the workshop beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other’s failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other’s mistakes. It was very successful, and there was strong support for running it as an annual event. This workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other’s failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base.

CBA: impact on student engagement

Our new paper, with lead author Quan Nguyen, is now available online in Computers in Human Behavior. It examines the designs of computer-based assessment (CBA) and their impact on student engagement, student satisfaction and pass rates.

Computers in Human Behavior is locked behind a paywall, so contact me for a copy if you can’t get access to the paper.

Abstract

Many researchers who study the impact of computer-based assessment (CBA) focus on the affordances or complexities of CBA approaches in comparison to traditional assessment methods. This study examines how CBA approaches were configured within and between modules, and the impact of assessment design on students’ engagement, satisfaction, and pass rates. The analysis was conducted using a combination of longitudinal visualisations, correlational analysis, and fixed-effect models on 74 undergraduate modules and their 72,377 students. Our findings indicate that educators designed very different assessment strategies, which significantly influenced student engagement as measured by time spent in the virtual learning environment (VLE). Weekly analyses indicated that assessment activities were balanced with other learning activities, which suggests that educators tended to aim for a consistent workload when designing assessment strategies. Since most of the assessments were computer-based, students spent more time on the VLE during assessment weeks. By controlling for heterogeneity within and between modules, learning design could explain up to 69% of the variability in students’ time spent on the VLE. Furthermore, assessment activities were significantly related to pass rates, but no clear relation with satisfaction was found. Our findings highlight the importance of CBA and learning design to how students learn online.

Nguyen, Quan; Rienties, Bart; Toetenel, Lisette; Ferguson, Rebecca and Whitelock, Denise (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior (Early access).
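
To make the modelling approach more concrete, here is a minimal sketch, in Python with statsmodels, of a module fixed-effects regression of weekly VLE time on assessment activity, in the spirit of the abstract. It is not the authors’ analysis: the data file and column names (module, week, assessment_activities, vle_minutes) are hypothetical stand-ins for the variables the paper describes.

    # Minimal sketch: module fixed-effects model of weekly VLE time.
    # Hypothetical long-format data, one row per module-week.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("module_weeks.csv")  # hypothetical file name

    # C(module) adds a dummy per module, absorbing between-module
    # heterogeneity, so the assessment coefficient reflects
    # within-module (week-to-week) variation in VLE time.
    model = smf.ols("vle_minutes ~ assessment_activities + C(module)",
                    data=df).fit()

    print(model.summary())
    print(f"Share of variance in VLE time explained: {model.rsquared:.2f}")

Including module dummies rather than pooling all the data is what separates within-module effects from differences between modules, which is the sense in which a share of the variability in students’ VLE time can be attributed to learning design.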

Innovating pedagogy: China

The Innovating Pedagogy 2016 report is now available in Chinese.

First page in Chinese

Dimensions of personalisation in TEL

New paper out in the British Journal of Educational Technology, co-authored with a host of people: lead author Liz FitzGerald, plus Natalia Kucirkova, Ann Jones, Simon Cross, Thea Herodotou, Garron Hillaire and Eileen Scanlon.

The framework proposed in the paper has six dimensions:

  1. what is being personalised
  2. type of learning
  3. personal characteristics of the learner
  4. who/what is doing the personalisation
  5. how personalisation is carried out
  6. impact / beneficiaries

Abstract

Personalisation of learning is a recurring trend in our society, referred to in government speeches, popular media, conference and research papers and technological innovations. This latter aspect – of using personalisation in technology-enhanced learning (TEL) – has promised much but has not always lived up to the claims made. Personalisation is often perceived to be a positive phenomenon, but it is often difficult to know how to implement it effectively within educational technology.

In order to address this problem, we propose a framework for the analysis and creation of personalised TEL. This article outlines and explains this framework with examples from a series of case studies. The framework serves as a valuable resource in order to change or consolidate existing practice and suggests design guidelines for effective implementations of future personalised TEL.

FitzGerald, Elizabeth; Kucirkova, Natalia; Jones, Ann; Cross, Simon; Ferguson, Rebecca; Herodotou, Christothea; Hillaire, Garron and Scanlon, Eileen (2017). Dimensions of personalisation in technology-enhanced learning: a framework and implications for design. British Journal of Educational Technology (early view).

MOOCs: What the UK research tells us

Our latest quality enhancement report, MOOCs: What the Research of FutureLearn’s UK Partners Tells Us, came out in late January 2017. The report was co-authored with Tim Coughlan, Christothea Herodotou and Eileen Scanlon. It follows an earlier report on what MOOC research from The Open University tells us.

The report provides brief summaries of, and links to, all accessible publications stored in the repositories of FutureLearn’s UK academic partners that deal with research on MOOCs. Where these publications made recommendations that could be taken up, these recommendations are highlighted within the report. Full references for all studies are provided in the bibliography.

Studies are divided thematically, and the report contains sections on MOOCs as a field, pedagogy and teaching, accessibility, retention, motivation and engagement, assessment and accreditation, study skills, MOOCs around the world, and sustainability.

The report contains 59 recommendations that have emerged from the publications and each of these is linked to the research study that generated it.

MOOC priority areas

  1. Develop a strategic approach to learning at scale.
  2. Develop appropriate pedagogy for learning at scale.
  3. Identify and share effective learning designs.
  4. Support discussion more effectively.
  5. Clarify learner expectations.
  6. Develop educator teams.
  7. Widen access.
  8. Develop new approaches to assessment and accreditation.

Research Evidence on the Use of Learning Analytics: Implications for Education Policy

The final report on our study of learning analytics for European educational policy (LAEP) is now out.

Research Evidence on the Use of Learning Analytics: Implications for Education Policy brings together the findings of a literature review; case studies; an inventory of tools, policies and practices; and an expert workshop.

The report also provides an Action List for policymakers, practitioners, researchers and industry members to guide work in Europe.

Learning Analytics: Action List

Policy leadership and governance practices

  • Develop common visions of learning analytics that address strategic objectives and priorities
  • Develop a roadmap for learning analytics within Europe
  • Align learning analytics work with different sectors of education
  • Develop frameworks that enable the development of analytics
  • Assign responsibility for the development of learning analytics within Europe
  • Continuously work on reaching common understanding and developing new priorities

Institutional leadership and governance practices

  • Create organisational structures to support the use of learning analytics and help educational leaders to implement these changes
  • Develop practices that are appropriate to different contexts
  • Develop and employ ethical standards, including data protection

Collaboration and networking

  • Identify and build on work in related areas and other countries
  • Engage stakeholders throughout the process to create learning analytics that have useful features
  • Support collaboration with commercial organisations

Teaching and learning practices

  • Develop learning analytics that make good use of pedagogy
  • Align analytics with assessment practices

Quality assessment and assurance practices

  • Develop a robust quality assurance process to ensure the validity and reliability of tools
  • Develop evaluation checklists for learning analytics tools

Capacity building

  • Identify the skills required in different areas
  • Train and support researchers and developers to work in this field
  • Train and support educators to use analytics to support achievement

Infrastructure

  • Develop technologies that enable development of analytics
  • Adapt and employ interoperability standards

Other resources related to the LAEP project – including the LAEP Inventory of learning analytics tools, policies and practices – are available on Cloudworks.
