Archive for category Publications
The seventh Innovating Pedagogy report was published on 3 January 2019 and is free to download. This is the second of the reports with me as editor, produced by The Open University, this year in collaboration with the Centre for the Science of Learning & Technology (SLATE) at the University of Bergen, Norway.
The reports are now published at the start of each year, so Innovating Pedagogy 2019 appeared in January, at the start of The Open University’s 50th year. It follows the sixth report, Innovating Pedagogy 2017, which was published in December 2017.
The report, like previous ones, proposes ten innovations that are already in currency but have not yet had a profound influence on education in their current form.
- Playful learning
- Learning with robots
- Decolonising learning
- Drone-based learning
- Learning through wonder
- Action learning
- Virtual studios
- Place-based learning
- Making thinking visible
- Roots of Empathy
The report was written up in the Times Higher on the day it was launched, under the heading ‘Teaching with drones: coming to a classroom near you?’
The AACE Review for January 2019, published by the Association for the Advancement of Computing in Education, included ‘Behind the Scenes of Innovating Pedagogy 2019 – An Interview with Rebecca Ferguson’.
Ferguson, Rebecca; Coughlan, Tim; Egelandsdal, Kjetil; Gaved, Mark; Herodotou, Christothea; Hillaire, Garron; Jones, Derek; Jowers, Iestyn; Kukulska-Hulme, Agnes; McAndrew, Patrick; Misiejuk, Kamila; Ness, Ingun Johanna; Rienties, Bart; Scanlon, Eileen; Sharples, Mike; Wasson, Barbara; Weller, Martin and Whitelock, Denise (2019). Innovating Pedagogy 2019: Open University Innovation Report 7. The Open University, Milton Keynes.
Enhancing Learning and Teaching with Technology: What the Research Says was published by the Institute of Education Press on 24 January 2018. The book was inspired by a seminar series run at the Institute that focused on research findings about educational technology. It was officially launched at the BETT Show in London.
Our chapter focuses on MOOCs and draws on research publications by UK-based partners of the FutureLearn platform.
Introduction to the chapter
Free online courses that provide learning at scale have the potential to open up education around the world. MOOCs now engage millions of learners. For example, FutureLearn, the UK’s largest MOOC provider, passed 6m registered learners in 2017, with 75% of these outside the UK. Coursera, the world’s largest platform, claimed 24m learners worldwide in March 2017, of whom over half a million were UK learners.
In this section, we explore what the research tells us about how MOOCs need to be developed in order to help provide education for all. This research was carried out at UK universities partnered with the FutureLearn MOOC platform. When it was carried out, FutureLearn had 64 university partners, including 29 within the UK, all linked by the FutureLearn Academic Network (FLAN).
Ferguson, Rebecca; Herodotou, Christothea; Coughlan, Tim; Scanlon, Eileen and Sharples, Mike (2018). MOOC development: priority areas. In: Luckin, Rosemary ed. Enhancing Learning and Teaching with Technology: What the Research Says. London: UCL IOE Press.
On 7 December 2017 we launched Innovating Pedagogy 2017. This is the sixth in a series of reports that explores new forms of teaching, learning and assessment. It is the first of the series on which I have been lead author, taking over from Mike Sharples, who initiated the series and remains an author. This year, the report was produced by The Open University in collaboration with the Learning In a NetworKed Society (LINKS) Israeli Center of Research Excellence (I-CORE).
All the Innovating Pedagogy reports are released under a Creative Commons licence and can be downloaded free of charge.
The ten innovative pedagogies proposed in this year’s report are:
- Big-data inquiry: thinking with data
- Learners making science
- Navigating post-truth societies
- Immersive learning
- Learning with internal values
- Student-led analytics
- Intergroup empathy
- Humanistic knowledge-building communities
- Open textbooks
- Spaced learning
Our fellow authors at LINKS worked on a translation, and a Hebrew version of the report is now available to download from the Innovating Pedagogy website.
Our main paper at the LAK conference looked at the state of evidence in the field. Drawing on the work collated in the LACE project Evidence Hub, it seems that there is, as yet, very little clear evidence that learning analytics improve learning or teaching. The paper concludes with a series of suggestions about how we can work as a community to improve the evidence base of the field.
The room was full to overflowing for our talk and for the other two talks in the session on the ethics of learning analytics. If you weren’t able to get in and you want to understand the links between jelly beans, a dead salmon, Bob Dylan, Buffy the Vampire Slayer and learning analytics, I shall share the link to the recorded session as soon as I have it.
Ferguson, Rebecca and Clow, Doug (2017). Where is the evidence? A call to action for learning analytics. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 56–65.
Where is the evidence for learning analytics? In particular, where is the evidence that it improves learning in practice? Can we rely on it? Currently, there are vigorous debates about the quality of research evidence in medicine and psychology, with particular issues around statistical good practice, the ‘file drawer effect’, and ways in which incentives for stakeholders in the research process reward the quantity of research produced rather than the quality. In this paper, we present the Learning Analytics Community Exchange (LACE) project’s Evidence Hub, an effort to relate research evidence in learning analytics to four propositions about learning analytics: whether they support learning, support teaching, are deployed widely, and are used ethically. Surprisingly little evidence in this strong, specific sense was found, and very little was negative (7%, N=123), suggesting that learning analytics is not immune from the pressures in other areas. We explore the evidence in one particular area in detail (whether learning analytics improve teaching and learner support in the university sector), and set out some of the weaknesses of the evidence available. We conclude that there is considerable scope for improving the evidence base for learning analytics, and set out some suggestions of ways for various stakeholders to achieve this.
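The Evidence Hub approach described in the abstract amounts to tallying individual pieces of evidence against the four propositions and noting how many report negative findings. A minimal sketch of that tally is below; the records are invented placeholders, not actual Evidence Hub entries.

```python
from collections import Counter

# Hypothetical evidence records: (proposition, polarity of the finding).
# The real Evidence Hub held 123 items, of which 7% were negative.
evidence = [
    ("supports learning", "positive"),
    ("supports learning", "negative"),
    ("supports teaching", "positive"),
    ("supports teaching", "positive"),
    ("deployed widely", "positive"),
    ("used ethically", "positive"),
]

# Count items per proposition, and the share reporting negative findings.
by_proposition = Counter(p for p, _ in evidence)
negative_share = sum(1 for _, pol in evidence if pol == "negative") / len(evidence)

print(by_proposition)
print(f"{negative_share:.0%} of items report negative findings")
```

The point of the paper is precisely that, once evidence is organised this way, the cells are surprisingly sparse and negative findings are rarer still.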
Monday 13 March was the day of the second LAK Failathon, this time held at the LAK17 conference at Simon Fraser University in Vancouver. This year, we took the theme ‘Beyond Failure’ and the workshop led into a paper later in the conference and then to a crowd-sourced paper on how we can work to avoid failure both on individual projects and across the learning analytics community as a whole.
We also took a consciously international approach, and so workshop leaders included Doug Clow and me from Europe, Mike Sharkey from North America, Cecilia Aguerrebere from South America, Kirsty Kitto from Australia and Yong-Sang Cho from Asia.
Clow, Doug; Ferguson, Rebecca; Kitto, Kirsty; Cho, Yong-Sang; Sharkey, Mike and Aguerrebere, Cecilia (2017). Beyond failure: the 2nd LAK Failathon. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 504–505.
If you can’t access the workshop outline behind the paywall, contact me for a copy.
The 2nd LAK Failathon will build on the successful event in 2016 and extend the workshop beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other’s failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other’s mistakes. It was very successful, and there was strong support for running it as an annual event. This workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other’s failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base.
Our new paper – lead author Quan Nguyen – is now available online in Computers in Human Behavior. It examines the designs of computer-based assessment and their impact on student engagement, student satisfaction and pass rates.
Computers in Human Behavior is locked behind a paywall, so contact me for a copy if you can’t get access to the paper.
Many researchers who study the impact of computer-based assessment (CBA) focus on the affordances or complexities of CBA approaches in comparison to traditional assessment methods. This study examines how CBA approaches were configured within and between modules, and the impact of assessment design on students’ engagement, satisfaction, and pass rates. The analysis was conducted using a combination of longitudinal visualisations, correlational analysis, and fixed-effect models on 74 undergraduate modules and their 72,377 students. Our findings indicate that educators designed very different assessment strategies, which significantly influenced student engagement as measured by time spent in the virtual learning environment (VLE). Weekly analyses indicated that assessment activities were balanced with other learning activities, which suggests that educators tended to aim for a consistent workload when designing assessment strategies. Since most of the assessments were computer-based, students spent more time on the VLE during assessment weeks. By controlling for heterogeneity within and between modules, learning design could explain up to 69% of the variability in students’ time spent on the VLE. Furthermore, assessment activities were significantly related to pass rates, but no clear relation with satisfaction was found. Our findings highlight the importance of CBA and learning design to how students learn online.
Nguyen, Quan; Rienties, Bart; Toetenel, Lisette; Ferguson, Rebecca and Whitelock, Denise (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior (Early access).
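The fixed-effect models mentioned in the abstract control for unobserved differences between modules (and within them over time) so that the estimated link between assessment workload and VLE time is not confounded by module-level baselines. The sketch below shows the core idea, the within transformation, on invented data; the numbers, variable names and simple one-regressor setup are illustrative assumptions, not the study’s actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: weekly VLE time for several modules, where each
# module has its own unobserved baseline (the "fixed effect") and
# assessment workload drives extra time online. All values are invented.
n_modules, n_weeks = 6, 30
true_effect = 2.5                         # extra VLE hours per assessment hour
baselines = rng.normal(10, 3, n_modules)  # per-module baselines

module_ids, assessment, vle_time = [], [], []
for m in range(n_modules):
    a = rng.uniform(0, 4, n_weeks)        # weekly assessment hours
    w = baselines[m] + true_effect * a + rng.normal(0, 0.5, n_weeks)
    module_ids.extend([m] * n_weeks)
    assessment.extend(a)
    vle_time.extend(w)

module_ids = np.array(module_ids)
x = np.array(assessment)
y = np.array(vle_time)

# Within (fixed-effects) transformation: demeaning each module's series
# sweeps out its baseline, so between-module differences cannot
# confound the estimate of the assessment effect.
x_dm = np.concatenate([x[module_ids == m] - x[module_ids == m].mean()
                       for m in range(n_modules)])
y_dm = np.concatenate([y[module_ids == m] - y[module_ids == m].mean()
                       for m in range(n_modules)])

beta_hat = (x_dm @ y_dm) / (x_dm @ x_dm)
print(f"estimated effect: {beta_hat:.2f} (true value {true_effect})")
```

A naive pooled regression on these data would mix the module baselines into the slope; the within estimator recovers the assessment effect despite them, which is the same logic the paper uses to isolate the contribution of learning design.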