Archive for category Workshops
In early July, I was in Leicester at the Playful Learning conference with other members of the Rumpus research group, running a workshop to develop a typology of fun and learning. We used balloons to gather, group and shape ideas.
8. Framework of fun (90 minutes, outside)
The Rumpus Group
This will be a fun way to identify the elements of fun. Using the outside space, we will draw out people's ideas with a variety of media (including balloons), and develop a shared understanding of what fun is and what contributes to it.
Monday 13 March was the day of the second LAK Failathon, this time held at the LAK17 conference at Simon Fraser University in Vancouver. This year, we took the theme ‘Beyond Failure’ and the workshop led into a paper later in the conference and then to a crowd-sourced paper on how we can work to avoid failure both on individual projects and across the learning analytics community as a whole.
We also took a consciously international approach, and so workshop leaders included Doug Clow and me from Europe, Mike Sharkey from North America, Cecilia Aguerrebere from South America, Kirsty Kitto from Australia and Yong-Sang Cho from Asia.
Clow, Doug; Ferguson, Rebecca; Kitto, Kirsty; Cho, Yong-Sang; Sharkey, Mike and Aguerrebere, Cecilia (2017). Beyond failure: the 2nd LAK Failathon. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 504–505.
If you can’t access the workshop outline behind the paywall, contact me for a copy.
The 2nd LAK Failathon will build on the successful event in 2016 and extend the workshop beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other’s failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other’s mistakes. It was very successful, and there was strong support for running it as an annual event. This workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other’s failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base.
A very busy week in Vancouver at the LAK17 (learning analytics and knowledge) conference kicked off with the all-day doctoral consortium on 14 March (funded by SoLAR and the NSF). I joined Bodong Chen and Ani Aghababyan as an organiser this year and we enjoyed working with the ten talented doctoral students from across the world who gained a place in the consortium.
- Alexander Whitelock-Wainwright: Students’ intentions to use technology in their learning: The effects of internal and external conditions
- Alisa Acosta: The design of learning analytics to support a knowledge community and inquiry approach to secondary science
- Daniele Di Mitri: Digital learning shadow: digital projection, state estimation and cognitive inference for the learning self
- Danielle Hagood: Learning analytics in non-cognitive domains
- Justian Knobbout: Designing a learning analytics capabilities model
- Leif Nelson: The purpose of higher education in the discourse of learning analytics
- Quan Nguyen: Unravelling the dynamics of learning design within and between disciplines in higher education using learning analytics
- Stijn Van Laer: Design guidelines for blended learning environments to support self-regulation: event sequence analysis for investigating learners’ self-regulatory behavior
- Tracie Farrell Frey: Seeking relevance: affordances of learning analytics for self-regulated learning
- Ye Xiong: Write-and-learn: promoting meaningful learning through concept map-based formative feedback on writing assignments
The intention of the doctoral consortium was to support and inspire doctoral students in their ongoing research efforts. The objectives were to:
- Provide a setting for mutual feedback on participants’ current research and guidance on future research directions from a mentor panel
- Create a forum for engaging in dialogue aimed at building capacity in the field with respect to current issues in learning analytics ranging from methods of gathering analytics, interpreting analytics with respect to learning issues, considering ethical issues, relaying the meaning of analytics to impact teaching and learning, etc.
- Develop a supportive, multidisciplinary community of learning analytics scholars
- Foster a spirit of collaborative research across countries, institutions and disciplinary backgrounds
- Enhance participating students’ conference experience by connecting participants to other LAK attendees
On 13 December, I joined a Foresight Workshop on Learning Technologies in Luxembourg. The workshop was designed to help the European Commission to set and define future European strategic research and innovation priorities.
The workshop began with a series of ‘Moonshots’. Individual experts presented ambitious, yet realistic, targets for EU-funded learning technology research and innovation up to 2025. For each of these, we considered: What is the problem? How is it dealt with now? What difference would it make if this problem were addressed successfully?
We went on to merge our individual Moonshots into Constellations and then into Galaxies. We made links between the different ideas, linking them with other international activities and trends, as well as to previous EU-funded work. I was interested to see that many of the experts from across Europe presented ideas associated with blockchain for learning, an innovation that was picked up in our recent Innovating Pedagogy report.
My moonshot focused on a series of problems: access to tertiary education is unequal, most people in Europe do not complete tertiary education and many people in Europe need to develop new skills. Massive open online courses (MOOCs) offer a potential solution, but these new approaches to learning require new approaches to teaching. Teachers need training and support to work effectively in these new environments. They also need proven models of good practice. Improving educator effectiveness on these courses has the potential to increase Europe’s capacity to respond to its priority areas. It also has the potential to open up education for millions by developing and sharing knowledge of how to teach at scale.
I visited the University of Deusto in Bilbao, Spain, to give a keynote at the learning analytics summer institute there (LASI Bilbao 2016) on 28 June 2016. The event brought people together from the Spanish Network of Learning Analytics (SNOLA), which was responsible for organising the event, in conjunction with the international Society for Learning Analytics Research (SoLAR).
What does the future hold for learning analytics? In terms of Europe’s priorities for learning and training, they will need to support relevant and high-quality knowledge, skills and competences developed throughout lifelong learning. More specifically, they should improve the quality and efficiency of education and training, enhance creativity and innovation, and focus on learning outcomes in areas such as employability, active citizenship and well-being. This is a tall order and, in order to achieve it, we need to consider how our work fits into the larger picture. Drawing on the outcomes of two recent European studies, Rebecca will discuss how we can avoid potential pitfalls and develop an action plan that will drive the development of analytics that enhance both learning and teaching.
The series of LACE workshops on Ethics and Privacy in Learning Analytics (EP4LA) keeps expanding.
I worked with María Jesús Rodríguez-Triana on the programme for one of these events, which she ran with Denis Gillet at the 12th Joint European Summer School on Technology Enhanced Learning (JTEL Summer School) in Estonia, on 20 June.
This 90-minute workshop aims to give participants an overview of the ethical and privacy issues in Learning Analytics. Furthermore, the workshop will raise participants’ awareness of how to implement LA solutions as researchers, practitioners or developers. It will consist of three parts:
Part 1 – Introduction: presentation of frameworks and guidelines for Learning Analytics regarding ethics and privacy.
Part 2 – Framework analyses: participants will be grouped to work on a specific framework. The teams will categorise those ethical and privacy issues that the participants are currently addressing in their practice, those that could be covered with low to medium effort, and those that constitute a challenge.
Part 3 – Discussion: an open discussion will follow, exploring the complexity of each framework and looking for potential ways of addressing the issues identified.
Next stop after the LAK conference was Kuala Lumpur in Malaysia. There I took part in an expert workshop on 2–3 May 2016, organised by the Commonwealth of Learning, developing guidelines for the quality assurance and accreditation of massive open online courses.
The purpose of this review is to identify quality measures and to highlight some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs. It draws on the literature on both MOOCs and quality in education more generally in order to provide a framework for thinking about quality and the different variables and questions that must be considered when conceptualising quality in MOOCs. The review adopts a relativist approach, positioning quality as a measure for a specific purpose. The review draws upon Biggs’s (1993) 3P model to explore notions and dimensions of quality in relation to MOOCs — presage, process and product variables — which correspond to an input–environment–output model. The review brings together literature examining how quality should be interpreted and assessed in MOOCs at a more general and theoretical level, as well as empirical research studies that explore how these ideas about quality can be operationalised, including the measures and instruments that can be employed. What emerges from the literature are the complexities involved in interpreting and measuring quality in MOOCs and the importance of both context and perspective to discussions of quality.
Australia: Adam Brimo
Japan: Paul Kawachi
New Zealand: Nina Hood