Halfway into the slightly manic process of reorganising my Scrivener notes for my PhD thesis proposal, I wondered if I wasn’t using it to avoid the actual work. I was painstakingly working through a host of references (some with annotations – mainly from the abstract, I suspect) that I had added early on from my initial proposal and largely just dumped into my broad categories without much further thought. I haven’t since come back to them or considered how relevant or useful they are, or what I plan to do with them.
My larger goal in this exercise was to try to find some kind of structure for my thinking – I’m increasingly aware that the Higher Education ecosystem is intricate and complex and most if not all of the moving parts impact on each other far more than the current literature seems to acknowledge. I’m still not sure what the best way to represent this is, but I’m hoping that creating some order will help me to place the myriad random thoughts and questions that I’ve come up with so far in something more manageable.
Which seems to be a point that I often reach in large projects (none as large as this, admittedly) before losing interest and moving on to something new. As I thought about this, I worried that I was doing this exact thing here.
Fortunately, I wasn’t. I eliminated a number of papers that seemed relevant on the surface but really weren’t, I found a few more that I’d completely forgotten and have now put into the high-priority reading list, and I think that now I have a place for everything and everything in its place. There’s a section for the actual writing (broken up by broad topics), a section for notes (broken up by broad topics which I’d say will get more and more subtopics), a section for quotes and references (with sub-sections for individual papers) and some general miscellaneous pages for ‘other’ stuff. What works best about this for me is the way that it lets me quickly jump around the proposal when something useful needs to be jotted down, and the side-by-side structure of Scrivener lets me easily copy-paste chunks. It looks a bit like this.
The other part of this process that was useful was finding a brief paper that has the same focus as my research, which gave me some assurance that I’m on track with my ideas as well as a chance to check whether I’m missing anything vital. (It turns out that I think they are missing a few things, which is obviously good for me. I’ll post about this one shortly.)
Sometimes posting a research progress update can be like jumping on the scales after a weekend of eating cake – it’s important to do to maintain accountability but you know it’s not going to be pretty. This is one of those times.
As you can tell from my recent posting history, it’s been a while since I read and reflected upon anything. Since my last update, however, I have had a PhD progress review where the panel was satisfied with how I’m going and took me off probation, and I also attended the ePortforum conference in Sydney – two days of talking and learning about what is being done in Higher Ed with ePortfolios.
I also read a chapter of a book my supervisor (hi Peter) wrote about teacher attitudes towards education technology, which got me thinking much more about methodology than I have been to date. There’s a strangeness to reading (and particularly writing) about one’s supervisor’s writing – a lot of different conflicting feels. Am I obliged/expected to fall into line with his ideas and/or particular areas of interest? (I don’t think so – he’s been remarkably chilled about what I’m doing. Offering thoughts and suggestions, of course, but I’ve never felt pressured.) Is it ok if I disagree with something that he’s said in his writing? (Again, I think that if I was able to present a solid argument, it would be fine. That said, I’ve not come across anything yet that hasn’t been eye-opening, as you would hope for from a mentor/supervisor.) If I read too much of his work, does it get weird or obsequious?
On the one (rational) hand, you approach a supervisor because you think that their interests/methods will inform yours and presumably align (or vice versa), so why wouldn’t you? But on the other (emotional) hand, have I had some kind of need to explore the other literature first, to come to some of my own conclusions before being shaped too much by his take on things? (In the same way that a filmmaker on a remake might go back to the initial novel but not watch the first film that came from it.) Even Peter said that I didn’t necessarily need to read this particular book, as it’s from 2002 and attitudes to ed tech have no doubt shifted since then. He suggested instead that I look at who has cited it.
I’m really glad that I did read it though as, as I mentioned, the methodological ideas gave me a lot to think about – largely in getting tutors to describe their grading process almost as a stream of consciousness in real time, which was also recorded so that they could watch the recording and add a layer of reflection later. This may well be a common methodology but it’s not one that I’ve come across in the reading that I’ve done to date. I’ll post something about this chapter soon anyway.
I’ve also been working away on an application to upgrade myself from Associate Fellow of the HEA to Senior Fellow. This requires a lot of reflective writing (around 7000+ words) and has been useful in thinking in greater depth about my own professional practices and ‘learning journey’. (I always feel a little bit hippy using that expression but I haven’t come across a better one). So this application has taken up a decent chunk of my time as well.
I have also – because clearly I have a lot of free time on my hands – been slowly nudging forward the formation of a Special Interest Group through HERDSA (but not solely for HERDSA members, I think) that is focused on Education Advisors (a.k.a. Education Support Staff – academic developers, ed designers, learning technologists etc). We had a great lunchtime conversation (vent?) about some of the issues that we face, which aligned particularly with many of the papers that I have posted about here in the last couple of months. I suspect that one of the trickiest parts will be explaining to teaching academics that this isn’t a group for them. I guess this is one of the things that we’ll need to pin down in the formation of it. It’s far from a new idea – there are a range of city- and state-based parallels in varying states of activity – but having a national (transnational, to include NZ) body isn’t something I’ve seen before. The funny thing is that while this is important to me, I felt like I have already moved on from some of the issues/ideas that came up in the conversation yesterday, in pivoting to researching academic staff and their issues and concerns. But I’m pretty sure I can walk and chew gum at the same time.
Clearly one of the key ingredients in enhancing teaching practice is teacher professional development and a vital element of deriving meaning from this is reflective practice.
It is at this point, however, that we need to be cautious of the evangelisers of reflective practice as a global solution. “Reflecting or Acting? Reflective Practice and Continuing Professional Development in UK Higher Education” by Sue Clegg, Jon Tan and Saeideh Saeidi (2002) takes a methodical look at the use of reflection and notes that the then-current uses of reflective practice in CPD (I’m not sure how much they have evolved since) aren’t suited to all learners and need to be anchored in actions taken to be particularly meaningful.
Reflective practice is valued for acknowledging “the importance of artistry in teaching” (p.3), which seems even more important in 2016 than it was in 2002 with the rise of big data and analytics in education sometimes seeming determined to quantify and KPI-ify every single facet of teaching and learning. (Can you tell that I’m more of a qual than a quant?)
Clegg et al. investigated the use and value of reflective practice amongst academic staff in accredited CPD between 1995 and 1998. In broad terms (spoiler alert), they tied it to four types of practices/behaviours that reflected the learning preferences and teaching circumstances of the teachers. These preferences – either for ‘writerly’ reflection or not – and the circumstances (which affected their ability to act on new teaching knowledge) played a significant part in how valuable reflection was to them.
The ‘action’ part is at the core of the question that Clegg et al are pursuing. They draw on Tomlinson (1999) in assuming that “the relationship between reflection and action is transparent with reflection-on-action leading to improvement and change” (p.4). This idea has been of interest to me recently because I’ve been involved with the HEA fellowship scheme at my university which appears to have a different focus, seemingly sans action. (I’ll discuss this further in future posts as engaging Fellows seems as though it is going to be an important part of my ongoing quest/research)
As for the learning preference side of the equation, one of the simultaneous strengths and failings of the widely followed reflective practice approach is the emphasis on a very ‘writerly’ style of reflection. Here the paper refers to Bleakley (2000), who has “argued for greater attention to the form of writing and a greater self-awareness of literary accomplishments of narrating and confessional.” The authors note, however, that “our data suggested that some practitioners fail to write or only write as a form ex post facto justification for accreditation purposes”. This, based on the feedback from some of the participants who struggled with the writing element of the task, can be linked in part to the disciplinary orientation of the learners (i.e. quant vs qual backgrounds) and in some cases to gender-role perceptions – “the feminine reflective side as opposed to the more active masculine doing side of practice” (p.18).
These key factors allowed the authors to sort participants into four groups, based on their practices.
Immediate action – participants put new ideas into practice directly after the CPD workshops (and before reflection) (more often novice practitioners)
Immediate reflection – participants reflected on their own practices directly after CPD workshops (more often experienced practitioners) – they also found less value in the workshops in terms of new knowledge
Deferred action – some participants were unable to act on knowledge gained in workshops due to organisational/time constraints (this limited their ability to reflect on the impact of new knowledge on their new actions/practices)
Deferred reflection – largely participants that struggled to engage with the reflection activity in its current format. Many only did it for accreditation purposes so saw little benefit in it.
Clegg et al. take pains to emphasise that their research is about starting a conversation about the interrelationship between action and reflection and the need to maintain this link. They don’t draw any other conclusions, but I think that even by simply looking at on-the-ground interaction with reflective practice, they have given us something to think about.
Reading this paper sparked a few random ideas for me:
Perhaps design thinking might offer a way to bridge the gap between the ‘teaching as a craft’ and ‘teaching as an empirical science with hard data’ viewpoints by applying a more deliberate and structured way of thinking about pedagogy and course design.
Are there ways that we can foster writing (and some reflection) as a part of everyday, ongoing CPD for academics? (Without it being seen as a burden? There probably needs to be a goal/outcome/reward that it leads to.)
Decoupling reflection from action – particularly when action comes in the forms of making improvements to practice – gives people less to reflect on and might lead to too much navel gazing.
A large part of the work being done on reflective practice by one of my colleagues is focusing on the impact that it has on teacher self-efficacy. Tying it to professional recognition boosts confidence which is valuable but is there a risk that this can in turn lead to complacency or even over-estimation of one’s competence?
My personal philosophy when it comes to theory and practice is that no single one will ever hold all of the answers for all contexts. I believe that equipping ourselves with a toolbox of theories and practices that can be applied when needed is a more sustainable approach, but I’m not sure how to describe this – one term that I’ve considered is multifocal – does this seem valid?
One concern that I have about this study is the large number of contextual factors that it tries to accommodate. These include: “how participants understood their activity including reflective practice, their motivations for joining the course, how they made sense of their decisions to complete or not complete, and whether they thought of this as a conscious decision” (p.7). On top of this there was the level at which the CPD was being conducted (novice teachers vs supervisors), disciplinary and gender differences, as well as learning preferences. Maybe it’s enough to acknowledge these, but it seems like a lot of variables.
Reflection shared with peers seems more valuable than reflection simply submitted to assessors.
Even when reflective writing is a new, ‘out of character’ approach, it can be seen as valuable, though it can take learners time to ease into it. Supporting some warm-up exercises seems like it would be important in this case.
It’s worth taking a harder look at exactly what forms meaningful reflections might take – is there just one ‘writerly’ way or should we support a broader range of forms of expression?
Audio? Video? Dank memes? “Virtually all the descriptions of keeping a journal or gather materials together suggested that they somehow felt they had not done it properly – qualifying their descriptions in terms of things being just scrappy notes, or jottings, or disorganised files, or annotated e-mail collections. Such descriptions suggest that participants had an ideal-typical method of what reflective practice should look like. While the overt message from both courses was that there was no one format, it appears that despite that, the tacit or underlying messages surrounding the idea of reflective practice is that there is a proper way of writing and that it constitutes a Foucauldian discipline with its own rules” (p.16-17)
Good reflection benefits from a modest mindset: “one sort of ethos of the course is it requires you to, I don’t know, be a bit humble. It requires you to take a step back and say perhaps I’m not doing things right or am I getting things right, and throw some doubt on your mastery…” (p.17)
This is more of a bigger picture question for my broader research – To what extent does the disciplinary background shape the success (or orientation) of uni executives in strategic thinking – qual (humanities) vs quant (STEM)?
It was always my intention that researching in the area that I work in would help me to shape my professional practice (and it is) but I’ve been surprised lately at how much things are flowing in the other direction. I’ve been thinking a lot lately about what is needed to make an educational project successful and how we know that learners have actually benefitted.
This is partially coming from the big picture work that I’m doing with my peers at the university looking at what we’re doing and why, and partially from my own college, which has recently launched a Teaching and Learning Eminence Committee/project to look into what we’re doing with teaching and learning. I wasn’t initially invited onto the committee (it’s all academics), which speaks to some of the ideas that have been emerging in some of my recent posts (the academic/professional divide), as well as the fact that I need to work on raising the profile of my team and understanding of our capacity and activities in the college.
Anyway, while trawling through the tweetstream of the recent (and alas final) OLT – Office of Learning and Teaching – conference at #OLTConf2016, I came across a couple of guides published recently by Ako Aotearoa, the New Zealand National Centre for Tertiary Teaching Excellence, that fit the bill perfectly.
One focusses on running effective projects in teaching and learning in tertiary education. It’s kind of project-managementy, which isn’t always the most exciting area for me, but it offers a comprehensive and particularly thoughtful overview of what we need to do to take an idea (which should always be driven by enhancing learning) through the three key phases identified by Fullan (2007 – as cited in Akelma et al, 2011) in the process of driving educational change: initiation, implementation and institutionalisation. The guide – Creating sustainable change to improve outcomes for tertiary learners – is freely available on the Ako Aotearoa website, which is nice.
I took pages and pages of notes, and my mind wandered off into other thoughts about immediate and longer-term things to do at work and in my research, but the key themes running through the guide were treating change as a process rather than an event, being realistic, working collectively, being honest and communicating well. It breaks each phase down into a number of steps (informed by case studies) and prompts the reader with many pertinent questions to ask of themselves and the project along the way.
The focus of the guide is very much on innovation and change – I’m still thinking about what we do with the practices that are currently working well and how we can integrate the new with the old.
The second guide – A Tertiary Practitioner’s Guide to Collecting Evidence of Learner Benefit – drills down into useful research methodologies for ensuring that our projects and teaching practices are actually serving the learners’ needs. Again, these are informed by helpful case studies and showcase the many places and ways that we can collect data from and about our students throughout the teaching period and beyond.
It did make me wonder whether the research mindset of academics might conventionally be drawn from their discipline. Coming from an organisation with an education and social science orientation, one might expect an emphasis on the qualitative (and there are a lot of surveys suggested – which I wonder about, as I have a feeling that students might be a little over-surveyed already), but the guide actually encourages a mixture of methodologies and makes a number of suggestions for merging data, as well as for deciding how much is enough.
Definitely some great work from our colleagues across the ditch and well worth checking out.
There’s still much to talk about. Technology and what we need it to do, practical solutions both in place and under consideration / on the wishlist, further questions and a few stray ideas that were generated along the way.
Unsurprisingly, technology was a significant part of our conversation about what we can do in the education support/design/tech realm to help shape the future of our institutions. The core ideas that came up included what we are using it for, and how we sell it to, and instill confidence in it among, our clients – teachers, students and the executive.
The ubiquity and variety of educational technologies means that they can be employed in all areas of the teaching and learning experience. It’s not just being able to watch a recording of the lecture you missed or to take a formative online quiz; it’s signing up for a course, finding your way to class, joining a Spanish conversation group, checking for plagiarism, sharing notes, keeping an eye on at-risk students and so much more.
It’s a fine distinction but Ed Tech is bigger than just “teaching and learning” – it’s also about supporting the job of being a teacher or a learner. I pointed out that the recent “What works and why?” report from the OLT here in Australia gives a strong indication that the tools most highly valued by students are the ones that they can use to organise their studies.
Amber Thomas highlighted that “…better pedagogy isn’t the only quality driver. Students expect convenience and flexibility from their courses” and went on to state that “We need to use digital approaches to support extra-curricular opportunities and richer personal tracking. Our “TEL” tools can enable faster feedback loops and personalised notifications”
Even this is just the tip of the iceberg – it’s not just tools for replicating or improving analog practices – the technology that we support and the work we do offers opportunities for new practices. In some ways this links back closely to the other themes that have emerged – how we can shape the culture of the organisation and how we ensure that we are part of the conversation. A shift in pedagogical approaches and philosophies is a much larger thing than determining the best LMS to use. (But at its best, a shift to a new tool can be a great foot in the door to discussing new pedagogical approaches.)
“It is reimagining the pedagogy and understanding the ‘new’ possibilities digital technologies offer to the learning experience where the core issue is” (Caroline Kuhn)
Lesley Gourlay made a compelling argument for us to not throw out the baby with the bathwater when it comes to technology by automatically assuming that tech is good and “analogue” practices are bad. (I’d like to assume that any decent Ed Designer/Tech knows this but it bears repeating and I’m sure we’ve all encountered “thought leaders” with this take on things).
“we can find ourselves collapsing into a form of ‘digital dualism’ which assumes a clear binary between digital and analogue / print-based practices (?)…I would argue there are two problems with this. First, that it suggests educational and social practice can be unproblematically categorised as one or the other of these, where from a sociomaterial perspective I would contend that the material / embodied, the print-based / verbal and the digital are in constant and complex interplay. Secondly, there perhaps is a related risk of falling into a ‘digital = student-centred, inherently better for all purposes’, versus ‘non-digital = retrograde, teacher-centred, indicative of resistance, in need of remediation’.” (Lesley Gourlay)
Another very common theme in the technology realm was the absolute importance of having reliable technology (as well as the right technology.)
“Make technology not failing* a priority. All technology fails sometime, but it fails too often in HE institutions. Cash registers in supermarkets almost never fail, because that would be way too much of a risk.” (Sonja Grussendorf)
When it comes to how technology is selected for the institution, a number of people picked up on the tension between having it selected centrally vs by lecturers.
“Decentralize – allow staff to make their own technology (software and hardware) choices” (Peter Bryant)
Infrastructure is also important in supporting technologies (Alex Chapman)
Personally I think that there must be a happy medium. There are a lot of practical reasons that major tools and systems need to be selected, implemented, managed and supported centrally – integration with other systems, economies of scale, security, user experience, accessibility etc. At the same time, we also have to ensure that we are best meeting the needs of students and academics in a host of different disciplines, and are able to support innovation and agility. (When it comes to the selection of any tool, I think that there still needs to be a process in place to ensure that the tool meets the needs identified – including those of various institutional stakeholders – and can be implemented and supported properly.)
Finally, Andrew Dixon framed his VC elevator pitch in terms of a list of clear goals describing the student experience with technology which I found to be an effective way of crafting a compelling narrative (or set of narratives) for a busy VC. Here are the first few:
They will never lose wifi signal on campus – their wifi will roam seamlessly with them
They will have digital access to lecture notes before the lectures, so that they can annotate them during the lecture.
They will also write down the time at which difficult sub-topics are explained in the lecture so that they can listen again to the captured lecture and compare it with their notes. (Andrew Dixon)
Some practical solutions
Scattered liberally amongst the discussions were descriptions of practical measures that people and institutions are putting in place. I’ll largely let what people said stand on its own – in some cases I’ve added my thoughts in italics afterwards. (Some of the solutions I think were a little more tongue in cheek – part of the fun of the discussion – but I’ll leave it to you to determine which)
Culture / organisation
Our legal team is developing a risk matrix for IT/compliance issues (me)
(We should identify our work) “not just as teaching enhancement but as core digital service delivery” (Amber Thomas)
“we should pitch ‘exposure therapy’ – come up with a whole programme that immerses teaching staff in educational technology, deny them the choice of “I want to do it the old fashioned way” so that they will realise the potential that technologies can have…” (Sonja Grussendorf)
“Lets look at recommendations from all “strategy development” consultations, do a map of the recommendations and see which ones always surface and are never tackled properly.” (Sheila MacNeill)
“Could this vision be something like this: a serendipitous hub of local, participatory, and interdisciplinary teaching and learning, a place of on-going, life-long engagement, where teaching and learning is tailored and curated according to the needs of users, local AND global, actual AND virtual, all underscored by data and analytics?” (Rainer Usselmann)
“…build digital spaces to expand our reach and change the physical set up of our learning spaces to empower use of technology…enable more collaborative activities between disciplines” (Silke Lange)
“we need a centralised unit to support the transition and the evolution and persistence of the digital practice – putting the frontliners into forefront of the decision making. This unit requires champions throughout the institutions so that this is truly a peer-led initiative, and a flow of new blood through secondments. A unit that is actively engaging with practitioners and the strategic level of the university” (Peter Bryant)
In terms of metrics – “shift the focus from measuring contact time to more diverse evaluations of student engagement and student experience” (Silke Lange)
“Is there a metric that measures teaching excellence?… Should it be designed in such a way as to minimise gaming? … should we design metrics that are helpful and allow tools to be developed that support teaching quality enhancement?” (David Kernohan) How do we define or measure teaching excellence?
“the other thing that we need to emphasise about learning analytics is that if it produces actionable insights then the point is to act on the insights” (Amber Thomas) – this needs to be built into the plan for collecting and dealing with the data.
Talking about the NSS (National student survey) – “One approach is to build feel-good factor and explain use of NSS to students. Students need to be supported in order to provide qualitative feedback” (David Kernohan) (I’d suggest that feedback from students can be helpful but it needs to be weighted – I’ve seen FB posts from students discussing spite ratings)
“We should use the same metrics that the NSS will use at a more granular levels at the university to allow a more agile intervention to address any issues and learn from best practices. We need to allow flexibility for people to make changes during the year based on previous NSS” (Peter Bryant)
“Institutional structures need to be agile enough to facilitate action in real time on insights gained from data” (Rainer Usselmann) – in real time? What kind of action? What kind of insights? Seems optimistic
“Institutions need at the very least pockets of innovation /labs / discursive skunk works that have licence to fail, where it is safe to fail” (Rainer Usselmann)
“Teachers need more space to innovate their pedagogy and fail in safety” (Silke Lange)
“Is it unfair (or even unethical) to not give students the best possible learning experience that we can?…even if it was a matter of a control group receiving business-as-usual teaching while a test group got the new-and-improved model, aren’t we underserving the control group?” (me)
“I can share two examples from my own experiences
An institution who wanted to shift all their UG programmes from 3 year to 4 year degrees and to deliver an American style degree experience (UniMelb in the mid 2000s)
An institution who wanted to ensure that all degree programmes delivered employability outcomes and graduate attributes at a teaching, learning and assessment level
So those resulted in:
a) curriculum change
b) teaching practice change
c) assessment change
d) marketing change ” (Peter Bryant)
“One practical option that I’m thinking about is adjusting the types of research that academics can be permitted to do in their career path to include research into their own teaching practices. Action research.” (Me) I flagged this with our Associate Dean Education yesterday and was very happy to hear that she is currently working on a paper for an education focussed journal in her discipline and sees great value in supporting this activity in the college.
“I think policy is but one of the pillars that can reinforce organisational behaviour” (Peter Bryant) – yes, part of a carrot/stick approach, and sometimes we do need the stick. Peter also mentions budgets and strategies; I wonder whether these change behaviour so much as support change already embarked upon.
“let’s court rich people and get some endowments. We can name the service accordingly: “kingmoneybags.universityhandle.ac.uk”. We do it with buildings, why not with services?” (Sonja Grussendorf) – selling naming rights for TELT systems just like buildings – intriguing
We need solid processes for evaluating and implementing Ed Tech and new practices (me)
“Could creating more ‘tailored’ learning experiences, which better fit the specific needs and learning styles of each individual learner, be part of the new pedagogic paradigm?” (Rainer Usselmann) (big question, though, around how this might be supported in terms of workload)
“At Coventry, we may be piloting designing your own degree” (Sylvester Arnab)
“The challenge comes in designing the modules so as to minimise prerequisites, or make them explicit in certain recommended pathways” (Christopher Fryer)
I went on to suggest that digital badges and tools such as MyCourseMap might help to support this model. Sylvester noted that he is aware that “these learning experiences, paths, patterns, plans have to be validated somehow”. Learner convenience over pedagogy – or is it part of pedagogy, in line with adult learning principles of self-efficacy and motivation? In a design-your-own-degree course, how do we ensure that learners don’t just choose the easiest subjects – how do we avoid the trap of having learners think they know enough to choose wisely?
“digital might be able to help with time-shifting slots to increase flexibility with more distributed collaboration, flipped teaching, online assessment” (George Roberts)
“At UCL we are in the midst of an institution-wide pedagogic redesign through the Connected Curriculum. This is our framework for research-based education which will see every student engaging in research and enquiry from the very start of their programme until they graduate (and beyond). More at http://www.ucl.ac.uk/teaching-learning/connected-curriculum
The connected bit involves students making connections with each other, with researchers, beyond modules and programmes, across years of study, across different disciplines, with alumni, employers, and showcase their work to the wider world…
There is strong top-down support, but also a middle-out approach with faculties having CC fellows on part time secondments to plan how to introduce and embed the CC in their discipline.
From a TEL perspective we need to provide a digital infrastructure to support all of this connectivity – big project just getting going. Requirements gathering has been challenging… And we’re also running workshops to help programme and module teams to design curricula that support research-based and connected learning.” (Fiona Strawbridge) – liking this a lot, embedding practice. What relationship do these fellows have with lecturers?
“I am imagining that my research, personal learning environment would fit perfect with this approach as I am thinking the PLE as a toolbox to do research. There is also a potential there to engage student in open practice, etc.” Caroline Kuhn
“There may be a “metapedagogy” around the use of the VLE as a proxy for knowledge management systems in some broad fields of employment: consultancy, financial services, engineering…” (George Roberts) (which I’d tie to employability)
“We need to challenge the traditional model of teaching, namely didactic delivery of knowledge. The ways in which our learning spaces are currently designed – neat rows, whiteboard at front – affords specific behaviours in staff and students. At the moment virtual learning spaces replicate existing practices, rather than enabling a transformative learning experience. The way forward is to encourage a curricula founded on enquiry-based learning that utilise the digital space as professional practitioners would be expected to” (Silke Lange) – maybe, but none of this describes where or how lecturers learn these new teaching skills. Do we need to figure out an evolutionary timeline to get to this place, where every year or semester, lecturers have to take one further step, add one new practice?
“Do not impose a pedagogy. Get rid of the curricula. Empower students to explore and to interact with one another. The role of the teacher is as expert, navigator, orienteer, editor, curator and contextualisor of the subject. Use heuristic, problem-based learning that is open and collaborative. Teach students why they need to learn” (Christopher Fryer)
This is but a cherry-picked selection of the ideas and actions that people raised in this hack but I think it gives a sense of some of the common themes that emerged and of the passion that people feel for our work in supporting innovation and good practices in our institutions. I jotted down a number of stray ideas for further action in my own workplace as well as broader areas to investigate in the pursuit of my own research.
As always, the biggest question for me is that of how we move the ideas from the screen into practice.
How are we defining pedagogical improvements – is it just strictly about teaching and learning principles (i.e. cognition, transfer etc) or is it broader – is the act of being a learner/teacher a part of this (and thus the “job” of being these people which includes a broader suite of tools) (me)
What if we can show how learning design/UX principles lead to better written papers by academics? – more value to them (secondary benefits) (me)
“how much extra resource is required to make really good use of technology, and where do we expect that resource to come from?” (Andrew Dixon)
Where will I put external factors like the TEF / NSS into my research? Is it still part of the organisation/institution? Because there are factors outside the institution like this that need to be considered – govt initiatives / laws / ???
Are MOOCs for recruitment? Marketing? (MOOCeting?)
“How do we demonstrate what we do will position the organisation more effectively? How do we make sure we stay in the conversation and not be relegated to simply providing services aligned with other people’s strategies” (arguably the latter is part of our job)
“How do we embed technology and innovative pedagogical practices within the strategic plans and processes at our institutions?” (Peter Bryant)
Psychology of academia and relationships between academic and professional staff. (Executive tends to come from academia)
Leadership and getting things done / implementing change, organisational change
How is organisational (particularly university) culture defined, formed and shaped?
Some ideas this generated for me:
Instead of tech tool based workshops – or in addition at least – perhaps some learning theme based seminars/debates (with mini-presentations). Assessment / Deeper learning / Activities / Reflection
Innovation – can be an off-putting / scary term for academics with little faith in their own skills but it’s the buzzword of the day for leadership. How can we address this conflict? How can we even define innovation within the college?
What if we bring academics into a teaching and learning / Ed tech/design support team?
Telling the story of what we need by describing what it looks like and how students/academics use it in scenario / case study format offers a more engaging narrative
What is the role of professional bodies (E.g. unions like the NTEU) in these discussions?
Are well-off, “prestigious” universities the best places to try to innovate? Is there less of a driving urge, no pressing threat to survival? Perhaps this isn’t the best way to frame it – a better question to ask might be: if we’re so great, what should other universities be learning from us to improve their own practices? (And then, would we want to share that knowledge with our competitors?)
“I was thinking about the power that could lie behind a social bookmarking tool when doing a dissertation, not only to be able to store and clasify a resource but also to share it with a group of likeminded researcher and also to see what other have found about the same topic.” (Caroline Kuhn) – kind of like sharing annotated bibliographies?
Bigger push for constructive alignment
I need to talk more about teaching and learning concepts in the college to be seen as the person that knows about it
I’d really like to thank the organisers of the Digital is not the future Hack for their efforts in bringing this all together and all of the people that participated and shared so many wonderful and varied perspectives and ideas. Conversation is still happening over there from what I can see and it’s well worth taking a look.
I’m on my weekly bus trip to Sydney – between 3.5 and 4 hours – to take a workshop on Thesis Proposal Writing (and also to get to know my scholarly colleagues) so it seems like a good time to do some reading and jot down some ideas. (The super chatty backpackers of last week are gone and the bus is basically a big moving quiet library – with wifi, which is great in itself)
So I’ve diligently downloaded some of the recommended readings – in this case
– and I start reading. Very quickly I realise that while it is an interesting enough chapter, focussing on the need for bigger picture research into the social and political contexts that surround the success or otherwise of education reform in “urban” American schools, it’s pretty well irrelevant to my own research.
This at least leads me to a few thoughts and ideas for TELT practices.
When teachers provide optional readings, it would be great if there was an option to
tag them (ideally by both the teacher and student)
support student recommendations/ratings
directly include in-line options for commenting
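A feature set like this could be prototyped with a very simple data model before anyone touches the LMS. Here's a rough sketch in Python – all names are mine and purely hypothetical, no real LMS API is assumed:

```python
from dataclasses import dataclass, field

@dataclass
class Reading:
    """An optional reading with crowd-sourced metadata (hypothetical model)."""
    title: str
    abstract: str = ""                            # short description from the teacher
    tags: set = field(default_factory=set)        # added by both teacher and students
    ratings: list = field(default_factory=list)   # student ratings, 1-5 stars
    comments: list = field(default_factory=list)  # in-line style comments

    def add_rating(self, stars: int) -> None:
        if not 1 <= stars <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.append(stars)

    def average_rating(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

# Example usage
r = Reading("Urban school reform", abstract="Context of US education reform")
r.tags.update({"policy", "optional"})
r.add_rating(4)
r.add_rating(5)
avg = r.average_rating()  # 4.5
```

Nothing clever here – the point is just that tags, ratings and comments are cheap pieces of metadata to attach to a reading, so the barrier is institutional rather than technical.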
It would also be valuable if teachers (while I’m focussing on Higher Ed, I think I’ll go with the term teachers instead of lecturers/academics for now) provided a short abstract or even just a basic description.
This got me thinking further about the informal student recommendation/rating systems that are currently in use and what we need to learn from them. Students at my university, the ANU – I guess I need to add a disclaimer on this blog about all opinions etc being my own and I don’t speak for the ANU – have created a lively Facebook space where they share information and opinions (and cat/possum pictures). These discussions often include questions about which are good (easy) courses or what lecturer x is like. I suspect that the nature of these communities – particularly the student ownership – makes officially sanctioned groups/pages less appealing, so it isn’t necessarily a matter of aping these practices, rather looking for opportunities to learn from them in our TELT practices.
My own supervisor has written about the student experience of TELT practices – I’ll be curious to see whether this question is addressed. (Reading that book is high on my list, I’m just trying to get my head around what it means to be a PhD student and researcher currently so this is the leaning of my reading to date)
The chapter does finish with a quote that I did find relevant though:
Most educational research seeks to provide guidance into how to alter existing policies or practices deemed problematic, but the extent to which research findings effect change is small. The impotence of most research to alter established policy and practice is well recognized
So even when it doesn’t appear that a reading is going to be of value, I guess it can still trigger other ideas and offer more universal thoughts.
Post Script: Just looking at the citation above, it’s clear that I need to get a better grasp of how to use Zotero in the browser. Any and all advice most welcome.
It feels like I’ve been banging on about this subject forever now but with assessments now finalised, it seems like a good time for a final wrap up.
In broad terms, I was a little disappointed with my students. It might have been a bad time of year to run this subject, with its demanding workload, but the majority of them seem to have only put in the absolute barest effort needed to pass. Assessment instructions which I thought were pretty clear weren’t followed and most of the reflections lacked any great insight. I had to ask many of them to rework and resubmit their assessments just to meet the minimum requirements.
What this does make me ask is whether this is the fault of my students or me.
As I haven’t taught formal classes for more than a decade, there are a lot of things that I haven’t had to deal with in teaching an ongoing subject with rigorous formal assessment. I also have a tendency at times to over-complicate things because it just seems like it makes them better. This probably also extends to my communication with my students and my expectations of them.
Fortunately, I am still keen to try this again.
Even during the marking process, as I had to walk away from the computer and swear at the walls, I was constantly reshaping the course structure, the assessments and the class activities in my mind to help avoid some of the issues that were arising. The fact that a handful of the “good” students were able to understand and follow my instructions suggests that I’m on the right track at least and am not entirely to blame but the fact that more than a few got things quite wrong does tell me that there is more work to be done.
I need to make it clearer that when students are creating draft learning resources, they actually need to be resources – things, objects – rather than broad and loose activity plans for a class. I need to explain clearly that the final learning resources should be the same as the draft learning resources but improved based on testing and feedback. To be honest, these things seemed so self-evident to me that I couldn’t conceive of anyone not getting it but there we are.
I tried to put into practice a number of ideas that I’ve encountered in the education design community about getting students more involved in designing parts of their own assessments but this really just confused more people than it helped. (Which was a shame as I do believe that it is a valid and valuable approach)
I tried to give my learners freedom to follow their particular learning needs and interests but for the most part this ended up just giving them the opportunity to follow the path of least resistance and allowed for some fairly lazy work. I also should’ve factored into my thinking that the first week of a class is often going to be plagued by technical (logins not working) and administrative hassles, and made allowances for this by not expecting too much work to be achieved in the first week. (That said, we have a strong need to demonstrate engagement in class activities to receive funding for students that later drop out and I was certainly able to prove that)
I think next time around there will need to be a little less freedom, a bit more structure and a lot more clarity and simplicity.
On the whole I am happy that I have managed to get these teachers doing things they haven’t done before and I think they have developed useful skills and knowledge. I’d just like to do more.
I’ve been a bit caught up preparing for this course and consequently this post has been sitting in the draft section for a while now. I ran the first class last Friday (17/10) and it seems like a good idea to share some reflections.
I’m going to leave the pre-class post up as an interesting contrast.
(Before running the class)
As I continue to work on the Design and Develop Learning Resources and Design and Develop eLearning Resources subject (can anyone explain why an eLearning resource should not just be folded into an expanded definition of Learning resource?), I am now at the point where I need to work out what we will do each week.
Previous work on this has led to the development – well, adaptation really – of an assessment structure that should hopefully work well. I’m trying to incorporate as much assessment into in-class activities as possible and also get the learners to take ownership of some of their assessment by having them design the assessment criteria (while still ensuring that all the necessary assessment items are addressed). This also lets us get a flying start on the process of learning about designing and developing resources by working together on one in class. I’m thinking that using a TPACK (Technology, Pedagogy and Content Knowledge) framework to evaluate learning resources seems like a solid base at this point.
The course as it has been delivered previously seems like a very rich opportunity for our teachers to learn about using our LMS (Moodle – called eLearn here) but the more I look at the elements of competency, the more I have to wonder how relevant some of the material really is. Refocusing the course on designing and developing learning resources will have to be a priority. Topics on designing assessments and forum activities and using our learning object repository are undoubtedly valuable but not relevant in this specific instance.
(After the class)
One of the things about having a more theoretical approach to teaching is that it can be very easy to get excited about trying a load of new things and using a lot of ed tech (Moodle to be precise) without really thinking through the limitations of the class.
I spend a lot of time researching approaches to teaching with technology and providing 1-to-1 support for teachers at their desks. I also run semi-regular workshops for small groups of teachers about using specific tools. What I haven’t done is taught a full subject in a proper class setting over a number of weeks – well not in the last ten years anyway.
The first week is always going to be a little bumpy – learners turning up to class who haven’t enrolled yet (or properly) and thus have no access to our eLearning platform. The other thing I sometimes forget – but really shouldn’t – is that few teachers have the same level of skill, enthusiasm or experience in using our LMS as I do. So designing the lesson for Week 1 as primarily a series of sequential activities in Moodle is probably not the ideal approach. Actually, there’s no probably about that.
Furthermore, getting learners to use new online tools that seem perfectly straightforward (Padlet) can and will take much longer than anticipated.
On top of this, I decided that it would be fun to try to gamify the course. Not hugely, but using a 12-sided die to randomise the process of calling on learners to answer questions, and making use of the activity restriction function in Moodle (you can’t see one activity until you complete the previous one), really does complicate an already messy session unnecessarily.
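For what it's worth, the die-roll could just as easily live in a few lines of code as on the desk. A minimal sketch (in Python; the names are mine and there's nothing Moodle-specific here) that picks learners at random without repeats until everyone has had a turn:

```python
import random

def pick_learner(learners, asked=None):
    """Randomly pick a learner who hasn't been asked yet this round.

    Once everyone has had a turn, the round resets - mimicking a
    d12 roll but without calling on the same person twice."""
    asked = asked if asked is not None else set()
    remaining = [l for l in learners if l not in asked]
    if not remaining:          # everyone has answered: start a new round
        asked.clear()
        remaining = list(learners)
    choice = random.choice(remaining)
    asked.add(choice)
    return choice

# Example usage: a class of 12, like the 12-sided die
class_list = [f"learner-{n}" for n in range(1, 13)]
asked = set()
first = pick_learner(class_list, asked)
```

The no-repeats behaviour is actually an improvement over the physical die, which happily rolls the same number twice in a row.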
Something else that I’d decided (based on sound pedagogical principles) was that getting the students to create a resource that can be used to identify criteria in their assessment would be a useful way to engage them with the content and get them to think more meaningfully about what is important in designing and developing learning resources. On reflection, I guess creating a resource that can be used to measure the quality of other created resources gets a little meta and might be overly complicated. I should’ve also considered that these teachers would be far more interested in developing workable resources for their own students and not for themselves and their classmates.
All in all, I think I tried to do too much, too cleverly and expected far more of the students than I should’ve. I should’ve made more allowances for lower levels of e-learning and digital literacy and factored in the necessary messiness of getting everyone started.
So now I need to simplify and streamline this course. Several of the activities were successful and we did have a reasonably meaningful and deep discussion about what is important to consider in the process of designing learning resources, so I don’t consider the class to be a total wash. We also were able to identify specific learning resources that the students are interested in learning about – several of which (marking rubrics) were nowhere on my list of things to cover in this course.
Today is all about pinning down the most appropriate types of assessments for this subject. Yesterday I think I got a little caught up in reviewing the principles of good assessment – which was valuable but it might also be better applied to reviewing and refining the ideas that I come up with.
For what it’s worth, these are the notes that I jotted down yesterday that I want to bear in mind with these assessments. DDLR Assessment ideas
Looking over the four versions of this subject that my colleague J has run in the last 2 years has been particularly enlightening – even if I’m not entirely clear on some of the directions taken. The course design changed quite substantially between the second and third iterations – from a heavily class-based activity and assessment focus to more of a project based structure. (For convenience I’ll refer to the subjects as DDLR 1, 2, 3 and 4)
DDLR 1 and 2 provide an incredibly rich resource for learning to use eLearn (our Moodle installation) and each week is heavily structured and scaffolded to guide learners through the process of developing their online courses. The various elements of the units of competency are tightly mapped to corresponding activities and assessment tasks – more so in DDLR 2. (Image from the DDLR subject guide)
I have to wonder however whether the course provides too much extra information – given the relatively narrow focus on designing and developing learning resources. Getting teachers (the learner cohort for this subject) to learn about creating quizzes and assignments in Moodle is certainly valuable but are these truly learning resources? This may well be one of the points where my approach to this subject diverges.
The shift in approach in DDLR 3 and DDLR 4 is dramatic. (As far as a diploma level course about designing learning resources might be considered dramatic, at least.) The assessments link far more closely to the units of competency and all save the first one are due at the end of the subject. They are far more formally structured – template based analysis of the target audience/learners, design documents, prototypes and finished learning resources, as well as a reflective journal.
It does concern me slightly that this subject has a markedly lower rate of assessment submission/completion than the two preceding ones. That said, this subject is often taken by teachers more interested in the content than in completing the units of competency and that may just have been the nature of this particular cohort.
This new assessment approach also seems far more manageable from a teaching/admin perspective than the previous ones, which required constant grading and checking.
My feeling is that this is a more sustainable approach but I will still look for ways to streamline the amount of work that is required to be submitted.
The next step was to map the various elements of competency to assessment items. The elements for both units of competency are written differently enough to need to be considered separately (unfortunately) but they both still broadly sit within the ADDIE (Analyse, Design, Develop, Implement, Evaluate) framework. ADDIE seems like a useful way to structure both the course and the assessments so I have mapped the elements to this. I have also highlighted particular elements that are more indicative of outputs that might be assessed. Working through the analysis process will be quite dry (and could potentially come across as slightly patronising) so finding an engaging approach to this will be important.
(I’m also quite keen to bring digital badges into this process somehow, though that’s a lower priority at the moment)
Finally, I had a few ideas come to me as I worked through this process today that I might just add without further comment.
DDLR / DDeLR ideas
Get the class to design and develop a (print-based?) checklist / questionnaire resource that might be used to address DDLR 1 and DDeLR 1 UoCs. Get someone else in the class to use it to complete their Analysis phase.
Can I provide a range of options for the forms the assessment/resource pieces might take?
Try to develop a comprehensive checklist that teachers can use on the resources that they produce to raise the quality overall of resources at CIT. (Again, this could be a student led tool – the benefit of this is that it makes them think much more about what a good resource requires – does this meet any UoCs??)
Convert the print-based Analysis document into a web resource – book tool or checklist maybe? Also possibly fix the print-based one first – from a deliberately badly designed faulty version. (Lets me cover some readability / usability concepts early)
How much of this subject is leading the learners by the hand? How much is about teaching them how to use eLearn tools?
Could one of the learning resources be about developing something that teaches people how to use a particular eLearn tool???
Need to identify what kinds of resources teachers can make. Good brainstorm activity in week 1.
Think about the difference between creating a learning resource and finding one and adding it to your course. (Still important but tied to the UoC?)
If I give teachers the option to use previously developed resources (authenticity issues??), they should still provide some kind of explanatory document AND/OR edit the resource and discuss what changes they made and why.
Need to consider the relative strengths and weaknesses of the various types of tools.
In-class feedback of learning resources to better support the evaluation and implementation based elements of competency.
One activity (possible assessment) could be for learners to gather information needed to do an analysis from a partner in the group. (and vice versa) Might lead to a more critical examination of what information is being sought. Learner might even provide suggestions for design/development?