Is academia a workplace like any other? Going by the attitudes of academic staff towards organisational policies and initiatives displayed in this paper, it’s hard to believe so. As a professional staff member in a H.E. institution, it’s kind of fascinating to see ignoring policy and procedure treated as a norm that management needs to work harder to mitigate – ideally by offering staff greater incentives to comply. Maybe we also see it in the higher levels of the entertainment industry, where top stars are feted to keep the show running. If politics is showbiz for ugly people, is academia showbiz for clever people?
Brew, Boud et al explore these attitudes through the lens of Archer’s (2007) modes of reflexivity, trying to better understand how mid-career academics’ preferences for reflecting on and responding to the world shape the way they respond to policies and initiatives in their institutions. This is an interesting angle to take, particularly as they are able to use it to formulate some potential actions that management can take when formulating these policies to get greater buy-in. The authors interviewed a diverse set of 27 mid-career academics (5–10 years’ experience) in research-intensive universities in the UK and Australia and categorised their responses to policies/initiatives as aligning with one of the following four modes of reflexivity:
Communicative reflexivity: exhibited in people whose internal conversations require completion and confirmation by others before resulting in courses of action
Autonomous reflexivity: exhibited in those who sustain self-contained internal conversations, leading directly to action
Meta-reflexivity: characterised by internal conversations critical of one’s own internal conversations and on the look-out for difference in the social world around them
Fractured reflexivity: internal conversations intensify distress and disorientation rather than leading to purposeful courses of action (p.3)
A question that concerned me throughout, however – and it was acknowledged at the end by the authors – was whether they identified these people as having one of these orientations before seeing if their attitudes or actions matched them. They did not – instead they mapped the individuals to these modes based on their attitudes and actions, and they accept that this is a relatively subjective approach to have taken. In the case of several participants, they even found that different things said in the course of a single interview aligned with most or all of the four modes. As a series of signposts, however, these modes generally appear to have stood up to scrutiny and reasonably reflect the range of responses taken by the academics.
Some choice examples, including some transcripts:
A change that affected Shaun was degree accreditation by a professional body. This was deemed necessary to ensure continued student applications. His courses did not address the competencies needed in the degree. The consequence of this was that his teaching was taken away… Shaun describes this as a critical incident in his career:
[It was a] slap in the face, because an external accrediting body didn’t think my knowledge area was necessary to produce this… degree, as opposed to a university standing up and going, well no the tail doesn’t wag the dog, this is what we think is important to become a university graduate and that should inform what becomes a practitioner (Shaun, Aus, HS, SL, M, L.344-352) (P.5-6)
William refers to the ‘red tape’ that surrounds teaching, describing initiatives requiring writing learning outcomes and conforming to graduate outcome statements as ‘a fashion, a fad’ (L.257)
And Shaun again:
there are some faculty research priorities… which were suggested as being pillars that we had to try and perform under. I couldn’t tell you what they are, I haven’t paid attention to them because I remember looking at them and going, my area doesn’t fit under them. (p.8)
Now of course I’ve taken the more dramatic examples, but there are many more that broadly paint a picture of the academics in the study taking a fairly self-centric viewpoint, with few giving much thought to bigger-picture issues and needs in the institution. This isn’t to say that there aren’t also many instances of mystifying and seemingly counterproductive policies and procedures being put into place, and the authors suggest that some academics would engage better if these were explained/justified more effectively.
Sensitivity to the ways in which those demonstrating communicative reflexivity work to maintain the status quo and the difficulties they appear to have in responding to change would suggest that attention needs to be paid to providing academics with thorough rationales for policy changes and that opportunities for these to be debated need to be provided. How such policies fit in with and/or enhance existing practice need careful consideration if they are to be implemented successfully. (p.11)
These people, and those who engage in meta-reflexivity – able and willing to question their own internal conversations – appear to be the easiest to work with in this space.
the people whose mode of reflexivity is meta-reflexivity could be the most helpful in policy implementation as their focus is likely to be on the smooth and equitable functioning of the university community as a whole. Harnessing the critical capacities of such academics and their concern for their fellow workers can be a useful asset for sensitive managers concerned to implement new initiatives (p.11)
When dealing with the autonomous reflexives, those people who – to paraphrase – pretty much just do whatever they feel is right – things get harder. (There is certainly never any question of entertaining the prospect that this behaviour is flawed)
for academics demonstrating autonomous reflexivity, teaching and learning policies are likely to pose the greatest challenges particularly if they are seen to take time away from research. For successful implementation, such people are likely to need incentives in terms of furthering their careers. (p.11)
The authors appear to largely give up on working with the final category, the fractured reflexives – those who struggle to deal with change at all.
Academics whose mode of fractured reflexivity makes them unable to move forward may need professional counselling (p.11)
As a professional staff member – I would’ve said non-academic but have a particular dislike of defining things by what they are not – these descriptions do all ring true. Something that I’ve been keenly aware of since I started this research (and long before, really) is that the question of culture in academia is a massive factor in the success or failure of innovation and change. In some ways this hangs on the question of whether academia is just another job. I’d be surprised to find anyone inside who would agree with that idea, and maybe they’re right, but maybe we also need to find a middle ground which recognises that complete autonomy and/or academic freedom simply isn’t a realistic expectation in the modern age – unless, perhaps, you’re working for and by yourself.
This chapter seemed to take forever to work through, possibly because a bit of life stuff came up in the interim but it’s also at the more complex end of the discussion. It concludes the overall examination of the dynamics of social practice and in some ways felt like a boss level that I’d been working up to. Most of the chapter made sense but I must confess that there is a page about cross-referencing practices as entities that I only wrote “wut?” on in my notes. Maybe it’ll make more sense down the road.
The good news is that there was more than enough meaningful content in this chapter to illuminate my own exploration of practices in my research, and it sparked a few new stray ideas for future directions. There’s a decent summary near the end of the chapter that I’ll start with and then expand upon. (The authors do great work with their summaries, bless them.)
In this chapter we have built on the idea that if practices are to endure, the elements of which they are made need to be linked together consistently and recurrently over time (circuit 1). How this works out is, in turn, limited and shaped by the intended and unintended consequences of previous and co-existing configurations (circuit 2). Our third step has been to suggest that persistence and change depend upon feedback between one instance or moment of enactment and another, and on patterns of mutual influence between co-existing practices (circuit 3). It is through these three intersecting circuits that practices-as-entities come into being and through these that they are transformed. (p.93)
So let’s unpack this a little. There are a number of reciprocal relationships and cycles in the lives of practices. The authors discuss these in two main ways – through the impact of monitoring/feedback on practices (both as individual performances and larger entities) and through the cross-referencing of different practices (again, as performances and entities).
Monitoring practices as performances
A performance of a practice generates data that can be monitored. It might be monitored by the practitioner (as part of the practice) and it might also be monitored by an external actor assessing the performance or its results/outputs (in an education/training context, a regulatory one, or something else). This monitoring then informs feedback which improves/modifies that performance and/or the next one(s), and so the cycle continues. Potentially, in every new performance, the history of past performances can help refine the practice over time.
This isn’t all that evolves the practice; materials, competences and meanings play their part too, but it is a significant factor.
Monitoring, whether instant or delayed, provides practitioners with feedback on the outcomes and qualities of past performance. To the extent that this feeds forward into what they do next it is significant for the persistence, transformation and decay of the practices concerned… self-monitoring or monitoring by others is part of, and not somehow outside, the enactment of a practice (what are the minimum conditions of the practice?) is, in a sense, integral to the performance. Amongst other things, this means that the instruments of recording (the body, the score sheet, the trainee’s CV) have a constitutive and not an innocent role. (p.83-4)
So far, so good. This also makes me think that many practices are made up of elements or units of practice – let’s call them steps for simplicity. The act of monitoring is just another step. (This does take us into the question of whether it is a practice or a complex/bundle of practices – like driving is made up of steering, accelerating, braking, signalling etc – but nobody says they’re going out accelerating)
Monitoring practices as entities
To look at a practice as an entity is to look much more at the bigger picture of the practice.
the changing contours of practices-as-entities are shaped by the sum total of what practitioners do, by the variously faithful ways in which performances are enacted over time and by the scale and commitment of the cohorts involved. We also noticed that practices-as-entities develop as streams of consistently faithful and innovative performances intersect. This makes sense, but how are the transformative effects of such encounters played out? More specifically, how are definitions and understandings of what it is to do a practice mediated and shared, and how do such representations change over time (p.84)
An interesting side-note when considering the evolution of practice is the contribution of business. This example comes from a discussion of snowboarding:
As the number of snowboarders rose, established commercial interests latched on to the opportunities this presented for product development and profit (p.85)
This ties back to the influence of material elements (new designs and products in this case) on shaping a practice.
Technologies are themselves important in stabilising and transforming the contours of a practice. In producing snowboards of different length, weight, width and form, the industry caters to – and in a sense produces – the increasingly diverse needs of a different types of user… Developments of this kind contribute to the ongoing setting and re-setting of conventions and standards. (p.85)
This in turn brings us back to one of the other key roles played by monitoring (and feedback) in terms of practices as entities: describing and defining them. The language that is used to describe a practice and its component parts, and to define what makes good practice, is of vital importance in determining what a practice is and what it becomes.
…if we are to understand what snowboarding ‘is’ at any one moment, and if we are to figure out how the image and the substance of the sport evolves, we need to identify the means by which different versions of the practice-as-entity relate to each other over time. Methods of naming and recording constitute one form of connective tissue. In naming tricks like the ollie, the nollie, the rippey flip and the chicken salad, snowboarders recognise and temporarily stabilize specific moves. Such descriptions map onto templates of performance- to an idea of what it is to do an ollie, and what it means to do one well… in valuing certain skills and qualities above others, they define the present state of play and the direction in which techniques and technologies evolve. (p.85)
The final piece of the puzzle when it comes to monitoring – which ties back to our webinar nicely once more – is mediation.
Describing and materializing represent two modes of monitoring in the sense that they capture and to some extent formalize aspects of performance in terms of which subsequent enactments are defined and differentiated. A third mode lies in processes of mediation which also constitute channels of circulation. Within some snowboarding subcultures, making and sharing videos has become part of the experience. These films, along with magazines, websites and exhibitions, provide tangible records of individual performance and collectively reflect changing meanings of the sport within and between its various forms. Put simply, they allow actual and potential practitioners to ‘keep up’ with what is happening at the practice’s leading edge(s) (p.86)
I find the fact that monitoring/documenting and sharing the practice is considered an important part of practice quite interesting. I’ve tried to launch projects to support this in teaching, but management levels have not seen value in them. (I’ll just have to persevere and keep making the argument.)
There’s another nice description of the role of standards:
…standards, in the form of rules, descriptions, materials and representations, constitute templates and benchmarks in terms of which present performances are evaluated and in relation to which future variants develop (p.86)
The discussion of the role of feedback notes that positive feedback can be self-perpetuating, in “what Latour and Woolgar refer to as ‘cycles of credibility’ (1986). Their study of laboratory life showed how the currencies of scientific research – citations, reputation, research funding – fuelled each other. In the situations they describe, research funding led to research papers that enhanced reputations in ways which made it easier to get more research funding and so on” (p.86)
Ultimately, feedback helps to sustain practices (as entities) by keeping practitioners motivated.
At a very basic level, it’s good to know you are doing well. Even the most casual forms of monitoring reveal (and in a sense constitute) levels of performance. In this role, signs of progress are often important in encouraging further effort and investment of time and energy (Sudnow, 1993). The details of how performances are evaluated (when how often, by whom) consequently structure the careers of individual practitioners and the career path that the practice itself affords. This internal structure is itself of relevance for the number of practitioners involved and the extent of their commitment (p.86)
The act of participating in a performance of a practice means that it has been prioritised over other practices – the time spent on this performance is not available to the others.
…some households deliberately rush parts of the day in order to create unhurried periods of ‘quality’ time elsewhere in their schedule. In effect, short-cuts and compromises in the performance of certain practices are accepted because they allow for the ‘proper’ enactment of others (p.87)
Shove et al examine the importance of time as a tool – a coordinating agent that helps in this process. In a nutshell, it is a vital element of every practice and shapes the interactions between practices (and also practitioners). They move on to explore the change from static ‘clock-time’ to a more flexible ‘mobile-time’. Their argument is essentially that our adoption of mobile communication technologies (i.e. smartphones) is giving us a more fluid relationship with time because we can now call people on the fly to tell them that we are running late.
However some commentators are interested in the ways in which mobile messaging (texting, phoning, mobile emailing) influences synchronous cross-referencing between practices (p.88)
I’ll accept that mobile communication is changing the way we live, but I’m not convinced that it is having the impact on practices that the authors suggest. Letting someone know that you’re running late doesn’t change what is to be done, it just pushes it back. Letting someone know of a change of venue may have more impact, in that it could allow a practice that might not otherwise have occurred to do so, but this doesn’t strike me as something that happens regularly enough to change our concepts of time or practice.
The authors express this somewhat more eloquently than I:
But is this of further significance for the ways in which practices shape each other? For example, does the possibility of instant adjustment increase the range of practices in which many people are involved? Does real-time coordination generate more or less leeway in the timing and sequencing of what people do? Are patterns of inattention and disengagement changing as practices are dropped or cut short in response to news that something else is going on? Equally, are novel practices developing as a result? In thinking about these questions it is important to consider how technologies of ‘micro’-coordination relate to those that operate on a global scale (p.89)
Another significant idea that this generated for me was that the things that we do shape our world, because we design and modify our world to suit the things that we do. This in turn may change our ability to do those things, and we enter a cycle where practice shapes environment shapes practice, and so on.
Or as the authors put it:
we have shown that moments of performance reproduce and reflect qualities of spacing and timing, some proximate, some remote. It is in this sense that individual practices ‘make’ the environments that others inhabit (p.89)
I guess then, the real question is how we as TEL edvisors can make this work for us.
This is the section that lost me a little, though parts made sense. It’s something to do with the way that separate practices might be aggregated as part of a larger issue – the way eating and exercise both sit within the issue of obesity, for example. Clearly obesity isn’t a practice, but it does encompass both of these practices and creates linkages that wouldn’t necessarily otherwise be there. This happens in part by tying monitoring into them and creating a discourse/meaning attached to them all.
The authors refer to this combination of the discourse and the monitoring via measurement technologies as “epistemic objects, in terms of which practices are conjoined and associated, one to another” (p.92). (And arguably monitoring and discourse create their own cyclical relationship)
They move on to expand on the significance of the elements of a practice (material, competence and meaning),
this time viewing them as instruments of coordination. In their role as aggregators, accumulators, relays and vehicles, elements are more than necessary building blocks: they are also relevant for the manner in which practices relate to each other and for how these relations change over time. (p.92)
Writing and reading – as competences rather than practices, I guess – occupy an important space here, in terms of the ways that they are vital to the dissemination of practices, meanings and techniques.
They discuss two competing ideas from other scholars in the field (Law and Latour): that practices need their elements to remain stable for significant periods of time in order to become entrenched, or that practices benefit from changes in elements that enable them to evolve. I don’t actually think that these positions are mutually exclusive.
I jotted down a number of stray thoughts as I read this chapter that don’t necessarily tie to specific sections, so I’ll just share them as is.
Is a technology a material or does it also carry meanings and competences?
Does research culture/practice negatively impact teaching practice? Isolated and competitive – essentially the antithesis of a good teaching culture.
Does imposter syndrome (in H.E. specifically) inhibit teachers from being monitored/observed for feedback? Does rewarding only teaching excellence inhibit academic professional development in teaching because it stops people from admitting that they could use help? Are teaching excellence awards a hangover from a research culture that is highly competitive? What if we could offer academics opportunities to anonymously and invisibly self-assess their teaching and online course design?
Is Digigogy (digital pedagogy) the ‘wicked problem’ that I’m trying to resolve in my research – in the same way that ‘obesity’ is an aggregator for exercise and eating as practices? I do like ‘digigogy’ as an umbrella term for TEL practices.
Where do TEL edvisors sit in the ‘monitoring’ space of TEL practices?
This ‘epistemic objects – cycle of monitoring/feedback and discourse’ is probably going to play a part in my research somehow. Maybe in CoPs.
So what am I taking away from this?
I guess it’s mainly that there are a lot of different ways in which practices (and performances of practices) are connected, which impacts how they evolve and spread. Monitoring and feedback – particularly when baked into the practice – is a big deal. The whole mobile-time thing feels like an interesting diversion, but the place of technology (and what exactly it is in practice-element terms) will be a factor, given that I’m looking at Tech Enhanced Learning. (To be honest though, I think I’m really looking at Tech Enhanced Teaching.)
In specific terms, it seems more and more like what I need to do is break down all of the individual tasks/activities that make up the practice of ‘teaching’ – or Tech Enhanced Learning and Teaching – and find the cross-over points with the activities that make up the practice of a TEL edvisor. I think there is also merit in looking at the competition between the practices of research for academics and teaching, which impacts their practice in significant ways.
On now to Chapter 7, where the authors promise to bring all the ideas together. Looking forward to it.
For people working in roles like mine in tertiary education – education designers, academic developers, learning technologists etc – one of our greatest challenges is being listened to and having our skills and knowledge recognised.
I think that adopting an overarching term for our roles such as TELT (Technology Enhanced Learning & Teaching) Advisor might be one way to address this.
Celia Whitchurch (2008) describes a sector of the workforce in Higher Education whose day to day work overlaps the teaching and administration areas – the so-called ‘third space professionals’. She refers to a broader set of staff members than I am here – she includes curriculum developers, student study skills advisors and more – but people who support and advise academics/teachers about teaching practices without actually teaching themselves certainly fit well into the third space category.
I’ve been involved in many discussions trying to find an umbrella term for people in these roles – the academic developers (people who train academics in teaching and learning), learning technologists (people who support the use and implementation of educational technology) and education designers/developers (people who help to design and build courses and learning resources). All of these people do more than the minimal descriptions that I’ve offered and the vast majority tend to do all three of these things at different times.
In the course of discussions with my colleagues, we have settled (for now) on Education Advisor as an umbrella term for our roles. Using Advisor rather than Support person was an important distinction for more than a few people because they felt strongly that Advisor puts us on a more equal footing.
We are frequently (but not exclusively) professional staff members which means that while we may have extensive experience in teaching and learning and qualifications to match, in the academic-centric culture of universities, because we are not teaching (or researching), we are not part of the tribe, we are not peers to the teachers we work with. We are Other. Even the academics that move over to roles in this area are sometimes jokingly referred to as having ‘gone over to the dark side’.
On a personal level, none of this bothers me overly. The vast majority of academics I work with are decent people who appreciate my support, and I enjoy the work that I do. Teaching & Learning and Research are the core reasons for being of universities, so I can understand how the culture of the institution tends to privilege the people working directly at the chalkface – or screenface, if you will. (And the research-face as well, of course. Yes, this term started well but…)
This culture also means that there is significant pressure on academics to demonstrate their value, both in their research and (to a lesser extent still, sadly) in their teaching practice. Knowledge is the currency of the academic. To admit that you don’t know something is therefore to make yourself vulnerable. It is assumed then that academics are experts in their field (reasonably so) and also in teaching.
The assumed expertise in teaching seems curious in some ways, given that teaching is a profession and a craft in its own right, and people working in this area at any level other than higher education are mandated to have relevant qualifications. There are, of course, many fantastic teachers among academics, but that’s often more by luck than design. Some do choose to undertake teaching qualifications or training, but in an institutional culture that strongly favours research over teaching, there is little incentive to do so.
Education Advisors, however, do tend to have these qualifications and training, as well as years of experience in teaching and learning. In spite of this, there is an intense reluctance among academics to seek or take pedagogical advice from education advisors. I don’t fully understand why this is, but I have some theories. Seeking or taking advice on teaching, I believe, is effectively seen as sending up a signal that they lack some of the core skills that define their value to the university. It might also come down to basic tribalism in some instances – education advisors aren’t in the teaching tribe, they’re professional staff (mostly), so what could they really offer? I’m sure there are other factors, and this may not mirror the experiences of all of my colleagues, but I’ve had university leaders say to me directly: “I’m going to hire an academic to support this project because they understand pedagogy”.
This is where being a TELT advisor is an advantage.
Yes, it grows a little tiresome being seen primarily as the first port of call for technical questions about the LMS or the lecture capture system or any of the other institutional ed tech tools when we know how much more we have to offer, BUT academics are far more willing to admit that they need help with education technology than with education. They’re not expected to know the tech, and this liberates them to be learners.
TELT knowledge is our ticket to the conversation about teaching and learning in our institutions. Rather than burning energy trying to demonstrate that we know more about teaching and learning than just the TELT side (which can still be what we make it), we should make the most of our niche.
Another key reason to do this is that the higher up the chain you go in tertiary education institutions, the more excitement there is about ‘innovation’ and the promise of education technology. Sometimes the excitement is because the executive actually see the benefits in teaching and learning terms and sometimes it is because it represents ‘doing something’ (and being seen to be ‘doing something’) and sometimes it is even just a matter of keeping up with the Joneses – or one-upping them. Whatever the reasons, and I hope I’m being pragmatic rather than cynical, being the local ‘experts’ in ed tech and innovation in TELT practices gives us more perceived value in these terms than other teaching support areas and creates more opportunities to do good.
So in a nutshell, we’re better off self-identifying as TELT advisors because it creates a niche, academics are more open to seeking advice and support in areas tied to technology and we sit comfortably in the innovation space, which is so hot right now.
(I’ll concede that it’s a clunky term but I’m yet to hear a better one that truly reflects our knowledge, skills and practices and which keeps the focus on teaching and learning)
This is a big post because it is about a journal article that covers some of the core issues of my thesis in progress. I’ve spent far longer looking over, dissecting and running off on a dozen tangents with it than I had expected. My highlights and scrawled notes are testament to that.
In a nutshell, King and Boyatt attribute the success (or otherwise) of the adoption of e-learning in their university to three key factors: institutional infrastructure, teacher attitudes and knowledge, and perceived student expectations. This seems like a reasonable argument to make, and they back it up with some fairly compelling evidence that I’ll expand on and respond to shortly.
They use this to generate a proposed action plan that includes a coherent and detailed university-level e-learning strategy – with adequate resourcing for technological and pedagogical support, academic development training, leadership, guidance, flexibility and local autonomy. Everything they propose seems reasonable and sane, yet (sadly) quite optimistic and ambitious. From their bios, I think the authors aren’t teachers themselves but education advisors like me, yet the perspective put forward in the article is very clearly an academic’s perspective. (Well, that of 48 academics from a range of disciplines, ages and years of teaching experience.) All the same, there were more than a few occasions when I read the paper and thought – “well, it’s fine to suggest communities of practice (or whatever), but even when we do set them up, nobody comes more than once or twice”.
I guess the main difference between this paper and my line of thinking in my research is that I want to know what gets in the way, and I didn’t get enough of that here. I also found myself thinking a few times that this kind of research needs to avoid falling into the trap of forgetting that teaching is only one (often de-prioritised, depending on the uni culture) part of an academic’s practice and we need to factor in the impact that their research and service obligations have on their ability to find time to do this extra training. To be completely fair though, the authors did recognise and note this later in the paper, as well as the fact that the section on perceived student expectations was only that – perceptions – and not necessarily a true representation of what students think or want. So they propose extending the study to include students and the university leadership, which seems pretty solid to me and helps to strengthen my personal view that this is probably a thing I’ll need to do when I start my own research. (I’m still in proposal/literature review/exploration swampland for now). To this I would probably add the affordances of the technology itself and also the Education Advisor/Support staff that can and would help drive much of this.
This paper sparked a number of ideas for me but perhaps the most striking was the question of what the real or main reasons for implementing e-learning and TELT are. Is it simply because it can offer students a richer and more flexible learning experience, or because it makes a teacher’s life easier, or brings some prestige to a university (e.g. MOOCs), or (in the worst and wrongest case) because it is perceived as a cost-saving measure? There is no reason that it can’t be all of these things (and more) and that makes a lot of sense, but some of the quotes from teachers in the article do indicate that they are more motivated to adopt new tools and teaching approaches if they can see an immediate, basically cost-free benefit to themselves. Again, I’m not unsympathetic to this – everyone is busy and if you’re under pressure to output research above all else, it’s perfectly human to do this. But it speaks volumes firstly about the larger cultural questions that we must factor in to explorations of this nature and secondly about the strategic approaches that we might want to take in achieving the best buy-in.
From here, I’ll include the notes that I took that go into more specifics and also include some quotes. They’re a little dot pointy but I think still valuable. This is most definitely a paper worth checking out though and I have found it incredibly useful, even if I was occasionally frustrated by the lack of practical detail about successfully implementing the strategies.
“In addition, the results suggest that underpinning staff motivation to adopt e-learning is their broader interest in teaching and learning. This implies a bigger challenge for the institution, balancing the priorities of research and teaching, which may require much more detailed exploration” (p.1278)
Glad to see this acknowledged.
This paper focuses on Adoption. What are the other two phases in the Ako paper?
Initiation (a.k.a adoption), Implementation and Institutionalisation
Getting people to start using something is a good start but without a long term plan and support structure, it’s easy for a project to collapse. The more projects collapse, the more dubious people will be when a new one comes along.
Feel like there are significant contradictions in this paper – the need for central direction/strategy as well as academic autonomy. Providing people with a menu of options is good and makes sense but that makes for a huge and disparate strategy.
The three core influencing factors identified. (How well are they defined?)
Includes: institutional strategy, sufficient resources (to do what?), guidance for effective implementation.
Question of academic development training is framed with limited understanding of the practicalities of implementation. Assumption that more resources can simply be found and allocated with no reciprocal responsibilities to participate.
Support needs identified:
Exploration of available tools and the development of the skills to use them
Creating resources/activities and piloting them
Developing student skills in using the tools
Engaging with students in synchronous and asynchronous activities
Monitoring and updating resources
Unclear over what time frame this support is envisioned. Presumably it should be ongoing, which would necessitate a reconsideration of current support practices.
“Participants suggested the need for a more coordinated approach. A starting point for this would be consideration of how available technologies might be effectively integrated with existing pedagogic practices and systems” (p.1275)
Issues basically boil down to leadership and time/resourcing. Teachers seem to want a lot in this space – “participants in this study reported the lack of a coherent institutional-wide approach offering the guidance, resources and recognition necessary to encourage and support staff.” At the same time, they expect “ongoing consultation and collaboration with staff to ensure a more coherent approach to meet institutional needs” (both p.1277).
If you want leadership but you also want to drive the process, what do you see leadership as providing? I do sympathise – this largely looks like a reaction to not feeling adequately consulted – however my experience with many consultation attempts in this space is that very few people actually contribute or engage. (This could possibly be a good question to ask – phrased gently – what actions have you taken to participate in existing consultation and collaboration processes in ed tech?)
“A further barrier to institutional adoption was the piecemeal approach to availability of technologies across the institution. Participants reported the need for a more coordinated approach to provision of technologies and their integration with existing systems and practices” (p.1277)
Probably right, but it clashes with their other requests for an approach that reflects the different disciplinary needs in the uni. How do we marry the two? How much flexibility is it reasonable to ask of teachers?
Staff attitudes and skills
Is this where “culture” lives?
“including their skills and confidence in using the technology” (p.1275)
“A key step for broadening engagement is supporting staff to recognise the affordances of technology and how it might help them to maintain a high-quality learning experience for their students.
[teacher quote] There’s a lot of resistance to technology but if you can demonstrate something that’s going to reduce amount of time or genuinely going to make life easier then fine” (p.1275)
They want to know more about what the tech can do – a question here is, for whom? Making teaching easier or making learning better? The quote suggests the former.
What about their knowledge of ePedagogy? (I need to see what is in the Goodyear paper about competencies for teachers using eLearning. Be interesting to compare that to the Training Packages relating to eLearning too)
A big question I have – particularly when considering attitudes relating to insecurity and not knowing things, which some people will be reluctant to admit and will instead find other excuses/reasons for avoiding ed tech (”it’s clunky” etc.) – is how we can get past these and uncover people’s real reasons. It seems like a lot of this research is content to take what teachers say at face value and I suspect that this means that the genuine underlying issues are seldom addressed or resolved. There are also times when these attitudes can lead to poor behaviour – rudeness or abruptly dropping out of a discussion. (Most teachers are fine but it is a question of professionalism and entitlement, which can come back to culture)
In terms of addressing staff confidence, scaffolded academic dev training, with clear indicators of progress, might be valuable here. (Smart evidence – STELLAR eportfolios – Core competencies for e-teaching and some elective/specialisation units? This is basically rebuilding academic development at the ANU from the ground up)
“The findings highlighted the importance of a pedagogic-driven approach to implementation that supports staff in recognising the potential of technology to add value to students’ learning experiences. While staff recognised that support was available centrally, they suggested that it needed to be more closely tailored to the specific needs of staff and extended to include online guidance at point of need and communities of practice that facilitated sharing between colleagues” (p.1278)
This seems to strengthen the case for college/school level teams. I am well aware that teachers tend not to engage with academic development activities and resources outside their discipline area – which I think is partially tribal, because the Bennett literature suggests that there are actually few differences in teaching design approaches from discipline to discipline. This seems like a good area for further investigation. What kind of research has been conducted into the effectiveness of (or desire for) centralised academic development units vs those at a college level?
Perceived student expectations
Definition: Students expect their online learning world to match the rest of their online experiences.
“One student expectation reported was the availability of digital resources accessible anytime and anywhere: participants suggested that students expected to access all course materials online including resources used as part of face-to-face sessions and supplementary resources necessary to complete assignments.” (p.1276)
Seems like there are a lot of (admittedly informed) assumptions being made in this section by the teachers about what students actually want. Maybe it is reasonable to say that everyone wants everything to be easier. But when does it become too easy? When they don’t need to learn how to research?
Student need to learn how to e-Learn
“These findings suggest that for successful implementation of e-learning, students need to be supported to develop realistic expectations, an understanding of the implications of learning with technology and skills for engaging in these new ways of learning and make the most out of the opportunities that they present” (p.1277)
Interestingly phrased outcome – DO students need to learn more about the challenges of teaching and/or the mechanisms behind it? Is this just about teachers avoiding responsibilities? It sounds a bit like being expected to study physics or road-building before going for a drive.
“However students confidence with online tools and resources was perceived to vary and the finding suggest that students need to be supported to develop skills to engage effectively with the opportunities that e-learning affords…
It is not clear whether this is an accurate portrayal of student views or whether staff attributed their own views to the students. It would be valuable to ascertain whether this perception is a true representation by repeating the study with students.” (p.1278)
Again, nice work by the authors in catching the difference between student perspectives and teacher assumptions. I guess the important part is that whether the students hold the views or not, the teachers believe they do and this motivates them to use the technology.
Students don’t want to lose F2F experiences and they don’t want eLearning forced upon them when it seems like a cost-cutting measure. They do want (and expect) resources to be available online.
Proposed elearning strategy
“Reflecting on the factors that influenced the adoption of e-learning, participants suggested the need for an institutional strategy that :
Provides a rationale for its use
Sets clear expectations for staff and students
Models the use of innovative teaching methods
Provides frameworks for implementation that recognise different disciplinary contexts
Demonstrates institutional investment for the development of e-learning
Offers staff appropriate support to develop their skills and understanding” (p.1277)
I’d add an additional item – Offers staff appropriate support to develop and deliver resources and learning activities in TELT systems.
I have a lot of questions about this strategy – what kinds of expectations are we talking about? Is this about the practical realities of implementing and supporting tools/systems, recognising the limits of their affordances? Modelling the use of innovative teaching practices – just because something is new doesn’t mean that it is good. I’d avoid this term in favour of best practice and/or emerging. Is modelling really a valid part of a strategy, or would it be better to include modelling/showcasing as one of the activities that will achieve the goals? The goals, incidentally, aren’t even referred to. (Other than the rationale, but I suspect that isn’t the intent of that item)
Overall I think this strategy is an ok start but I would prefer a more holistic model that also factors in the other areas of an academic’s responsibilities in research and service. The use of “e-learning” here is problematic and largely undefined. There’s just an assumption that everyone knows what it is and takes a common view. (Which is why TELT is perhaps a better term – though I still need to spend some time explaining what I – and the literature – see TELT as)
Face to face support complemented by online guidance (in what form?)
Facilitated CoPs to support academics sharing their experiences. (Can we anonymise these?? – visible only to teachers (not even exec). If one of our problems is that people don’t like to admit that they don’t know something, let them do it without people knowing. )
Wider marketing of support services in this space to academics. (I don’t buy this – I think that teachers get over marketed to now by all sections of the university and I’ve sent out a lot of info about training and support opportunities that get no response at all)
Faculty or departmental e-learning champion. (Is that me or does it need to be an academic? Should we put the entire focus onto one person or have a community? Maybe a community with identifiable (and searchable) areas of expertise)
Big question – how many people use the support that is currently available and why/why not?
My questions and ideas about the paper:
Demographics of the sample are reasonably well spread – an even gender split, every faculty, a wide distribution of age and teaching experience as well as use of TELT. No mention of whether any of the participants are casual staff members, which seems an important factor.
It’s fine to look at teaching practices but teaching doesn’t exist in a vacuum for academics. They also have research and service responsibilities and I think it would be valuable to factor the importance of these things in the research. The fact that nobody mentions them – or time constraints – suggests that they weren’t part of the focus group or interview discussions.
My overall take on this – the authors expand on previous work by Hardaker and Singh 2011 by adding student expectations to the mix. I’d think there is also a need to consider the affordances of existing technology (and pedagogy?) and perhaps also a more holistic view of the other pressure factors impacting teachers and the university.
“The findings highlighted the importance of a pedagogic-driven approach to implementation that supports staff in recognising the potential of technology to add value to students’ learning experiences.” (p.1278)
There are a lot of reasons that TELT is actually implemented in unis and while this might be claimed as the highest priority, I would be surprised if it made the top 5. Making life easier for the uni and for teachers, compliance, cost-cutting, prestige/keeping-up-with-the-Joneses and canny vendors all seem quite influential in this space as well. Understanding how the decisions driving TELT implementations are made seems really important.
King, E., & Boyatt, R. (2015). Exploring factors that influence adoption of e-learning within higher education: Factors that influence adoption of e-learning. British Journal of Educational Technology, 46(6), 1272–1280. https://doi.org/10.1111/bjet.12195
I was recently invited by @UQKelly – Kelly Matthews of the University of Queensland – to attend the National Students as Partners Roundtable on a glorious Brisbane Spring day. (For which I am grateful almost as much for the chance to escape a particularly bleak Canberra day as for the exposure to some interesting ideas and wonderful people working in this space). This isn’t an area that I’ve had much to do with and I was invited to bring a critical friend/outsider perspective to proceedings as much as anything.
Students as Partners (which I’ll shorten to SaP because I’ll be saying it a lot) more than anything represents a philosophical shift in our approach to Higher Education; it doesn’t seem like too great a stretch to suggest that it almost has political undertones. These aren’t overt or necessarily conventional Left vs Right politics but more of a push-back against a consumerist approach to education that sees students as passive recipients, in favour of the development of a wider community of scholarship that sees students as active co-constructors of their learning.
It involves having genuine input from students in a range of aspects of university life, from assessment design to course and programme design and even aspects of university governance and policy. SaP is described as more of a process than a product – which is probably the first place that it bumps up against the more managerialist model. How do you attach a KPI to SaP engagement? What are the measurable outcomes in a change of culture?
The event itself walked the walk. Attendance was an even mixture of professional education advisor staff and academics and I’d say around 40% students. Students also featured prominently as speakers though academics did still tend to take more of the time as they had perhaps more to say in terms of underlying theory and describing implementations. I’m not positive but I think that this event was academic initiated and I’m curious what a student initiated and owned event might have looked like. None of this is to downplay the valuable contributions of the students, it’s more of an observation perhaps about the unavoidable power dynamics in a situation such as this.
From what I can see, while these projects are about breaking down barriers, they often tend to be initiated by academics – presumably because students might struggle to get traction in implementing change of this kind without their support and students might not feel that they have the right to ask. Clearly many students feel comfortable raising complaints with their lecturers about specific issues in their courses but suggesting a formalised process for change and enhancements is much bigger step to take.
The benefits of an SaP approach are many and varied. It can help students to better understand what they are doing and what they should be doing in Higher Education. It can give them new insights into how H.E. works (be careful what you wish for) and help to humanise both the institution and the teachers. SaP offers contribution over participation and can lead to greater engagement and the design of better assessment. After all, students will generally have more of a whole of program/degree perspective than most of their lecturers and a greater understanding of what they want to get out of their studies. (The question of whether this is the same as what they need to get out of their studies is not one to ignore however and I’ll come back to this). For the students that are less engaged in this process, at the very least the extra time spent discussing their assessments will help them to understand the assessments better. A final benefit of actively participating in the SaP process for students is the extra skills that they might develop. Mick Healey developed this map of different facets of teaching and learning that it enables students to engage with. A suggestion was made that this could be mapped to more tangible general workplace skills, which I think has some merit.
As with all things, there are also risks in SaP that should be considered. How do we know that the students who participate in the process are representative? Several of the students present came from student politics, which doesn’t diminish their interest or contribution, but I’d say it’s reasonable to note that they are probably more self-motivated and driven by a wider range of factors than some of their peers. When advocating for a particular approach in the classroom or assessment, will they unconsciously lean towards something that works best for them? (Which everyone does at some level in life). Will their expectations or timelines be practical? Another big question is what happens when students engage in the process but then have their contributions rejected – might this contribute to disillusionment and disengagement? (Presumably not if the process is managed well, but people are complicated and there are many sensitivities in Higher Ed)
To return to my earlier point, while students might know what they want in teaching and learning, is it always what they need? Higher Ed can be a significant change from secondary education, with new freedoms and responsibility and new approaches to scholarship. Many students (and some academics) aren’t trained in pedagogy and don’t always know why some teaching approaches are valuable or what options are on the table. From a teaching perspective, questions of resistance from the university and extra time and effort being spent for unknown and unknowable outcomes should also be considered. None of these issues are insurmountable but need to be considered in planning to implement this approach.
Implementation was perhaps my biggest question when I came along to the Roundtable. How does this work in practice and what are the pitfalls to look out for? Fortunately there was a lot of experience in the room and some rich discussion about a range of projects that have been run at UQ, UTS, Deakin, UoW and other universities. At UoW, all education development grants must now include a SaP component. In terms of getting started, it can be worth looking at the practices that are already in place and what the next phase might be. Most if not all universities have some form of student evaluation survey. (This survey is, interestingly, an important part of the student/teacher power dynamic, with teachers giving students impactful marks on assessments and students reciprocating with course evaluations, which are taken very seriously by universities, particularly when they are bad).
A range of suggestions and observations for SaP implementations were offered, including:
Trust is vital, keep your promises
Different attitudes towards students as emerging professionals exist in different disciplines – implementing SaP in Law was challenging because content is more prescribed
Try to avoid discussing SaP in ‘teacher-speak’ too much – use accessible, jargon-free language
Uni policies will mean that some things are non negotiable
Starting a discussion by focusing on what is working well and why is a good way to build trust that makes discussion of problems easier
Ask the question of your students – what are you doing to maximise your learning
These images showcase a few more tips and a process for negotiated assessment.
There was a lot of energy and good will in the room as we discussed ideas and issues with SaP. The room was set up with a dozen large round tables holding 8-10 people each and there were frequent breaks for table discussions during the morning and then a series of ‘world cafe’ style discussions at tables in the afternoon. On a few occasions I was mindful that some teachers at the tables got slightly carried away in discussing what students want when there were actual, real students sitting relatively quietly at the same table, so I did what I could to ask the students themselves to share their thoughts on the matter. On the whole I felt a small degree of scepticism from some of the students present about the reality vs the ideology of the movement. Catching a taxi to the airport with a group of students afterwards was enlightening – they were in favour of SaP overall but wondered how supportive university executives truly were and how far they would let it go. One quote that stayed with me during the day as Eimear Enright shared her experiences was a cheeky comment she’d had from one of her students – “Miss, what are you going to be doing while we’re doing your job?”
On the whole, I think that a Students as Partners approach to education has a lot to offer and it certainly aligns with my own views on transparency and inclusion in Higher Ed. I think there are still quite a few questions to be answered in terms of whether it is adequately representative and how much weighting the views of students (who are not trained either in the discipline or in education) should have. Clearly a reasonable amount but students study because they don’t know things and, particularly with undergraduate students, they don’t necessarily want to know what’s behind the curtain. The only way to resolve these questions is by putting things into practice and the work that is being done in this space is being done particularly well.
For a few extra resources, you might find these interesting.
One constant in my experience as an education support person over 13 years is that generating excitement about professional development activities relating to teaching and learning can be a challenge. I don’t think this is because teachers aren’t interested in their teaching practice or that they believe that there is nothing more to know (well, in most cases); it’s often just another activity competing for scarce time. Calculations have to be made about the effort vs the reward and often the reward simply isn’t sufficient unless it has been mandated in some way (or offers some kind of formal accreditation – or sandwiches and cake).
Gamification (if you don’t already know) is the practice of using game elements (rules, competition, challenges, winning, points, prizes, badges etc) to motivate behaviour in non-game contexts. It’s been used in commerce for decades (consider frequent flyer programs where you earn points towards rewards and level up to better perks) and it has been explored actively in education for about a decade. (This is separate in some ways to the use of play and games in education, which arguably has been happening for as long as we have had education)
I’ve had an interest in game based learning and gamification for a while now – my previous blog was called Gamerlearner and this is still my “brand” in educational social media. (I switched over to Screenface to be able to focus on wider TELT issues).
I’ve been conscious of the fact that while I’ve been doing pretty good work in supporting TELT in my college, there hasn’t been as much happening in the professional development / academic development space as I would’ve liked. (As a one man team, I’m not going to be too hard on myself about this but it still bugged me).
So a couple of weeks ago, I spoke to our Associate Dean (Education) and launched STELLAR as a pilot. A very very beta-y pilot with a lot of elements really not worked out at all. (This was made clear to participants). The plan is to run the pilot over September and use this experience to design a full scale version to run in Semester 1, 2017. Participants earn points for engaging in a range of professional development activities and the winners get a fancy dinner out.
STELLAR stands for Scholarship of Technology Enhanced Learning, Leadership And Research. To be honest, it’s a slightly clunky backronym designed to work with a stars theme. Because I think people like to be seen as stars, it’s a nice, easy visual theme, and putting stars into teams (which was a goal – even small teams) lets us start talking about constellations. I also like that it means that I get to call myself Starlord in my daily STELLAR emails.
At the half way mark, I’ve got a set of activities in place that academics can use to earn points.
(At some point I want to cluster these to enable collection type activities and rewards. I also plan to map them to Bartle’s player types and a few other things to check that there is a good spread of kinds of activities). These can be found in this Google Doc as well as in a page in the Moodle course that I’m using to house resources, organise groups and track activities.
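The clustering idea could be sketched as a simple “collection completed” check – the cluster names and activities below are entirely invented for illustration, not my actual activity list:

```python
# Hypothetical sketch of "collection" rewards: a cluster counts as complete
# when a player has logged every activity in it. All names are invented.
CLUSTERS = {
    "Explorer": {"try-new-tool", "attend-demo", "post-review"},
    "Socialiser": {"comment-on-peer", "join-constellation"},
}

def completed_clusters(logged_activities):
    """Return the names of clusters fully covered by a player's logged activities."""
    done = set(logged_activities)
    # A cluster is complete when its activity set is a subset of what was logged.
    return {name for name, acts in CLUSTERS.items() if acts <= done}

print(completed_clusters({"try-new-tool", "attend-demo", "post-review", "join-constellation"}))
# → {'Explorer'}  (Socialiser is missing "comment-on-peer")
```

Something this simple would at least make collection rewards checkable automatically rather than by eyeballing the spreadsheet.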
I’ve been trying to encourage spot activities – e.g. you have 24 hours to upload a scholarly selfie to the Gallery – but so far there hasn’t been much engagement. I’ve been lucky that our central TEL team has been running a “coffee course” over the last week relating to the Flipped Classroom. This involves short learning chunks posted on a blog that take around 10 minutes to complete and include the option to leave a comment. (This idea draws from work by Sarah Thorneycroft at UNE). I’ve been pushing this hard and offering generous points for attending and commenting. I’m happy to say that of the 17 participants in STELLAR, at least six that I know of have signed up and five have been the main posters in the coffee course.
Now that the coffee course is over, I’m mindful of the need to maintain momentum so really have to come up with some further activities to encourage people to engage in. We ran a small (2 people) session on Thursday last week about the new ePortfolio tool that the university has introduced and one of our lecturers that is currently using it was generous enough with her time to share her experiences. Hearing “on the ground” stories from peers makes a huge difference.
In terms of the site itself, I’ve been strongly encouraging team play which requires the use of groups (Constellations) to make the most out of the Moodle functionality. This has been much harder than expected, with most people preferring to play solo. I’ve been asking them to join one person groups and now half of the course is in groups. A major reason for trying to encourage group play (ideally 2-4 max) is to foster greater collaboration and discussion in the schools of the college. I appreciate that academic research can be a very solitary pursuit but teaching doesn’t need to be. For all that I read about Communities of Practice in teaching, the culture in my college just doesn’t seem interested yet – particularly at any kind of scale. (As the old saying goes, our university is 70 schools united by a common parking problem)
I’ve set up a leaderboard which is group based only and also set up visible topics that are only accessible by group members but the hold-outs haven’t budged. (These are also the people that have tended to engage less with the course in these first two weeks – in fairness, this has also been the mid-semester break when a lot of marking is done as well as organising applications for research grants). I’m a little conflicted about what to do with this – I’ve made it clear that if people want to play solo it’s fine but it would help if they were attached to a team. As an admin I can just put them in teams but given that “play is a voluntary activity” (Whitton, 2014, p.113), I’m hesitant to force behaviour. (Which isn’t to say that I’m not using game based strategies – fear of missing out and nagging/feedback – to encourage it)
One lecturer – who generally has been engaging – mentioned to me last week that he wasn’t sure what he is meant to be doing. While I’ve been sending out regular emails, they have perhaps been less succinct than I’d like and more fixated on the set up and mechanics of the game rather than the professional development activities that I’m trying to promote. This is definitely a thing to improve quickly.
I’ve been thinking about the games that I enjoy playing – particularly video games – and there is certainly much more direction given, particularly early on. At the same time, these tend to be much more narratively oriented and I don’t have a story running in STELLAR yet. I toyed with the idea of everyone being astronauts and needing to build their ship by earning points which buy parts etc etc but have serious questions about whether this is going too far off track for people in a college of economics and business.
One thing I would dearly like to achieve is to start building a rich collection of learning resources – including case studies/exemplars of good practice locally and research papers into various topics. Having this created collectively would be a fantastic outcome.
I’ve also been making limited use of the idea of random drops. These are unexpected prizes that a player sporadically wins/gets in video games for no particular reason, but the possibility that it might happen is used as a motivator. I got 10 coffee vouchers from our local cafe and have been giving Shooting Star spot prizes mostly when people do something new – first suggestion for an improvement, first addition to the glossary, first person to attend a face-to-face event etc. This system needs some refinement and will benefit from being less arbitrary. My hope is that by announcing the random drops in the daily emails, I am maintaining interest from the people that haven’t yet won one. Maybe a thing to do will be to highlight that these are being won for being the first to do something.
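One way to make the drops less arbitrary would be a fixed per-action chance rather than my judgement call – a minimal sketch, with the drop rate and function names my own invention rather than anything actually implemented in STELLAR:

```python
import random

# Hypothetical random-drop roll: each qualifying action has a fixed chance of
# triggering a Shooting Star spot prize. The 15% rate is an assumed example.
DROP_CHANCE = 0.15

def roll_for_drop(rng: random.Random, chance: float = DROP_CHANCE) -> bool:
    """Return True if this action triggers a spot prize."""
    return rng.random() < chance

# Seeding makes the behaviour reproducible when testing the mechanic.
rng = random.Random(42)
drops = sum(roll_for_drop(rng) for _ in range(1000))
print(drops)  # roughly 150 of 1000 simulated actions
```

The appeal of a fixed rate is transparency: everyone faces the same odds on every action, so the anticipation stays but the favouritism question goes away.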
The scoring system is something of a chore – I’m using the gradebook system in Moodle, which has meant creating a separate assessment item for each individual activity that people can participate in. I’m also keeping a separate Excel spreadsheet because it’s easier to track (in some ways), and I need to manually update both. I’ve asked people to claim points in a discussion forum post, but I’m aware that this is entering an unfun grey area of administrivia. What I really want is for people to be sharing what they’ve done in professional development and sharing their learning with the group, and I should find a way to reframe it as such. Or automate it more. I can grade some items that are done in Moodle activities, but mostly things have been happening externally that I’m tracking. I’m also fairly conflicted about this tracking – for example, I’ve seen people posting in the coffee course and have been giving them the points that they’ve been earning for this. Many of them haven’t been claiming these points through the forum – at least not after the first day. It’s no secret that I’m also in the coffee course, because I’m posting comments there as well, but if people are earning points for this kind of activity that I’ve seen them doing, is it a little weird?
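The double entry into the gradebook and the spreadsheet could at least be reduced by keeping a single activity log and generating totals from it. A minimal sketch of the idea – the activity names and point values are invented, and a real version would read from an exported claims file and write out something Moodle’s gradebook import could accept:

```python
import csv
import io

# One row per claimed activity; in practice this would be exported from
# the claims forum or a shared sheet. The data here is invented.
ACTIVITY_LOG = """player,activity,points
Ana,coffee course comment,5
Ana,glossary entry,10
Ben,coffee course comment,5
Cal,suggested improvement,10
Ana,attended f2f event,15
"""

def tally(log_csv):
    """Return total points per player from a CSV activity log."""
    totals = {}
    for row in csv.DictReader(io.StringIO(log_csv)):
        totals[row["player"]] = totals.get(row["player"], 0) + int(row["points"])
    return totals

print(tally(ACTIVITY_LOG))  # {'Ana': 30, 'Ben': 5, 'Cal': 10}
```

Even this much would mean one place to record a claim and the leaderboard numbers falling out automatically, rather than two systems to reconcile by hand.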
Digital badges are something that I’m keen to explore, and I’ve created some tied to the random drop prizes, but we have massive institutional hurdles with badges and our Moodle instance doesn’t support them yet.
I’ve had several other grand ideas that I simply haven’t had time to implement yet. For the groups/constellations, I’d like to have a star field present that grows as they earn more points/stars. So they begin with just their constellation on a black background, but a small star appears when they get 10 points, or a new constellation when they complete a cluster of activities. Again, when it’s a matter of manual handling, it’s a labour-intensive activity.
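One low-labour option for the growing star field would be generating a simple SVG from each team’s points total, so the image regenerates itself rather than needing manual editing. A rough sketch under the one-star-per-10-points rule described above – the sizing, colours and seeding are all my own assumptions:

```python
import random

def star_field(points, width=400, height=300, points_per_star=10, seed=1):
    """Return an SVG star field with one star per 10 points earned.
    A fixed seed keeps existing star positions stable as new ones appear."""
    rng = random.Random(seed)
    stars = []
    for _ in range(points // points_per_star):
        x, y = rng.randint(0, width), rng.randint(0, height)
        stars.append(f'<circle cx="{x}" cy="{y}" r="2" fill="white"/>')
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<rect width="100%" height="100%" fill="black"/>'
        + "".join(stars)
        + "</svg>"
    )

svg = star_field(85)  # 85 points -> 8 stars on a black background
```

Because the random generator is seeded, rerunning with a higher score only adds stars; the ones a team has already “earned” stay where they were.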
Anyway, that’s the broad strokes of STELLAR. There are twice as many participants as I was expecting (and this is at a time when many people are away), so I’m quietly pleased with our progress, but I’m also well aware that sustaining interest and activity is going to be a challenge when semester resumes on Monday.
More than anything though, it’s nice to finally be walking the walk after talking the talk for such a very long time.
Academic development refers to the professional development of academics – which makes sense when you think about it. Evidently I hadn’t thought about it a lot, because until I skim-read these five papers I had put academic developers in the same broad (and perhaps vague) category as education designers and learning technologists: people working with teachers/academics to support teaching and learning and to develop resources.
I had just assumed that given that the terminology hasn’t really been settled yet (consider blended/flexible/online/technology-enhanced/e-learning), people have been using the terms that they prefer. (I’ve been toying with Director of Education Innovation as a new title but apparently that will upset the Directors of our schools, so that won’t fly).
Anyway, this was the first of a few realisations that I’ve had in the last week of trying to get my research back on track – ironically enough perhaps while I’ve been in the midst of a major academic development project of my own. (STELLAR – which will get its own post shortly).
Recognising that I need to move on to a new topic of exploration in my holistic overview of the central elements in supporting TELT practices in Higher Ed., but also feeling that I haven’t yet covered Education Support Staff (ESS) adequately, I decided to take the temperature of ESS research via five papers. (I’ve also been concerned that, while the deep reading I’ve been doing has been valuable, I’m spending too long on individual papers and chapters in the process.) I allocated a single 25-minute pomodoro period to each of these new papers, including writing notes. Admittedly, I’ve decided that I still need to read four of the five papers in full, and may well come back to them in the next topic anyway. (However, as a result of these papers and some other recent thinking, I changed my initially planned ‘next topic’ from Universities as Organisations to Teachers, so this still feels like progress.)
In a nutshell, as I’ve been looking at research relating to education support staff over the last couple of months, I’ve probably been in my own tribal mindset. I do still believe that there are significant cultural factors at play in higher ed. that mean knowledge and experience aren’t always appropriately used or recognised if you’re not in the academic tribe, and this is an area to work on. There is also an incredibly diverse range of reasons for this, some more understandable than others. I have to admit that I’ve not been as open to the more understandable (and valid) ones as I should have been, and that empathy is always an important part of communication and collaboration.
So after this post on the matter, I’m going to take a first pass at my lit review relating to ESSes and focus on the academic/teacher side. (Ultimately, people that teach are teachers, and this is the side of academics’ work that I’m looking at – it’s also a more meaningful term in this context – but I realise that terminology is perhaps more important than I thought.)
These are my quick responses to the papers that I skimmed:
This is a particularly insightful paper that uses “the discourse analytic method of ‘interpretative repertoires’ (Potter & Wetherell, 1987)” (p.15) to consider issues in academic development, with a particular focus on education technology and changing teaching practices.
Hannon essentially distills the approaches into ‘enabling’ and ‘guiding’ and interviews 25 individuals working with education technology (including academics and ESSes) about their experiences in one university in this space.
He identifies four main differences in the ways that practice is organised:
Developing staff or developing courses (p.19)
Implementing or adapting institutional strategy (p.20)
Drawing together – systems or community (p.22)
Reframing technology or reframing the user (p.23)
Ultimately, Hannon finds that:
it is neither institutional strategy nor learning technologies that impose these constraints, rather the discourse or repertoires associated with their operationalisation (p.27)
I’ll certainly be coming back to this paper in the future.
Hicks looked at issues more in relation to the role of Academic Developers – and people working in Education Support units – as ‘change agents’, caught between the strategic requirements and priorities of the university executive and the needs of teachers and learners.
She felt that the voice of academic developers is seldom heard in research in this field, and she takes time to address this within a Bourdieuian framework emphasising social systems, inviting developers to participate in a number of focus groups.
Hicks’ paper sits well alongside most of the other papers that I have looked at already, with a focus on the tensions between academic and professional staff as well as academic staff and ‘management’ – with the ESSes torn between the two and underutilised.
This paper may be a useful source of additional supporting quotes and could also be worth reviewing when I get to university as an organisation.
David Boud is a major figure in research into Higher Education in Australia (Angela Brew presumably is as well, but it’s Boud that I’ve heard more about to date), so I was keen to read this one.
The idea of practice theory (Kemmis) is something that I keep coming across (it has also been suggested by my supervisor), and it’s at the heart of this paper. In a nutshell, it’s about framing academic work as practice and considering three key foci:
practice development, fostering learning-conducive work and deliberately locating activity within practice. It also suggests that academic development be viewed as a practice (p.208)
Given that my new area of exploration is teachers/teaching/academics, this is a timely examination of academic practice that I will absolutely be delving into in far greater depth. It also offers a nice bridge between these two areas and I think it will also help me to inform my other (professional) work.
This paper presents a solid overview of tribalism in academia and the emergence of Higher Education as a field of study in its own right that needs to be claimed by academic developers. (I’d wonder whether an idea of “academy developers” is more fitting here).
One thing that I’ve come to realise in this sector is that trying to take on organisational cultural issues directly is unproductive, so while I’d prefer tribalism to be replaced with the embrace of a broader notion of being part of a collaborative community of scholars, I realise that it won’t happen any time soon. I guess the real questions are: do the members of a tribe respect the knowledge of another tribe, and is teaching and learning in Higher Education something that can be owned by one tribe? Perhaps something more along the lines of tribal elders – strictly in the H.E T&L discipline area, never the ‘academy’ itself – could work?
When it comes to the role of ESS, I note that the authors quote Rowland et al (1998), which has popped up in most of these papers and is high on my list of future reading. It’s a fairly brutal quote however.
[a]t best, they [i.e. academics] view those in these [academic development] units as providing a service to help them teach. At worst, they ignore them as lacking academic credibility and being irrelevant to the real intellectual tasks of academic life. (Rowland, Byron, Furedi, Padfield & Smyth, 1998, p.134) (p.10)
This is certainly another paper to read in full as I explore the idea of academic work and teaching.
This final paper by Lee and McWilliam leans heavily on Foucault and “games of truth and error” and a fairly specific idea of irony. It again explores the tensions that academic developers encounter in the space between executive/management priorities and teacher needs. As someone that hasn’t yet explored Foucault, I imagine it might be of value if this is the theoretical direction that I choose, but for the most part I just felt that I didn’t get the joke.
Ok, so hopefully this gives me a decent starting point for writing something about the literature as it relates to education support staff. (Obviously there is always more to explore, but the best writing is the writing that you’ve actually done, and having something to show will make it easier to find the gaps – both in the ideas covered in the research and in what I have and haven’t been reading.)
According to my frequently revised project plan for my thesis proposal, I should now move on to my next topic for exploration, which was initially the University as Organisation but based on recent readings and discussions, it makes more sense to shift across to academics/teachers.
While I still feel that I haven’t read enough – but am assured that this feeling never goes away – I think it’s time to write up what I have found in the literature so far, understanding that this is the first of many drafts. Because I’ve been feeling that I’m not reading enough – or quickly enough – I got five more papers relating to academic development with the intention of skim reading them to identify core ideas and see which ones I should come back to in greater depth. I dedicated a 25 min pomodoro to each paper which generally included note taking.
I think I’ll actually put these into a separate post, but my main outcome was that my understanding of the terms “academic developer” and academic development seems to differ somewhat from the community’s. To be honest, I’ve not really given the different terms a lot of thought, assuming that as a nascent field, eLearning is yet to settle on broadly accepted language for people in education support roles, and that education designer / learning technologist / academic developer are all fairly interchangeable. As it turns out, an academic developer actually develops academics – which is to say, provides training and advice in teaching and learning to lecturers. There was little suggestion in the literature that they have anything to do with making things, building course resources or taking a larger view of education technology. (Well, that’s an oversimplification.)
In conjunction with a presentation on Monday from the always astute Professor Sue Bennett (University of Wollongong) at a local teaching and learning day – where she made a strong point that academics/teachers need to own education design rather than being “designed at” by education support types – I’ve realised that much of my focus over the last month or two has been from the education support perspective (with a lengthy detour into academic/professional divide territory), and shifting my frame to teachers makes a lot of sense.
In broad terms, I’m well aware that there are a great many factors at play in the success of TELT practices in Higher Ed – I’ve not even gone near the pedagogy, theory or material aspects yet – but I guess my personal experiences have led me to a point where the key seems to be the human elements. We can create the optimal environment with the most supportive conditions for success in the world, but if the people (university managers, academics, students and professionals) don’t engage or even actively resist (for a host of not always rational reasons), very little will be achieved. For me, it seems that understanding why people hold the attitudes that they do and what the best approaches are to work with these offers the greatest chance of successful change.
The question of change itself is an interesting one – it’s basically assumed that this is needed and desirable, presumably because we are in the middle of an incredible period of change (information revolution etc). The missing part of this discussion, I suggest, is looking at how we can support and disseminate (and strengthen, I guess – a milder form of change) the practices that are already successful. Continuity and change, to borrow a cheeky political term. Everyone seems so fixated on change that they forget that not everything is terrible. I’ll certainly be keeping an eye out for this in the literature as I go.
At this stage of looking at the matter of professional staff and academic staff in Higher Education, I feel that I’m somewhat flogging a dead horse and that everything that needs to be said has been said. So why am I still looking at this paper? Initially I was concerned that it grated on me because it doesn’t fit with my current narrative: that there are significant cultural factors in universities that make it unnecessarily difficult for professional staff – particularly those in education support roles – to be heard when it comes to discussing teaching and learning.
If this were the case, I’d clearly not be doing my best work as a scholar – open to new information and willing to reconsider my world view in the face of it. Having looked over the paper a few times now, though, I have to say that I think it’s just not that great a piece of research. A number of assertions are made that simply aren’t supported by the evidence presented, and some of the reasoning seems specious. Events from four years prior to the publication date are referred to in the future tense, but there is no discussion of whether they happened or what the consequences were.
Assuming that this is poor research – or perhaps poor analysis – it makes me happy that I’ve reached a point where I can identify bad work, but also a little concerned that I’m wrong or missing something, because this was still published in a peer-reviewed journal that I’ve found a lot of good work in previously. (Then again, I assume that most journals have their own favoured perspectives, and maybe this was well aligned with it.) I searched in vain to find other writing by the author, but she appears to be a ghost, with no publications or notable online presence since the paper came out.
In a nutshell: based on an anonymous online survey of 29% of all staff – academic and professional – at her institution, which included questions about demographics, perceptions of the nature of their roles, the ‘divide’, and the value of different types of staff in relation to strategic priorities, the author concludes that there is minimal dissension between academic and “allied” staff, and that most of what little there is is felt by the allied staff.
Now it’s entirely reasonable that this may well be the case, but there are a few elements of the paper that seem to undermine the author’s argument. Wohlmuther asks survey participants about their perceptions of a divide but doesn’t dig directly into attitudes towards other kinds of staff, which McInnis (1998), Dobson (2000) and Szekeres (2004) all identified as central factors. She looks at perceptions of the contributions of academic and allied staff members to the strategic goals of the organisation, which obliquely explores their ‘value’ within the organisation, but it seems limited. Given the ambiguous value of some higher-level strategic goals (Winslett, 2016), this would seem to tell an incomplete story.
The greatest weakness of the paper to my mind is that ‘allied’ and ‘academic’ work roles are unclear.
Survey respondents were asked what percentage of their time they spent on allied work and what percentage of their time they should spend on allied work. The term ‘allied work’ was not defined. It was left to the respondent to interpret what they meant by allied work (p.330)
With no further examination of the responses via focus groups or interviews, this alone (to me anyway) seems to make the findings murky.
She found that only 29% of staff – all staff? that’s unclear – felt that there was “good understanding and respect for the significance of each others roles and all staff work well together” (p.331) across the institute; however, she doesn’t take this to be an indicator of division.
Looking over the paper again, these are probably my main quibbles, and perhaps they aren’t so dramatic. This tells me that I still have a way to go before I can truly ‘read’ a paper properly, but I’m on the way.
I also read another paper – yes, I’m really trying to move on – about the professional/academic divide, this time about research into it at a particular institute in NZ. I’m not sure whether it’s a bad paper or I just disagree with the findings, but I’m almost sure it’s just bad. There’ll be more on this soon. I note that the author doesn’t appear to have written any other papers, and that one was eight years ago.
There have been a few big work things relating to the governance of TEL systems that I’ve been working in which I think will inform my research and I’m also cobbling together a gamified approach to academic staff PD that I think should be fun. I just really hope that people play. If I can get 4 teams of 2, I’ll consider it a win. More on this soon too – I’m calling it STELLAR – Scholarship of Technology Enhanced Learning, Leadership And Research, which is a tortured but valid acronym.