Category Archives: assessment

Thoughts on: Reflecting or Acting? Reflective practice and continuing professional development in UK Higher Education (Clegg, Tan and Saeidi, 2002)

Clearly one of the key ingredients in enhancing teaching practice is teacher professional development, and a vital element of deriving meaning from this is reflective practice.

It is at this point, however, that we need to be cautious of the evangelisers of reflective practice as a global solution. “Reflecting or Acting? Reflective Practice and Continuing Professional Development in UK Higher Education” by Sue Clegg, Jon Tan and Saeideh Saeidi (2002) takes a methodical look at the use of reflection and notes that the then-current uses of reflective practice in CPD (I’m not sure how much they have evolved since) aren’t suited to all learners and need to be anchored in actions taken if they are to be particularly meaningful.

Reflective practice is valued for acknowledging “the importance of artistry in teaching” (p.3), which seems even more important in 2016 than it was in 2002 with the rise of big data and analytics in education sometimes seeming determined to quantify and KPI-ify every single facet of teaching and learning. (Can you tell that I’m more of a qual than a quant?)

Clegg et al investigated the use and value of reflective practice amongst academic staff in accredited CPD between 1995 and 1998. In broad terms (spoiler alert), they tied it to four types of practices/behaviours that reflected the learning preferences and teaching circumstances of the teachers. These preferences – either for ‘writerly’ reflection or not – and the circumstances (which affected their ability to act on new teaching knowledge) had a significant part to play in how valuable reflection was to them.

The ‘action’ part is at the core of the question that Clegg et al are pursuing. They draw on Tomlinson (1999) in assuming that “the relationship between reflection and action is transparent with reflection-on-action leading to improvement and change” (p.4). This idea has been of interest to me recently because I’ve been involved with the HEA fellowship scheme at my university, which appears to have a different focus, seemingly sans action. (I’ll discuss this further in future posts, as engaging Fellows seems as though it is going to be an important part of my ongoing quest/research.)

As for the learning preference side of the equation, one of the simultaneous strengths and failings of the widely followed reflective practice approach is the emphasis on a very ‘writerly’ style of reflection. Here the paper refers to Bleakley (2000), who has “argued for greater attention to the form of writing and a greater self-awareness of literary accomplishments of narrating and confessional.” The authors note, however, that “our data suggested that some practitioners fail to write or only write as a form of ex post facto justification for accreditation purposes”. Based on the feedback from some of the participants who struggled with the writing element of the task, this can be linked in part to the disciplinary orientation of the learners (i.e. quant vs qual backgrounds) and in some cases to gender-role perceptions – “the feminine reflective side as opposed to the more active masculine doing side of practice” (p.18).

These key factors allowed the authors to sort participants into four groups, based on their practices.

  • Immediate action – participants put new ideas into practice directly after the CPD workshops (and before reflection) (more often novice practitioners)
  • Immediate reflection – participants reflected on their own practices directly after CPD workshops (more often experienced practitioners) – they also found less value in the workshops in terms of new knowledge
  • Deferred action – some participants were unable to act on knowledge gained in workshops due to organisational/time constraints (this limited their ability to reflect on the impact of new knowledge on their new actions/practices)
  • Deferred reflection – largely participants that struggled to engage with the reflection activity in its current format. Many only did it for accreditation purposes so saw little benefit in it.

Clegg et al take pains to emphasise that their research is about starting a conversation about the interrelationship between action and reflection and the need to maintain this link. They don’t draw any other conclusions but I think that even by simply looking at on-the-ground interaction with reflective practice, they have given us something to think about.

Reading this paper sparked a few random ideas for me:

  • Perhaps design thinking might offer a way to bridge the gap between the ‘teaching as a craft’ and ‘teaching as an empirical science with hard data’ viewpoints, by applying a more deliberate and structured way of thinking about pedagogy and course design.
  • Are there ways that we can foster writing (and some reflection) as a part of every day ongoing CPD for academics? (Without it being seen as a burden? There probably needs to be a goal/outcome/reward that it leads to)
  • Decoupling reflection from action – particularly when action comes in the form of making improvements to practice – gives people less to reflect on and might lead to too much navel-gazing.
  • A large part of the work being done on reflective practice by one of my colleagues is focusing on the impact that it has on teacher self-efficacy. Tying it to professional recognition boosts confidence which is valuable but is there a risk that this can in turn lead to complacency or even over-estimation of one’s competence?
  • My personal philosophy when it comes to theory and practice is that none will ever hold all of the answers for all of the contexts. I believe that equipping ourselves with a toolbox of theories and practices that can be applied when needed is a more sustainable approach but I’m not sure how to describe this – one term that I’ve considered is multifocal – does this seem valid?
  • One concern that I have about this study is the large number of contextual factors that it tries to accommodate. These include: “how participants understood their activity including reflective practice, their motivations for joining the course, how they made sense of their decisions to complete or not complete, and whether they thought of this as a conscious decision” (p.7). On top of this there was the level at which the CPD was being conducted (novice teachers vs supervisors), as well as disciplinary and gender differences and learning preferences. Maybe it’s enough to acknowledge these but it seems like a lot of variables.
  • Reflection shared with peers seems more valuable than simply submitted to assessors.
  • Even when reflective writing is a new, ‘out of character’ approach, it can be seen as valuable, even though it can take learners time to ease into it. Supporting some warm-up exercises seems like it would be important in this case.
  • It’s worth taking a harder look at exactly what forms meaningful reflections might take – is there just one ‘writerly’ way or should we support a broader range of forms of expression?
    Audio? Video? Dank memes?
    “Virtually all the descriptions of keeping a journal or gather materials together suggested that they somehow felt they had not done it properly – qualifying their descriptions in terms of things being just scrappy notes, or jottings, or disorganised files, or annotated e-mail collections. Such descriptions suggest that participants had an ideal-typical method of what reflective practice should look like. While the overt message from both courses was that there was no one format, it appears that despite that, the tacit or underlying messages surrounding the idea of reflective practice is that there is a proper way of writing and that it constitutes a Foucauldian discipline with its own rules” (p.16-17)
  • Good reflection benefits from a modest mindset: “one sort of ethos of the course is it requires you to, I don’t know, be a bit humble. It requires you to take a step back and say perhaps I’m not doing things right or am I getting things right, and throw some doubt on your mastery…” (p.17)
  • This is more of a bigger picture question for my broader research – To what extent does the disciplinary background shape the success (or orientation) of uni executives in strategic thinking – qual (humanities) vs quant (STEM)?

The Try-a-Tool-a-Week Challenge: Week 1 – Socrative (vs Kahoot)

Kelly Walsh over at EmergingEdTech seems like quite the Ed Tech advocate and he has started an ongoing series of posts for the next three months focusing on a range of tools.

He has asked people to try the tool and post some comments on his blog. So, what the hell, I’m happy to see where this might go. First up is a basic classroom quiz tool called Socrative.

At first glance, this reminds me of Kahoot, which I’ve looked at before. Socrative appears to use a more serious design style, eschewing the bright colours and shapes of Kahoot for more muted tones. Overall, the Socrative interface is a little more user friendly for both the student and teacher, with a clean, simple and logical design.

Creating a basic quiz in Socrative was a very straightforward process and it was nice to be able to create all of the questions on the same page. I did encounter some problems with creating a multiple-choice question – for some reason it took repeated clicks (and some swearing) in the answer field before I was able to add answers. Editing the name of the quiz wasn’t intuitive either but overall, the process was simpler than with Kahoot.

Running the quiz went reasonably well; however, I did encounter a number of bugs related to network connectivity (3G) and an initially broken version of the quiz that seemed to crash the entire system. (I had inadvertently added a true/false question twice, once with no correct answer identified. Clumsy perhaps on my part, but I would kind of expect this to be picked up by the tool itself.)

I liked the fact that the student sees both the questions and the answers on their phone and that the feedback appears there as well. Socrative gives three options for running the quiz – student paced with immediate feedback (correct answers shown on the device upon answering), student paced with student navigation (the student works through all questions and clicks submit at the end) and teacher paced, where the teacher takes students through question by question. In the final two options, feedback appears only on the teacher’s computer (presumably connected to a data projector / smart board).
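
To keep those three modes straight, here is a minimal sketch in Python – the class and field names are my own shorthand for the behaviour described above, not anything from Socrative itself:

```python
from dataclasses import dataclass

# A quick summary of Socrative's three quiz delivery modes as described
# above. The names below are my own shorthand, not Socrative's.

@dataclass
class DeliveryMode:
    name: str
    paced_by: str           # who controls movement through the questions
    feedback_shown_on: str  # where the correct answers appear

MODES = [
    DeliveryMode("Student paced - immediate feedback",
                 "student", "student's device, as each question is answered"),
    DeliveryMode("Student paced - student navigation",
                 "student", "teacher's computer, after the student submits"),
    DeliveryMode("Teacher paced",
                 "teacher", "teacher's computer, question by question"),
]

for mode in MODES:
    print(f"{mode.name}: paced by {mode.paced_by}; feedback on {mode.feedback_shown_on}")
```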

Overall, I’d rate the usability, look and feel of Socrative above Kahoot, but the connectivity issues are a concern, and I’d say that Kahoot offers a slightly more fun experience for learners by playing up the gamified side, with timers and scoring.


A hierarchy of digital badges – Level 1 Accredited

Part of me thinks it’s a really dumb idea to try to identify a hierarchy for digital badges, and in particular to try to name them, because the people out there who don’t get badges are often the same kinds of people who get fixated on names for things and let the names blind them to the function or purpose of the thing. (This is why we start getting things called micro-credentials and nano-degrees. Personally I would’ve called them chazzwozzers but that’s just me.)

Maybe hierarchy isn’t even the right term – taxonomy could work just as well but I do actually believe that some badges have greater value than others – determined by the depth and rigour of their metadata and their credibility with an audience. (Which isn’t to say that some educators mightn’t find classroom/gamified badges far more valuable in their own practice).

In the discussions that I’ve seen of digital badges, advocates tend to focus on the kinds of badges that suit their own needs. Quite understandable of course but it does feel as though this might be slowing down progress by setting up distracting side-debates about what a valid badge even is.

Here is a quick overview of the badge types that I have come across so far. If I’ve missed something, please let me know.

Level 1 – Accredited 

Accredited badges recognise the attainment of specific skills and/or knowledge and carry the highest level of accountability. The required elements of these skills are identified in fine detail, multiple auditable assessments are conducted (and ideally reviewed), and supporting evidence of the badge recipient’s skill/knowledge is readily available.

I work in the Vocational Education and Training (VET) sector in Australia, where every qualification offered is built on a competency-based education framework. Each qualification comprises at least 8 different Units of Competency, which are generally broken down into 4 or 5 elements that describe highly specific job skills.

VET is a national system meaning that a person undertaking a Certificate Level 4 in Hairdressing is required to demonstrate the same competence in a specific set of skills anywhere in the country. The system is very tightly regulated and the standards for evidence of competence are high. Obviously, other education sectors have similarly high standards attached to their formal qualifications.

Tying the attainment of a Level 1 badge to an existing accredited education/training program seems like a no-brainer really. The question of trust in the badge is addressed by incorporating the rigour applied to the attainment of the existing qualification and having a very clearly defined set of achieved skills/knowledge offers the badge reader clarity about the badge earner’s abilities.

For example, a badge for ‘Apply graduated haircut structures’ could easily be awarded to a hairdressing apprentice on completion of that Unit of Competency in the Certificate III in Hairdressing. It would include the full details of the Unit of Competency in the badge metadata, which could also include a link to evidence (photos/video/teacher reports) in the learner’s ePortfolio.
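
To make that concrete, here is a rough sketch of what such a badge’s metadata might carry, loosely modelled on the Open Badges assertion structure – every URL, address and value below is invented for illustration:

```python
import json

# A rough sketch of the metadata a Level 1 (Accredited) badge might carry,
# loosely following the Open Badges assertion structure. Every URL, address
# and value here is invented for illustration.

badge_assertion = {
    "recipient": "apprentice@example.edu.au",  # hypothetical badge earner
    "issuedOn": "2016-05-20",
    "badge": {
        "name": "Apply graduated haircut structures",
        "description": "Awarded on completion of the corresponding Unit of "
                       "Competency in the Certificate III in Hairdressing.",
        # Full Unit of Competency details, so the badge reader knows exactly
        # what was assessed (placeholder URL):
        "criteria": "https://training.example.gov.au/units/apply-graduated-haircut-structures",
        "issuer": "https://rto.example.edu.au",  # the registered training organisation
    },
    # Link to supporting evidence in the learner's ePortfolio:
    "evidence": "https://eportfolio.example.edu.au/apprentice/haircuts",
}

print(json.dumps(badge_assertion, indent=2))
```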

I use a VET example because that’s what I know best (and because it seems a natural fit for badges) but obviously, any unit in a formal qualification would work just as well.

Next post, I’ll look at Level 2 – Work skills.


ePortfolio grading rubric

Here’s a useful assessment rubric created by the University of Wisconsin-Stout that can be applied to ePortfolios. I would consider adding links within the criteria to exemplars of best practice, but I think it provides a solid basis for evaluating student work.

https://www2.uwstout.edu/content/profdev/rubrics/eportfoliorubric.html

[Screenshot of the ePortfolio assessment rubric]

Final thoughts on DDLR / DDeLR

It feels like I’ve been banging on about this subject forever now but with assessments now finalised, it seems like a good time for a final wrap up.

In broad terms, I was a little disappointed with my students. It might have been a bad time of year to run this subject, with its demanding workload, but the majority of them seem to have only put in the absolute barest effort needed to pass. Assessment instructions which I thought were pretty clear weren’t followed and most of the reflections lacked any great insight. I had to ask many of them to rework and resubmit their assessments just to meet the minimum requirements.

What this does make me ask is whether the fault lies with my students or with me.

As I haven’t taught formal classes for more than a decade, there are a lot of things that I haven’t had to deal with in teaching an ongoing subject with rigorous formal assessment. I also have a tendency at times to over-complicate things because it just seems like it makes them better. This probably also extends to my communication with my students and my expectations of them.

Fortunately, I am still keen to try this again.

Even during the marking process, as I had to walk away from the computer and swear at the walls, I was constantly reshaping the course structure, the assessments and the class activities in my mind to help avoid some of the issues that were arising. The fact that a handful of the “good” students were able to understand and follow my instructions suggests that I’m on the right track at least and am not entirely to blame, but the fact that more than a few got things quite wrong does tell me that there is more work to be done.

I need to make it clearer that when students are creating draft learning resources, they actually need to be resources – things, objects – rather than broad and loose activity plans for a class. I need to explain clearly that the final learning resources should be the same as the draft learning resources, but improved based on testing and feedback. To be honest, these things seemed so self-evident to me that I couldn’t conceive of anyone not getting it, but there we are.

I tried to put into practice a number of ideas that I’ve encountered in the education design community about getting students more involved in designing parts of their own assessments but this really just confused more people than it helped. (Which was a shame as I do believe that it is a valid and valuable approach)

I tried to give my learners freedom to follow their particular learning needs and interests but for the most part this just gave them the opportunity to follow the path of least resistance and allowed for some fairly lazy work. I also should’ve factored into my thinking that the first week of a class is often plagued by technical (logins not working) and administrative hassles, and made allowances by not expecting too much work to be achieved in that week. (That said, we have a strong need to demonstrate engagement in class activities to receive funding for students that later drop out, and I was certainly able to prove that.)

I think next time around there will need to be a little less freedom, a bit more structure and lot more clarity and simplicity.

On the whole I am happy that I have managed to get these teachers doing things they haven’t done before and I think they have developed useful skills and knowledge. I’d just like to do more.

Designing DDLR – More work on assessment

Now the focus of this project on designing the Design & Develop Learning Resources course is on pinning down the assessments. J’s assessments for DDLR 3 & 4 seem strong but I just want to see whether it’s possible to streamline them slightly – largely to allow learners to knock over the analysis (and design) components quickly. (Given that they should presumably have a decent idea of what their students are already like and already design resources with this in mind.)

After a couple of hours of looking over this, I’m wondering whether it mightn’t have been better to try to write up my own assessment ideas first and then look at J’s for additional inspiration. It’s quite difficult to look past the solid work that has already been done. I’m still mindful of the fact that the amount of documenting and reporting seems a little high and am trying to find ways to reduce this while still ensuring that the learner addresses all of the elements of competency.

One of the bigger hurdles I face with this combined subject is that the elements of the units of competency are similar but not the same. For the analysis and design sections, they match up fairly well, with only mild changes in phrasing, but the development, implementation and evaluation components start to differ more significantly. Broadly speaking, both of these units of competency appear to be targeted more at freelance education designers than practising teachers – the emphasis on talking to the client and checking designs with the client (when the teacher would clearly be their own client) creates some potentially unnecessary busywork for the teacher wanting to be deemed competent here.

I’ve tried to address the differences between the elements of competency by clustering them with loosely matching ones from the other unit of competency in this fairly scrappy looking document. I’ve also highlighted phrases that look more like deliverable items.

[Document listing elements of competency]

This made it much easier to look over the existing assessment documents and resources to firstly check that all of the elements were addressed and secondly to feel confident that I am sufficiently across what is required in this subject.

Broadly speaking, the existing assessment items cover these elements of competency pretty well; I only needed to add a few extra questions to the design document template to address some aspects that learners might overlook.

These questions are:

  • How does the learning resource address the element or unit of competency?
  • What equipment, time and materials will you need to develop your learning resource?
  • Where will you source content for your learning resource?
  • Who can/will you contact for support in developing your resource?
  • How will you review your work as it progresses?
  • Describe the type of learning design that your learning resource uses
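
For what it’s worth, here are those questions again as a trivial Python sketch – just the list above, printed as a numbered checklist that could be dropped into other design document templates:

```python
# The extra design document questions from the list above, collected as a
# simple checklist. A trivial sketch: print each question in order so a
# learner can work through them.

DESIGN_DOC_QUESTIONS = [
    "How does the learning resource address the element or unit of competency?",
    "What equipment, time and materials will you need to develop your learning resource?",
    "Where will you source content for your learning resource?",
    "Who can/will you contact for support in developing your resource?",
    "How will you review your work as it progresses?",
    "Describe the type of learning design that your learning resource uses.",
]

for number, question in enumerate(DESIGN_DOC_QUESTIONS, start=1):
    print(f"{number}. {question}")
```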

So as it stands, I think I’ll be largely sticking to the existing assessment plan with only a few minor changes. (Largely because my predecessor knows her stuff, which has been tremendously helpful). I am still keen to find ways to address as much of this assessment as possible in class activities – being mindful of the fact that learners may not make every class and there needs to be a certain amount of flexibility.

Overall though – and clearly the dates will need to be changed – this is what the assessments look like.

[Assessment document]

The next step is to update the subject guide and add my amendments to the existing documents. I do also need to devise a marking guide for the learning resources themselves – there is something appealing in the idea of having the learners create this as one of their draft resources, as the unit of competency document does stretch to define learning resources as including assessment resources too. This seems like a great opportunity to get the learners thinking more critically about what makes a good learning resource.

Designing DDLR & DDeLR – Assessments

Today is all about pinning down the most appropriate types of assessments for this subject. Yesterday I think I got a little caught up in reviewing the principles of good assessment – which was valuable but it might also be better applied to reviewing and refining the ideas that I come up with.

For what it’s worth, these are the notes that I jotted down yesterday that I want to bear in mind with these assessments.

[DDLR assessment ideas – notes]

Looking over the four versions of this subject that my colleague J has run in the last 2 years has been particularly enlightening – even if I’m not entirely clear on some of the directions taken. The course design changed quite substantially between the second and third iterations – from a heavily class-based activity and assessment focus to more of a project based structure. (For convenience I’ll refer to the subjects as DDLR 1, 2, 3 and 4)

DDLR 1 and 2 provide an incredibly rich resource for learning to use eLearn (our Moodle installation) and each week is heavily structured and scaffolded to guide learners through the process of developing their online courses. The various elements of the units of competency are tightly mapped to corresponding activities and assessment tasks – more so in DDLR 2. [Image from the DDLR subject guide]

I have to wonder however whether the course provides too much extra information – given the relatively narrow focus on designing and developing learning resources. Getting teachers (the learner cohort for this subject) to learn about creating quizzes and assignments in Moodle is certainly valuable but are these truly learning resources? This may well be one of the points where my approach to this subject diverges.

The shift in approach in DDLR 3 and DDLR 4 is dramatic. (As far as a diploma level course about designing learning resources might be considered dramatic, at least.) The assessments link far more closely to the units of competency and all save the first one are due at the end of the subject. They are far more formally structured – template based analysis of the target audience/learners, design documents, prototypes and finished learning resources, as well as a reflective journal.

It does concern me slightly that this subject has a markedly lower rate of assessment submission/completion than the two preceding ones. That said, this subject is often taken by teachers more interested in the content than in completing the units of competency, and that may just have been the nature of this particular cohort.

This new assessment approach also seems far more manageable from a teaching/admin perspective than the previous ones, which required constant grading and checking.

My feeling is that this is a more sustainable approach but I will still look for ways to streamline the amount of work that is required to be submitted.

The next step was to map the various elements of competency to assessment items. The elements for both units of competency are written differently enough to need to be considered separately (unfortunately) but they both still broadly sit within the ADDIE (Analyse, Design, Develop, Implement, Evaluate) framework. ADDIE seems like a useful way to structure both the course and the assessments so I have mapped the elements to this. I have also highlighted particular elements that are more indicative of outputs that might be assessed. Working through the analysis process will be quite dry (and could potentially come across as slightly patronising) so finding an engaging approach to this will be important.
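
As a rough sketch of what that mapping looks like – note that the element descriptions below are paraphrased placeholders, not the actual wording of the two units of competency:

```python
# A simplified sketch of mapping elements of competency to ADDIE phases.
# The element descriptions are paraphrased placeholders, not the actual
# wording of the two units of competency.

ADDIE_MAPPING = {
    "Analyse": [
        "DDLR: identify the characteristics and needs of the target learners",
        "DDeLR: confirm the learning context and delivery requirements",
    ],
    "Design": [
        "DDLR: plan the structure and format of the resource",
        "DDeLR: document the design and check it against requirements",
    ],
    "Develop": [
        "DDLR: create the learning resource",
        "DDeLR: develop the e-learning resource and review it with the client",
    ],
    "Implement": [
        "DDLR: trial the resource with learners",
    ],
    "Evaluate": [
        "DDLR: gather feedback and refine the resource",
        "DDeLR: evaluate the resource against the original design",
    ],
}

for phase, elements in ADDIE_MAPPING.items():
    print(phase)
    for element in elements:
        print(f"  - {element}")
```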

[Photo of elements of competency mapped to ADDIE phases]

(I’m also quite keen to bring digital badges into this process somehow, though that’s a lower priority at the moment.)

Finally, I had a few ideas come to me as I worked through this process today that I might just add without further comment.

DDLR / DDeLR ideas

Get the class to design and develop a (print-based?) checklist / questionnaire resource that might be used to address the DDLR 1 and DDeLR 1 UoCs. Get someone else in the class to use it to complete their Analysis phase.

Can I provide a range of options for the forms the assessment/resource pieces might take?

Try to develop a comprehensive checklist that teachers can use on the resources that they produce to raise the overall quality of resources at CIT. (Again, this could be a student-led tool – the benefit of this is that it makes them think much more about what a good resource requires – does this meet any UoCs??)

Convert the print-based Analysis document into a web resource – book tool or checklist maybe? Also possibly fix the print-based one first, starting from a deliberately badly designed, faulty version. (Lets me cover some readability / usability concepts early.)

How much of this subject is leading the learners by the hand? How much is about teaching them how to use eLearn tools?

Could one of the learning resources be about developing something that teaches people how to use a particular eLearn tool???

Need to identify what kinds of resources teachers can make. Good brainstorm activity in week 1.

Think about the difference between creating a learning resource and finding one and adding it to your course. (Still important but tied to the UoC?)

If I give teachers the option to use previously developed resources (authenticity issues??), they should still provide some kind of explanatory document AND/OR edit the resource and discuss what changes they made and why.

Need to consider the relative strengths and weaknesses of the various types of tools.

In-class feedback of learning resources to better support the evaluation and implementation based elements of competency.

One activity (possible assessment) could be for learners to gather information needed to do an analysis from a partner in the group. (and vice versa) Might lead to a more critical examination of what information is being sought. Learner might even provide suggestions for design/development?

Sample resources?


Using Feedback in Moodle for more than student evaluations & Using Padlet

The first in our series of CITFLN TeacherNet Show and Tell sessions went well, with Jo Whitfield sharing some ideas about using the Feedback tool for more than student evaluations, while I presented Padlet, an embeddable interactive wall.

Here are the videos from these presentations:

[Embedded videos from the original post]

Learning design: Why you want to lead with the scenario

http://blog.cathy-moore.com/2013/08/why-you-want-to-put-the-activity-first/

This post by Cathy Moore (and another that I came across not too long ago at Computing Education Blog) struck a chord with me. In essence, they are both saying that learners can benefit from having their skills and knowledge tested right from the beginning of a subject. Whether it involves participating in a scenario or completing some kind of formative assessment, putting this activity up front lets your learners see what they are expected to know, what they don’t currently know and why this is a relevant and worthwhile part of their studies. The odds are pretty good that they will fail the scenario or quiz or whatever the first time around, but as long as we make it clear that this is OK and just a part of learning, the memory of this experience will give context and meaning to everything else that they learn afterwards.

I took this approach, perhaps a little inadvertently, in a digital literacy course that I trialled last year. I wanted to test the value of a particular quiz
