Speculation is rampant that a major upgrade to GPT, the large language model behind ChatGPT, will be released this week, following a comment at an event for partners and prospective clients of Microsoft Germany last week. No firm details are confirmed, but it is believed that the update will increase the number of ‘parameters’ used by the tool from roughly 175 billion to somewhere between 10 and 100 trillion, and that it may add multi-modal inputs and outputs (text, images, audio and possibly video). How will this impact learning and teaching? Outputs will probably be better, but it shouldn’t fundamentally alter the changes that are already occurring. The associated discussion on Reddit adds some useful surrounding detail, including the fact that Microsoft does have an AI-focused event scheduled for Thursday.
Coming back to Earth a little, student perspectives on the responsible use of GenAI tools like ChatGPT have been thin on the ground in the wider discussion. This Twitter thread from Amanda White (UTS) captures the process she worked through with her students in deciding what usage is reasonable. Additionally, the Educational Innovation team at Sydney Uni recently held a couple of panel discussions with students covering their perspectives, and the recordings are quite illuminating. While a certain type of student commonly appears in these sessions, it was interesting to note that they didn’t want to let the tools weaken their own writing skills.
While learning content and activities may be vital elements in good online learning courses, the visual and structural design (the User Experience or UX) has a massive impact on their efficacy. This valuable research from Adams, Miller-Lewis and Tieman of UniSA and CQU compared the ability of Learning Designers, healthcare professionals and end-users to identify UX problems in resources based on previously identified end-user errors. They observed that Learning Designers correctly identified nearly three times as many design issues as the other evaluators, highlighting their value in assisting the development of these resources.
There is a cohort in any discussion about the AIpocalypse in Higher Ed whose first question is some variation on ‘how can we detect AI-generated writing?’ Given the change that is needed in teaching practice to respond to these tools, it is understandable that a first response might be in the ‘shut it down’ vein. As with most things in the ed tech space, though, there is no silver bullet, as this set of basic tests conducted by Armin Alimardani (UoW) and Emma Jane (UNSW) indicates. Detecting AI content is unlikely to ever be reliable, and clever users will usually be able to find a workaround.
This paper from Elaine Huber and a cadre of other heavy hitters in business education at USyd and UTS describes some very thoughtful work to develop an overarching framework for online assessment that holistically addresses learner, educator, institutional and disciplinary needs. While different discipline areas clearly have their own needs, the big-picture takeaways from this work should be applicable to most educators, ranging across (but not limited to) authenticity, scale, quality feedback, resourcing, and accreditation.
It has been interesting to see how all the GenAI talk recently has sucked the air out of a range of other important discussions in the technology enhanced learning space. I am not unhappy that the torrent of publications about emergency remote teaching has slowed to a trickle, but things have also been quiet in the micro-credentialling space. Happily, this paper covers some rich work underway in Ireland, proposing some sensible models and describing some practical examples.
I group these works together – a doctoral thesis from Abilene Christian University and an article from the Australasian Journal of Educational Technology – because they share some interesting overlaps from rather different perspectives. Both relate broadly to the effective use of learning technologies by educators and the growing contribution that ‘Third Space’ workers in Higher Ed can and should make to this. The Australians (Tay et al.) note concerns about centralisation, surveillance, institutional homogenisation, responsibility and efficiency in the use and support of ed tech. Both they and January flag a need for greater awareness of the support available from learning designers (and education technologists), as well as institutional supports for collaboration between them and educators.
As semester kicks into gear, the perennial cry of students about the high price of textbooks can once again be heard throughout the land. Happily, institutional librarians are at least able to reduce the overall burden of supplementary readings through the use of digital reading list systems. This article from Kumara et al. explores current attitudes toward these platforms, notes different levels of use across discipline areas, and highlights the need to improve ease of use.
Good teaching has always been challenging for individual practitioners, and as technology and pedagogy grow more sophisticated, this is ever more the case. Neil Mosley discusses the growth of the specialist advisors in Learning Design who are needed to support the evolution of teaching as a design process. Entry paths into this field are still poorly defined, with a smattering of post-grad qualifications emerging but nothing at the undergraduate level yet.
Jenny Pesina reflects on the nature of working relationships between learning designers (and peers) and educators in Higher Education, considering some of the organisational structures that influence how these people can contribute to better learning and teaching. The way that relationships vary based on central vs faculty units and what might be done to strengthen bonds is noteworthy.
This is American news, but these broad policy changes do tend to flow on eventually. In a nutshell, it sees third-party providers of services to universities that are tied to recruitment and delivery of online programs facing greater accountability for their activities. In Australia, this would include Online Program Managers (OPMs) like OES and Keypath, who operate online-only programs in many Australian universities.
Two very interesting-looking AI webinars are on this Wednesday: CRADLE/TEQSA continue their great series of deep dives, with Margaret Bearman, Rola Ajjawi, Lucinda McKnight (Deakin), Simon Buckingham Shum (UTS), and Sarah Howard (UoW) considering educator responses, and the Educational Innovation team at USyd creates much-needed space for the student voice in this discussion. (The recording of last week’s TELedvisors webinar – the Two AIs – is now available on YouTube as well.)
With the myriad changes looming in the ed tech space, this insightful piece of crystal-ball gazing from Michael Sankey (CDU) and Stephen Marshall (VUW) about the current and likely future states of the LMS is well worth your time. The authors follow the steady progress of the LMS from single source of learning to the heart of a complex ed tech ecosystem. Along the way they raise interesting ideas about whether the future may look more like MS Teams or Slack (I’m unconvinced for now) and touch on necessary changes to teaching practice wrought by AI that these systems will need to accommodate.
I stumbled upon this 2018 article recently and, with the discussion of ‘cognitive offloading’ and the need for new approaches to assessment that is occurring in the AI space, it seems like something worth revisiting. Kirschner et al. expand previous work on cognitive load in learners to collaborative learning activities, seeking to understand why some collaborative activities succeed while others fail. Broadly, they find that the transactional nature of collaborative learning and group dynamics should be considered in designing these kinds of tasks.
Privacy is often discussed more in principle than in practice in education, but it is worth being aware that the Australian government is currently reviewing the 1988 Privacy Act, and institutions and individual educators will need to consider how they treat student data once the work is done. A closer alignment with the very user-centred EU GDPR model, which gives individuals the right to be deleted from systems, appears likely.
Another day, another test of GenAI tools to see whether they could technically qualify as professional practitioners. This time we see ChatGPT making it into the 99th percentile for macroeconomists via the US Test of Understanding in College Economics.
Following on from the wildly popular AI (ChatGPT) Future webinar in early Feb, the TELedvisors Network presents another in the series, with a stronger focus on assessment and academic integrity questions. Alex Sims from Uni Auckland Business School explores these key issues in the first half, with an open discussion in the second.
One of the ‘tells’ that people have been noticing in ChatGPT (or ChattieG as Anitra Nottingham calls it) is a tendency to make up citations when you ask for references. Many of these seem plausible, with known authors or journals, and they are often correctly formatted, but on investigation, they are simply untrue.
I put a call out for examples of this on Twitter and you did not disappoint.
A few more people sent me examples by DM and email that were equally entertaining
From Michael Larkin:
Böttiger, B. W., Böhrer, H., Bach, A., Brokmann, J. C., & Motsch, J. (2018). Fibrinolysis in acute myocardial infarction: A review of the current status and future perspectives. International Journal of Cardiology, 254, 1-7. doi: 10.1016/j.ijcard.2017.10.051
O’Gara, P. T., & Antman, E. M. (2016). ST-Elevation Myocardial Infarction: Management. In J. Loscalzo & J. T. Higgings (Eds.), The Brigham Intensive Review of Internal Medicine (pp. 319-328). Oxford University Press.
The authors and journals are correct, but the articles don’t exist and the numbering is made up.
Probably my favourite though came from Brenna Clarke Gray, who coincidentally was in the middle of compiling a presentation and had asked ChattieG about her own publications – figuring she knows her own work the best. Like a gaslighting alpha male, it made a lot of assertions about her. They were not remotely true.
She went on to ask it five times about work that she had written and it kept making things up. Each time she pointed this out and it apologised and produced a new list. Several times it simply repeated the previous list.
Finally, she asked ChattieG to summarise one of her made up works – it continued to double down.
I asked a few times for examples where ChatGPT actually found and shared real papers. Crickets. So I gave it a try myself.
I used the prompt “Annotated reading list of confirmed academic articles relating to relationships between educators, learning designers, education technologists, academic developers and leaders (and relevant synonyms of these roles) in higher education”
I have to admit, it did reasonably well. (It wasn’t quite on target with my question but certainly in the neighbourhood.) The first paper was out by 1 year, the next two were fine, the Mullen paper (as far as I can see) doesn’t exist, the Brown & Warschauer paper weirdly gives D Brown the initials MW but is otherwise correct, the Lajoie (2008) paper gets half the title, the year and the author correct but the journal wrong and the Siemens & Tittenberger book also appears to be correct. (I can’t speak for the annotations).
So there are ways to get correct citations but checking seems vital.
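If you want to automate some of that checking, one option – my own suggestion, not something ChatGPT or the fact-checking tools above provide – is to query the public Crossref REST API for the closest real record to each reference and compare it by eye. A minimal sketch follows; the example reference and the rule of treating a non-match as suspect are illustrative assumptions.

import requests

def crossref_lookup(reference):
    """Return the top Crossref match for a free-text reference, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

# A reference pasted straight from a ChatGPT answer (illustrative example)
reference = (
    "Garrison, Anderson & Archer (2000). Critical inquiry in a text-based "
    "environment: Computer conferencing in higher education."
)
match = crossref_lookup(reference)
if match:
    title = match.get("title", ["(no title)"])[0]
    journal = (match.get("container-title") or ["(no journal)"])[0]
    print("Closest real record:", title, "|", journal)
else:
    print("No plausible match found - treat the citation as suspect.")

A fuzzy comparison of titles, authors and years on top of this would also catch the near-miss cases like the Lajoie paper mentioned above.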
Here is the output, for what it is worth.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.
This article explores the role of computer conferencing in facilitating critical inquiry in higher education, and highlights the importance of collaborative learning environments and effective facilitation by educators and learning designers.
Beetham, H., & Sharpe, R. (2013). Rethinking pedagogy for a digital age: Designing for 21st century learning. Routledge.
This book presents a framework for designing effective learning experiences in the digital age, and emphasizes the need for collaboration and co-creation among educators, learning designers, and academic leaders in higher education.
Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60-70.
This article introduces the concept of technological pedagogical content knowledge (TPACK) and argues for the importance of developing this knowledge among educators and learning designers in order to effectively integrate technology into teaching and learning.
Mullen, C. A., & Wedwick, L. (2008). Building a network of expert teachers: The role of teacher leaders in technology integration. TechTrends, 52(4), 34-37.
This article discusses the importance of teacher leaders in promoting technology integration in higher education, and highlights the role of academic developers in supporting the development of these leaders.
Brown, M. W., & Warschauer, M. (2006). From the university to the elementary classroom: Students’ experiences in learning to integrate technology in instruction. Journal of Technology and Teacher Education, 14(3), 599-621.
This article explores the challenges and opportunities of integrating technology into instruction in higher education, and emphasizes the importance of collaboration among educators, learning designers, and academic developers in preparing students for this task.
Dron, J. (2007). Control and constraint in e-learning: Choosing when to choose. Hershey, PA: Information Science Publishing.
This book explores the role of control and constraint in e-learning, and argues for a more collaborative and flexible approach to learning design that empowers educators and learning designers to make informed choices about how to structure and facilitate learning experiences.
Lajoie, S. P. (2008). Metacognition, self-regulation, and self-regulated learning: Research recommendations. Educational Technology Research and Development, 56(1), 47-64.
This article discusses the importance of metacognition and self-regulated learning in higher education, and highlights the role of educators, learning designers, and academic developers in promoting these skills among students.
Siemens, G., & Tittenberger, P. (2009). Handbook of emerging technologies for learning. University of Manitoba.
This handbook provides an overview of emerging technologies for learning and emphasizes the need for collaboration and co-creation among educators, learning designers, and academic leaders in exploring and implementing these technologies in higher education.
Something that has become more and more apparent as the discourse around the generative AI revolution continues is the virtual absence of student voices. We are making a great many assumptions about what they think and what they will do, but are they valid? Emily Pitts Donahoe from the English department at the University of Notre Dame (US) runs through a recent discussion she had with her first-year writing students, touching on academic integrity and the quality of AI outputs.
Looking at teaching from another angle, instructional (learning) designers sometimes use ‘personas’ in a design process to centre their work around learner needs. A persona might include demographic information alongside personality traits and learner motivations. Rebecca Hogue explores using ChatGPT to have (basic) conversations with personas that might be interested in shifting careers to the instructional (learning) design discipline. As with many AI outputs, the results can be a little basic, but they also offer a convenient framework to refine.
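For anyone curious what this looks like under the hood, here is a minimal sketch of how a persona conversation might be set up programmatically. It assumes the legacy (pre-1.0) openai Python library that was current at the time of writing, and the ‘Sam’ persona, model choice and opening question are my own illustrative inventions rather than Hogue’s approach.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A hypothetical career-changer persona, seeded via the system message
persona = (
    "You are 'Sam', a 38-year-old high school teacher considering a career "
    "change into instructional (learning) design. You are curious but "
    "time-poor and sceptical about the value of another qualification."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What is holding you back from making the switch?"},
    ],
)
print(response["choices"][0]["message"]["content"])

The same structure works in the regular ChatGPT interface by simply pasting the persona description as your first message and asking questions from there.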
Without realising it, I appear to have hit upon a student centred theme this week. This article summarises some interesting discussions relating to what students want and need from the various education technologies in their institutions – and particularly the way their educators choose them and use them. Key ideas include academic freedom vs consistency, support & training, and cognitive overload from too many new tools.
Catalyst is a major host and developer for the Moodle Learning Management System (LMS), so it is always worth being mindful of your source with these posts. As someone who has worked with Moodle in a number of institutions over the years, though, the key ideas here certainly resonated. What do you do when a perception emerges that your institutional system seems ‘tired’ or ‘dated’? Is this valid, or are there other factors influencing it? How much can and will recent updates to Moodle change this?
Developing good learning and teaching practices in educators in Higher Ed has its challenges. This short interactive game developed by the Wharton Business School at the University of Pennsylvania does a lot of heavy lifting in showcasing some key concepts as players work through a fun 10 min teaching scenario.
First published in Campus Morning Mail 7th Feb, 2023
Hello colleagues – I must assure you that I am aware that there is a wider technology enhanced learning universe beyond AI/ChatGPT but at the moment it is hard to find anything else.
There is a concept in Internet research referred to as participation inequality or the 90/9/1 rule. This essentially states that 90% of people involved in an online community don’t participate much, preferring to ‘lurk’. 9% contribute from time to time and 1% does the majority of the talking. This paper from Zhu and Dawson explores the differences in informal learning outcomes between members of these groups in popular education communities on Reddit. While ‘lurkers’ and posters report that they learn from the community at roughly equal levels, the authors note that posters apply and analyse what they have learned more frequently.
As the discussion about the practical use of generative AI tools moves forward, the importance of designing good prompts to get the most from the technology becomes increasingly apparent. This in-depth post from Philippa Hardman describes her process for designing a rich prompt to generate a learning activity centred around the educational strategy of “Undoing”. She explains seven key elements of her prompts and offers practical suggestions.
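As a rough illustration of the general principle – and emphatically not Hardman’s actual seven elements – a learning-activity prompt can be assembled from named slots rather than written ad hoc. Every slot name and value in this sketch is a placeholder of my own.

# Illustrative only: these slots are placeholders, not Hardman's published elements.
PROMPT_TEMPLATE = """\
Role: {role}
Audience: {audience}
Intended learning outcome: {outcome}
Pedagogical strategy: {strategy}
Activity format: {activity_format}
Constraints: {constraints}
Output format: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    role="Experienced university learning designer",
    audience="First-year undergraduate business students",
    outcome="Identify and correct a common misconception about supply and demand",
    strategy="'Undoing' - surface the misconception, then guide students to revise it",
    activity_format="20-minute small-group task with a short whole-class debrief",
    constraints="No specialist software; must work in a 200-student lecture",
    output_format="Step-by-step facilitator guide in plain language",
)
print(prompt)  # paste the result into ChatGPT (or send it via the API)

The value of a template like this is less the output itself than the discipline of being explicit about each element before asking the model for anything.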
I haven’t used this service and in no way endorse it – I simply present it as an interesting example of the ways that ed tech companies are starting to monetise this space. I guess it is Prompts As A Service. From what I can make out, it is essentially a set of prompt templates tied to specific learning and teaching needs, ranging from generating a title for your new course to generating a presentation task to assess learning.
Sometimes when there is a world of content out there about a new topic, the easiest thing to do is to listen to some experts – as much as anyone can be an expert currently – talk through the issues. The first of these is one that I organised last week and the second features our panellist Anna Mills and the great Maha Bali.
This is just a quick update on this webinar that I mentioned last week. Due to higher than expected interest, we have moved it to a Zoom webinar platform kindly provided by Monash. If you previously registered, you should have received a new invitation with the updated details. The session will be recorded and a link to the recording provided here in the near future.
In news that should surprise nobody, Microsoft last week significantly lifted their investment in the OpenAI organisation. Why does this matter? There is reasonable speculation that they plan to integrate generative AI functionality into the Office suite of software by the end of the year. For those institutions still straddling the fence between block and control in terms of these tools, that would make it virtually inescapable. Are we ready for SuperClippy?
Some of the most passionate arguments that you will hear against these technologies come from creatives, particularly visual artists who raise valid questions about the extent to which AI generated works informed by libraries of billions of images may infringe copyright or at least moral rights. Some have been able to point to elements of images that directly match their own – and sometimes even find their signatures. The rise of these tools has already had a chilling effect on work for copywriters and artists. I sympathise greatly but suspect that the genie is truly out of the bottle. This piece explores the current legal landscape in the US.
As the potential impact of generative AI tools like ChatGPT has become clearer, some people’s hopes have turned to detection tools. This space seems to be the second new goldrush in education, as I see wild claims and huge promises by the day. This piece from David Gewirtz tests three leading detection tools, with fairly unconvincing results.
First published in Campus Morning Mail 24th Jan, 2023
Well, it appears that 2023 is to be the year of Generative AI in education. In much the same way that we were swamped with newfound experts on epidemiology and public health policy at the start of the pandemic, it is hard to scroll through online content without finding a dozen hot takes on ChatGPT. As someone whose job it is to make sense of this brave new world and contribute to an institutional response, I must admit that it is hard to keep up and separate the wheat from the chaff. Hot tip though – if your piece is partly written by an AI app, it’s more likely chaff. Just stop it, please. We get it.
This collection of resources from Anna Mills, a writing teacher at the College of Marin in California, has been one of the best I have found so far. I don’t necessarily agree with all her ideas but her tweets have consistently been some of the most thoughtful and practical that I have come across in the maelstrom of discourse. She is also hosting a webinar on the topic on Sat 28th Jan at 9am (AEDT).
If you prefer something at a more sociable time, I have worked with Prof. George Siemens (UniSA) to organise a 1 hr panel discussion with a focus on practical next steps for educators, leaders and edvisors (learning designers, education technologists, academic developers etc). On the panel we have:
Prof. George Siemens – Director: C3L UniSA Education Futures
Assoc. Prof. Trish McCluskey – Director Digital Learning Deakin University
Aneesha Bakharia – Manager Learning Analytics and Learning Technologies University of Queensland
Anna Mills – Writing teacher at College of Marin
Colin Simpson – Education Technologist Monash University, Convenor TELedvisors Network
And now for something completely different. This article from Robin Kay of Ontario Tech University explores what I feel to be an underexamined aspect of education: the impact of emotions on learning, specifically learning with technology. Kay surveyed 220 pre-service teachers, gathering data about their emotional responses to learning strategies tied to learning technologies. Approaches based on social interaction correlated with anxiety, while experimental and authentic strategies were most strongly associated with happiness.
Fair warning – this is again very much just a post for me and is more about how I store my notes to search later than publicly explaining this highly regarded book about qual data analysis.
On first glance at this chapter, I thought it was going to be one that I could skim very quickly before getting on to the meatier next chapters that outline the many many different approaches that one can take to coding qualitative data. A big chunk seemed to be centred around analysing images, which is not remotely on my radar.
Once again, I was wrong. In a handful of pages about writing analytic memos, the other shoe in this entire process seems to have dropped. We have the process of coding the text and here we have a range of approaches that I can take to making sense of it – the analysis – as part of working towards some conclusions.
More than anything in my research, I have struggled with my theoretical lens and how this aligns with the questions that I am asking and the data that I am collecting. (Clearly it has helped shape this process in some ways but I have never been overly comfortable about limiting my exploration to the boundaries of the world that the theoreticians have laid out).
Saldaña seems to lean into the Grounded Theory approach to things – essentially having a strong methodological approach to interrogating the data that leads you to your own theories. Given that I am mixing methods, with a detailed quant survey and my qual interviews, I’m not sure how this might work for me but I think it might be something that I need to explore further.
The other breakthrough (or part breakthrough) in thinking came from something that turns out not to be rooted in specific theory at all but which Saldaña and colleagues have concocted from a broader understanding of ideas in this sociological space. (More of this shortly)
Anyway so these are my notes as I read this chapter – a big chunk includes actions for myself to take.
“Analytic memo writing documents reflections on your coding processes and code choices; how the process of enquiry is taking shape and the emergent patterns, categories and subcategories, themes and concepts in your data, all leading toward theory” (p. 44)
Analytic memos are comparable to researcher journal entries or blogs – brain dump about participants, phenomena and process under investigation
Should be concurrent with coding – it is about sense making so it doesn’t need to be written formally.
Add dates to them to track the evolution of my thinking
Write a new memo anytime something significant comes to mind about the coding or analysis of data. Create a new section in Scrivener for this
There are a number (12) of different kinds of memos – I think I will create subfolders for each of these and set up an A-L numbering system (e.g. E5) along with dates.
A) Reflect on and write about how you personally relate to the participant or phenomenon
B) Reflect on and write about your code choices and their operational definitions
[Did I put my survey 1 analyses into Scrivener?]
C) Reflect on and write about participants’ routines, rituals, rules, roles and relationships
It is hard to overstate how big an impact this approach had on me as I read it. My broad theoretical framework to date has been Social Practice Theory, which also touches on a lot of the things that people do. I have always felt that it didn’t adequately weigh the impact of those doings on relationships, so seeing this got me excited that there was a better-suited theoretical lens to explore. Saldaña mentions Social Action near this list and references a bigger discussion of the 5 Rs in another book.
I actually emailed Saldaña and got a response back in under 2 hours. (I do love the scholarly community at times) – as it turns out, there isn’t a formal theory that the 5 Rs come from – it is more just an interpretation of a range of concepts in the sociological field – but I might look for a way to draw on this regardless as it feels so much more like what I need than anything else that I have seen so far in the theory space.
“Routines are those repetitive and sometimes mundane matters humans do for the business of daily working and living. Rituals are significant actions and events interpreted to hold special meaning for the participant. Rules, broadly, refer to socialised behaviour and the parameters of conduct that empower or restrict human action. Roles (parent, favored son, victim etc) refer to the personas we assume or attribute to others and the characteristic qualities that compose one’s identity and perceived status. Relationships refer to the forms of reactions and interactions of people in their roles as they undertake their routines and rituals through frames of rules” (p. 47)
D) Reflect on and write about emergent patterns, categories, themes, concepts and assertions (as part of understanding and tracking my thinking in developing them) – This includes comparison of things that I am seeing between interviewees – more overview thinking
E) Reflect on and write about the possible networks and processes (links, connections, overlaps, flows) among the codes, patterns, categories, themes, concepts and assertions – Interpreting how individual components of the study weave together – Consider making Scapple maps of relationships between concepts – hierarchies, chronological flows, influences and affects
F) Reflect on and write about an emergent or existing theory – How my observations might be generalised or applied to other populations
Set up subfolders in Scrivener in the Analytic Memos folder to capture all of these varieties. Write an initial memo in each before the analysis to capture my assumptions and understanding to date.
G) Reflect on and write about any problems with the study – can raise provocative questions for further research or just help untangle things
H) Reflect on and write about any personal or ethical dilemmas with the study – Am I seeing things that counter my values or belief systems?
I) Reflect on and write about future directions for the study – Is this where I explore the things that I am currently trying out in TELedvisors to solve problems that I believe exist in this space? – What are the missing elements that I am uncovering or the additional data that is needed?
J) Reflect on and write about the analytic memos generated thus far (meta-memos) – General sense checking on the progress these memos represent – Are these memos like unpublished blog posts? Can/should I transfer my previous blog posts or reflective notes from prior analysis here? (Should I separate survey and interview analysis? Where best to keep the survey 1 reflections?)
K) Reflect on and write about tentative answers to your study’s research questions – To keep on target and on track
L) Reflect on and write about the final report for the study – In some ways this can become early drafts of chunks for the thesis – Developing thinking about the thesis structure
Analytical memos can serve as important additional sources of codes and categories
“Open” coding, which I have been talking about with Jess, is the approach now referred to as “Initial” coding in this book. This can be part of Grounded Theory coding in the first pass, with maybe Axial coding in the second?
Saldaña says 15 interviews was sufficient for him (p.55)
First-stage coding methods – In Vivo | Process | Initial – split the data into individually coded segments.
Second-stage/cycle methods – Focused | Axial | Theoretical – compare and reorganise codes into categories, prioritise these categories into "axis" categories that other categories orbit around, and then synthesise them to formulate a central or core category that becomes the foundation for explication of a grounded theory.
Big question for me to consider – Is Grounded Theory an approach worth pursuing, given that I’m not entirely sold on my existing lenses? The fact that I’m doing mixed methods – with quant surveys – seems to count against this, but maybe I can justify some kind of Frankensteining of the whole thing??
“It is memo-writing that is the engine of Grounded Theory, not coding” (p. 55, quote via Gordon-Finlayson)
“Categories also have properties and dimensions – variable qualities that display the range of distribution within similarly coded data” (p. 55)
If I do go down the GT path, this looks like a handy map (p. 56)
Part of me wishes that I had read this book 6+ years ago when it was initially recommended to me by Inger Mewburn but to be honest, I don’t think I was ready for it then. Research certainly seems a little daunting as a result of reading it now though.