In his blog post "Our schools aren't broken, they're hard," Cormier identified one of the initial barriers to overcome: teachers have been promised a "fix-all" technology answer for more than 20 years to no avail, so any new technology implementation would be met with skepticism about its chances of success. Dave needed to rebuild trust in the tech with quick wins that were relevant and immediately implementable. In other words, he needed to understand how viewing tech through the lens of a teacher with a long history of high hopes and low results would colour acceptance of, and willingness to use, the new technology. This is not an issue unique to the educational community; many end users outside of academics (organizations, governments, and students, for example) have also watched a new piece of tech fail to live up to its hype, and as the saying goes, "once bitten, twice shy." Past disappointments may colour our view of new experiences. When planning a large technology change with a group that has experienced failed rollouts in the past, it helps to use change models that anticipate, acknowledge, and address users' concerns, such as Anderson and Ackerman Anderson's focus on content, people, and process (Al-Haddad & Kotnour, 2015), which recognizes that while "technology is key to drive change" (p. 244), equally important to success are the "humans involved in the change, and their behavior when implementing change" (p. 244).
References
Al-Haddad, S., & Kotnour, T. (2015). Integrating the organizational change literature: A model for successful change. Journal of Organizational Change Management, 28(2), 234-262.
Cormier, D. (2017, December 8). Our schools aren't broken, they're hard. Dave's Educational Blog. davecormier.com
Hi Jessica, I chose your post for my response because I think this is a really common issue that many in the Educational Technology space face. Ed Tech companies often promise that their solutions will shift the world, solve all staffing problems, and revolutionize learning, but I find the results are often much slower to materialize. All we are trying to do is optimize learning and streamline our processes as learning experts. Why does it have to be so hard?
I think part of the reason is that it's so hard to measure shifts in learning effectiveness in an objective way, which is exactly what learning analytics promises: a field that aims to help us measure "data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs" (Siemens & Long, 2011, p. 34, as cited in Prinsloo & Slade, 2014, p. 310). As you mention above, learners are also "once bitten, twice shy" in their sentiment that these learning technologies will make a difference, but I think some do; it's just hard to measure. For example, Sclater et al. (2016) argue that learning data used for analytics can include how often students interact with a virtual learning environment, how often they complete assessments (not just how they score on objective or subjective questions), or how often they post on discussion boards, all potential indicators of academic success (p. 15). I wonder how often a newly-implemented technology might have a confusing interface, causing students to log in multiple times in order to find what they're looking for? Or how often students use multiple devices to access learning software, requiring them to log out of one device before they are permitted to log in with the new one (I'll admit, this happens to me in Moodle at least a couple of times a week)? When I was reading the Prinsloo and Slade article I also wondered how often student demographic information (such as age, sex, gender, or socio-economic status) might be misused due to conscious or unconscious bias when analyzing learning outcomes, and how often instructional strategies, such as relying too heavily on synchronous online sessions to the exclusion of students who may not be available at that time of day, might affect our learning data.
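As a small aside, the kind of tallying Sclater et al. describe can be sketched in a few lines. This is a minimal illustration only; the event names and log format below are purely hypothetical, not a real VLE schema:

```python
# Hypothetical sketch: tallying the engagement indicators Sclater et al.
# mention (VLE logins, assessment completions, discussion posts) per student
# from a raw event log. Field names and event types are invented for the example.
from collections import Counter

# Each event: (student_id, event_type) -- an assumed log format.
events = [
    ("s1", "login"), ("s1", "login"), ("s1", "assessment_completed"),
    ("s1", "forum_post"),
    ("s2", "login"), ("s2", "login"), ("s2", "login"), ("s2", "login"),
]

def engagement_counts(events):
    """Count each event type per student. Note the caveat from the discussion
    above: repeated logins may reflect a confusing interface or device limits,
    not engagement, so raw counts always need context."""
    counts = {}
    for student, event_type in events:
        counts.setdefault(student, Counter())[event_type] += 1
    return counts

counts = engagement_counts(events)
```

In this toy data, "s2" logs in twice as often as "s1" but completes no assessments and posts nothing, so ranking students by raw login counts alone would misread engagement, which is the point about context above.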
I believe the conclusion I'm coming to is that, like with many things, it's important to view learning analytics in the context of the specific technology or learning problem one is attempting to solve, to ask questions about whom the data might be excluding, and to ask whom the conclusions drawn from the data benefit. Based on your blog post above, I would assume you agree we should be hesitant to accept the analytical conclusions of Ed Tech companies at face value.
– Andrea Evans Smith
References
Prinsloo, P., & Slade, S. (2014). Educational triage in open distance learning: Walking a moral tightrope. The International Review of Research in Open and Distributed Learning, 15(4). https://doi.org/10.19173/irrodl.v15i4.1881
Sclater, N., Peasgood, A., & Mullen, J. (2016). Learning analytics in higher education: A review of UK and international practice. https://www.voced.edu.au/content/ngv%3A83377
Thanks for the reply, Andrea. I agree with your statement that we should be hesitant to accept face-value data without context or an understanding of the full picture. KelloggInsight's article speaks to the issue of gathering data without understanding its reason or intended purpose, and how this poses challenges when filtering good data from bad (2015). I personally deal with this often when Admins are charged with sourcing data from the organizational field without understanding what the data will be used for or even what the numbers mean. This has resulted in many frustrating and unnecessary follow-up emails when the Admins are just chasing numbers and can't recognize that they already have the info they need from a different source, wasting everyone's time!
I think this translates to data analytics in DLEs as well: if users do not understand the reason behind data gathering, they may be reluctant to provide information, or may provide "bad info" by attempting to supply what they feel is wanted rather than what is true. Lots to consider. Thanks for commenting!
Reference
KelloggInsight. (2015, May 1). A leader's guide to data analysis: A working knowledge of data science can help you lead with confidence [Blog post]. https://insight.kellogg.northwestern.edu/article/a-leaders-guide-to-data-analytics/