Missed Janet Clarey’s interactive talk this Wednesday but caught up with the recording. I think it was a great session on many counts. Janet brings her deep experience in corporate learning and development research at Brandon Hall into the sessions she leads. Thanks, Janet!
The main questions that she addressed were:
- What are Web 1.0/2.0 learning models/trends? Which theories are they informed by? What data do they collect and manage?
- How can innovations like Augmented Reality and Foursquare be used to support learning?
- Can informal learning really work in the face of regulatory requirements or mission critical situations?
- Take a social learning and networking enabled LMS like SABA. How is it really different from what we are doing in the open MOOCs?
- Can there be a hybrid model spanning eLearning 1.0 and 2.0?
Very interesting questions, and even more interesting responses from participants. Let’s back up a bit. Responding to a July 2010 discussion around Critical Literacies and the eXtended Web, I looked at what my starting points for a PLE would be and why we need to look closely at what the PLE architecture should be based upon. More recently, as George mentioned, there is an extremely interesting discussion going on in the Learning Analytics Google Group – I do recommend that you go through the bibliography and Rebecca’s summary of the discussions.
As background, there is also an interesting discussion I had with Rob Wilkins and Janet Clarey on LMSs, assessments and RoI early last year, after Janet’s set of great interviews with leading LMS providers. There I argue that LMSs can’t be social as an add-on (a keep-up-with-the-trends move, or doing eLearning 2.0 the eLearning 1.0 way) and that current LMS metrics are woefully inadequate to provide any strong indicator of learning or performance.
Back to Janet’s talk and the first question. Her slide on eLearning 1.0 emphasizes technology as a support for most of the eLearning dimensions that are in use today – courses (self-paced and virtual instructor-led), certification, LMS/LCMS, authoring tools etc. Participants responded to her “Informed by what theory?” question by evoking concepts and theories such as cognitivism, constructivism and constructionism, and by characterizing eLearning 1.0 as “sage-on-stage”, body of knowledge etc.
I have made this point before, but it is hard for me to think of LMSs in the 1.0 era as anything but tools for learning automation, which was the pressing need then as organizations started adopting technology to manage learning. For this reason, it is also a little superficial to ask what theories informed eLearning 1.0 supportive technology. The theories influenced the way content was designed and instruction delivered, rather than how the LMS or virtual classroom was built. I would instead put LMSs such as Moodle and LAMS, and platforms such as Mzinga’s OmniSocial, in the eLearning 2.0 category as supportive tools informed by theory. Janet’s subsequent question – what data are we collecting, reporting and analyzing in the 1.0 world? – evoked the standard responses: time spent, scores etc.
eLearning 2.0. I had problems with putting disruptive technology at the core of eLearning 2.0. While that may be an important factor, it can’t be the only thing at the core. I am not sure that blended learning, mobile learning, P2P, 3D-immersive environments and “search learning” (whatever that is) would fall under eLearning 2.0, which she also characterizes as “Self-serve. Social. Mobile” – at least not the way we have been talking about it.
What theories inform eLearning 2.0? To my utter surprise, nobody put Connectivism up there (connectionist was the closest)! I think the data aspect, where I did get to see artifacts and relationships, would have benefited from some discussion around intelligent data (George came to it later in the session).
Next were a few slides on network maps, augmented reality and location-aware apps. I thought it was a good idea to provoke thought about how these tools could be used as part of the learning process. There are perhaps hundreds of ways to do that, and conjoining these with existing theories and design approaches is not very difficult. I believe Linked Data will play a massive role in terms of distributed connective knowledge (but that is another story), as will serious gaming and simulation combined with these new technologies. Obviously, data acquisition and capture will also be enhanced (and there are privacy and ethical concerns around this).
George referred to the Semantic Web and Web 3.0. It is interesting to note the title of a post that Stephen wrote about three years back, “Why the semantic web will fail”. As for what theories inform the eXtended Web, participant responses included marketing, monetization models, authority, self-watching vs crowdsourcing, surveillance (someone suggested sousveillance) and personal learning. Steve LeBlanc asked for a list of differentiating characteristics; I would respond that these are the subjects of the PLENK2010 discussions – PLEs, MOOCs, Connectivism, intelligent data, the semantic web, Linked Data, and the extension of the Internet into an Internet of Things. Again, I think Connectivism would form an important influencing theory of the eXtended Web.
For me there are two important facets of the data aspect of the eXtended Web: data trails (George) and sliced PLEs, and new forms of collaboration leading to new learning analytics (like Connection Holes) that can replace the traditional 1.0 methods and tools.
Can informal learning work in mission-critical situations or in situations that demand proof of regulatory compliance? For the former, yes, absolutely. Informal and connective learning models really succeed for learning and performance because they recognize that knowledge (and expertise) is distributed and that problem solving is essentially a process of connection-making.
For both, there is a larger question – what are we measuring? Regulatory compliance – organizations proving that employees spent time on and obtained passing scores for key topics such as sexual harassment at the workplace – is built at cross purposes with the aim of the regulations (say, that employees reflect on and practice sensitivity to and abstinence from sexual harassment at the workplace, and that companies need only report deviations, just as you have to let a software vendor know if you are not license-compliant). Maybe the parochial measures prescribed by the legislation need to change, rather than our asserting that traditional formal eLearning provides an accurate measure and meets the objectives of the legislation.
The argument is carefully articulated by Stephen in his post Having Reasons, where he states:
The whole concept of ‘having reasons’ is probably the deepest challenge there is for connectivism, or for any theory of learning. We don’t want people to simply to react instinctively to events, we want them to react on a reasonable (and hopefully rational) basis. At the same time, we are hoping to develop a degree of expertise so natural and effortless that it seems intuitive.
I think the question “will the ability to repair a nuclear reactor emerge from the water cooler” – although someone did answer it from one perspective – is a horrifying and irresponsible one, intended to discredit the concept of informal learning. What if I flipped the question and asked “will the ability to repair a nuclear reactor come from learning online at your own pace”? That discredits WBTs as a possible solution altogether. It is not a new question, and I think Jay Cross has addressed it somewhere too. It trivializes both the problem and the solution.
Janet also showed a learner home page in SABA and immediately compared the “technology” to the “technology” in the MOOC, asking how this is really different. I think that is where the disconnect is – you cannot put technology and the affordances of tools at the core, disruptive or not. It is also the reason I continuously state that current LMSs are building social learning add-ons, not rethinking from the ground up. Theory will inform not only how the technology works but also how learning happens. I know Stephen would have a mouthful to say on this as well (pity he was not there).
On the question of whether the two generations can give rise to a hybrid, there are mixed opinions. Connectivism is a very young theory. From the outset, the challenge has been to put an implementation (practice) face on the theory. The pressure to generate a pedagogy, an instructional design approach or practical guidance, among other pressures, may prompt us to jump prematurely to a hybridization of the concepts.
But in a sense, we need to let this discussion evolve – the debate my earlier post on constructivist and connectivist PLEs generated shows a healthy state on the road to resolving these practice challenges. As in the response on sense-making, among other comments on the PLE post (which I still have to respond to), Stephen is perhaps correct in taking a pure, unadulterated stance on what connectivism and connective knowledge are and how they can change what we believe and practice in learning.
I struggle with it all the time, but I think a pure stance, even with occasional intolerance, is much needed for the theory to evolve to a state where it can widely inform practice.