Posts Tagged ‘plenk2010’


Ben Clark from Project TinCan reached out and responded to my last post on SCORM. They have an amazing platform – not only have they been chartered with researching what the next generation of eLearning runtime communication should look like, but they have also employed a cool tool called UserVoice to crowdsource ideas and opinions. TinCan is managed by Rustici Software under a Broad Agency Announcement (BAA) from ADL.

ADL’s Future Learning Experience Project, launching soon, is focused on exploring an Experience API (with TinCan and Letsi) and a harmonized CMI data model (with AICC). Letsi seems to be a really interesting initiative too – they started looking at SCORM 2.0 in 2008 and have a wiki documenting the open effort of a large number of experts worldwide.
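To make the Experience API idea concrete: the TinCan effort has been exploring activity statements with an actor–verb–object shape, far looser than SCORM’s fixed cmi vocabulary. A minimal sketch in Python – the shape follows the public TinCan discussions, but every identifier and value here is an invented placeholder, not part of any published spec:

```python
import json

# A minimal actor-verb-object activity statement, the shape the
# Experience API work has been exploring as a successor to SCORM's
# cmi data model. All URIs and values below are illustrative only.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {"id": "http://example.com/verbs/commented"},
    "object": {"id": "http://example.com/blog/scorm-post"},
    "result": {"response": "Responded to the post on SCORM futures."},
}

# Serialized for transport; a runtime would POST this to a statement store.
payload = json.dumps(statement)
print(payload)
```

The appeal, compared with cmi.* key-value pairs, is that any experience – a blog comment, a bookmark, a conversation – can be recorded in the same grammar.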

This is exciting work which has a direct linkage (or should have) with Personal Learning Environments that we have been discussing in PLENK. The reason why PLENK-ers should look at it is that it is trying to establish some general frameworks for formal AND informal, linear and non-linear, distributed, web X.0, personal and community, and mobile learning. Our current work in this area can easily influence evolution into the future landscape. I personally need to really deep-dive into the amazing mass of materials and ideas, to get a proper appreciation of the good work these folks are doing. Kudos!


By far the most definitive week of discussions around PLENK for me, this week’s Friday discussion is worth multiple rounds of further investigation and discussion. I came in a trifle late and it was difficult to catch up without a starting context, so I caught up with the recording later. Here are my notes and deductions – my version of the conversation between George and Stephen. As always, please feel free to add to or correct me in case of anything I missed or misinterpreted.

Stephen Downes

  • There is a Tool selection or Dashboard approach to classifying technology experience
  • There is no great application out there that allows me to read and write the way gRSSHopper does. This is the workflow approach. We need to model workflow and provide end-to-end functionality, and that is the most daunting piece.
  • Should we be looking at a theory of everything (like an atlas in geography, or a set theory of everything)? Technology will evolve over time, but the core patterns of use may not (in fact, they may).
  • Is there a way to hide the modalities, so that we focus on the core? What are these core ideas? Personal autonomy, distributed knowledge and social learning. There are frameworks like the 21st century skills frameworks, but these are very widely fragmented. I would add pattern recognition as a fundamental skill – is the optimal tool one that would be based on network theory and pattern recognition?
  • Machine analysis can give us a syntax. The human side would give us semantics.
  • Can we figure out, in technological terms, how humans do it – derive meaning? From the neurological sense, it is a very organic process that evolves over time, not intentional or deliberate, each new experience creating more understanding.
  • Is the tool of everything going to be a pattern recognition tool?

George Siemens

  • First-time adoption of tools is difficult, not because of the tools, but because of concepts. This is where companies like MS or Facebook helped, by aggregating functionality and establishing common ways of completing standard tasks.
  • Tools are available, but the level of integration is too low at this point. With connective specialization, it is each to her own preference. At the point of adoption, this also adds to the confusion.
  • Do we need a tool of everything or do we need a way to build capacity?
  • The theory of everything: maybe with a combination of critical literacies and attributes or ideas of the disciplines?
  • The hiding of modalities is important.
  • There are two dimensions to pattern recognition – technological and human. The technological example would be reading through a mass of data vs. navigating a structured analysis of the mass of data. On the human side, Learning Analytics tools provide valuable patterns of use. That is what computing can do and visualization is going to be very important.
  • That does not mean that technology will be able to model personal or network use of the resources, but technology can help.
  • We need to have a balance between what a computer does well and a human does well (form vs. meaning).
  • Experts and novices think differently – experts think in patterns and novices think sequentially, or, as Cris2B put it, plan ahead vs. plan backwards. Conceptually, once some patterns are built up, some context, we are able to recognize more complex patterns.
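To illustrate the machine side of that split: even a toy computation over a connection graph yields syntax – counts and rankings – while deciding what a well-connected node *means* is left to humans and their networks. A sketch with invented names:

```python
# A toy connection graph: who links to whom in a small learning network.
# The names and links are invented for illustration.
connections = {
    "alice": ["bob", "carol", "dave"],
    "bob": ["alice", "carol"],
    "carol": ["alice", "bob", "dave"],
    "dave": ["alice", "carol"],
}

# Degree centrality: the machine supplies syntax (counts, rankings);
# interpreting why a node is central is the human, semantic side.
degree = {node: len(peers) for node, peers in connections.items()}
ranked = sorted(degree, key=degree.get, reverse=True)
print(degree, ranked)
```

A real learning analytics tool would visualize such structure over thousands of nodes; the division of labour – form from the computer, meaning from the people – stays the same.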

My 2 cents.

I think that we must start with presentation and analysis (as best as the computer can visualize in a simple way) and let humans and our networks derive the meaning. This is what I hope an NBT will achieve.

Maybe at some point, the insight into how humans use that information for semantics, through reflection and practice, will start becoming progressively templatized as we understand or build tools and processes that can model how humans function – how we evolve from novices to experts in an area. I call this Native Collaboration and see it permeating every function in learning.

The discussions are fast evolving to a stage where some formal models of Native Collaboration (which attempts to model, functionally and technologically, how we learn) and NBT (my term for Network Based Training – an evolution from Web Based Training, or WBT) will emerge, where the NBT environment encapsulates the modalities in a fairly standardized manner while allowing personal autonomy, and includes specific connectivist techniques for Native Collaboration. This is really exciting!


PLE/N Tools

Really nice collection of links for this week’s #PLENK2010 discussions. I especially liked Wilson’s Patterns of personal learning environments. Wilson looks at patterns of use of, and activity in, personal learning tools and learning networks, revising a previous approach which was very functional and tool-specific.

One of the ongoing challenges I have is with the constant comparison between the LMS and the PLE, which, I happen to think, is an apples-to-oranges comparison. They serve different needs and are located differently on the spectrum between self-learning and managed learning (if there is such a phrase). The MOOC and the LMS are comparable, as are NBTs (which I define as Network Based Training, the natural networked-learning successors to WBTs) and PLEs.

Let us picture this. The LMS is used to launch a WBT course. The course pops up in a player, which is really a navigation shell that acts as a data conduit between the WBT and the LMS. Suppose the LMS is aware of learning networks and personal learning tools (with blog, wiki, Flickr, connection hub-bing, discourse monitoring etc. affordances being provided by whatever mechanism – web services, XCRI…) and the WBT is just base reference material, not unlike the Daily in this MOOC.

The player could then be programmed to act as a conduit between the WBT and the network or personal learning tool (people, resources, context, conversation, bookmarking service). Sort of a SCORM for networked learning environments.
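In SCORM terms, the player-to-LMS conversation is already a small API (Initialize, GetValue, SetValue, Commit, Terminate) over the cmi data model; a networked-learning conduit could keep that shape and simply grow extra channels. A conceptual Python sketch – the cmi key is a real SCORM 2004 element, but everything on the network side is my invention, not any existing standard:

```python
class NetworkAwarePlayer:
    """Conceptual conduit between a piece of content (the NBT) and
    both an LMS-style data model and network/personal learning tools."""

    def __init__(self):
        self.cmi = {}      # SCORM-style tracking data bound for the LMS
        self.network = []  # outbound events for blogs, bookmarks, feeds

    def set_value(self, key, value):
        # The traditional channel: cmi.* keys as in SCORM 2004.
        self.cmi[key] = value

    def share(self, event):
        # The networked extension: the same conduit relays activity to
        # personal learning tools (hypothetical, not part of SCORM).
        self.network.append(event)

player = NetworkAwarePlayer()
player.set_value("cmi.completion_status", "completed")
player.share({"bookmarked": "http://example.com/resource"})
print(player.cmi, player.network)
```

The point of the sketch is only that the conduit role generalizes: the same shell that today reports scores could tomorrow relay conversation and connections.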

What would you call the WBT then? An NBT.

Would the PLE look similar to an NBT? Yes, it would resemble a slice of the PLE – a workspace which we organize around a theme that interests us. Similarly, the NBT could be conceived of as a combination of slices of many different PLEs – in fact, as many as there are learners enrolled in the NBT.

But the NBT would necessarily be a more constrained, parameterized environment, designed or seeded by a network weaver, an atelier – the new avatar of the teacher, and adapted and grown by the learners, present and past. The PLE would grow unfettered, the whole being greater than the sum of individual slices.

Most of the discussion, even in Wilson’s paper, focuses around the tools in the end. What can tools do to present the solution to a pattern? In fact, almost every solution is expressed in technological terms (notice how many times the word “tool” appears in the first line of the solution).

It is almost as if technology is the master problem solver for every pattern of learning, but that may just be me.

I would rather focus on Critical Literacies. On having reasons. Just as I would not count an NBT operating in an LMS environment as a true NBT – one truly architected as a networked-learning-aware solution from the ground up, rather than pasted onto a WBT as a quick fix.

And that is perhaps why I would choose to take a radical stand – PLE/N tools do not yet exist. I would like to take you back to how PLEs were defined in Week 1:

A Personal Learning Environment is more of a concept than a particular toolset – it doesn’t require the use of any particular tool and is defined by general concepts like: distributed, personal, open, learner autonomy.

and for PLNs:

A PLN is a reflection of social and information networks (and analysis methods).

We are confusing our current lists of PLE/N tools with the concept or the method, like trying to measure the length of a moving snake by a tape measure or measuring the volume of a gust of air with a sudden clenching of our fist.

By far the most important attribute of the toolset, if you can call it that, for a PLE/N would be its complete invisibility. It would be implicit for learners in the way it has been designed. It is then that we will be able to project our personal aura on it and make it personal, as open as we are, as connectedly aware as we want to be (or can be) and as autonomous as we will allow it to be.

And that will also take a fundamental rearchitecture of the way we conceive of learning resources, away from resources as objects or networks, to living and dynamic forms that reflect our social and information networks.

More of a hard left than a gentle meandering this one, would you say?


New Literacies

I listened to Will Richardson’s session on PLENK2010 this past Wednesday. He brought up NCTE‘s definition of critical literacies:

  • Develop proficiency with the tools of technology
  • Build relationships with others to pose and solve problems collaboratively and cross-culturally
  • Design and share information for global communities to meet a variety of purposes
  • Manage, analyze and synthesize multiple streams of simultaneous information
  • Create, critique, analyze, and evaluate multi-media texts
  • Attend to the ethical responsibilities required by these complex environments

He said that tools become the central point of the conversation without worrying about the context. And in fact, he did not think the high school graduates he knew would make the grade if these were the certifying literacies.

Reminds me of a vision statement that I read recently. This is from the NAE (Committee on the Engineer of 2020, Committee on Engineering Education, National Academy of Engineering). Describing the Engineer of 2020, the enlightened authors write:

What attributes will the engineer of 2020 have? He or she will aspire to have the ingenuity of Lillian Gilbreth, the problem-solving capabilities of Gordon Moore, the scientific insight of Albert Einstein, the creativity of Pablo Picasso, the determination of the Wright brothers, the leadership abilities of Bill Gates, the conscience of Eleanor Roosevelt, the vision of Martin Luther King, and the curiosity and wonder of our grandchildren.

Yeah. Right.

Jenny started the discussion on ethical responsibilities. I think it is important to evaluate ethical responsibilities in the traditional context as well, not just for issues that tools such as Facebook are creating in the context of the Internet. The traditional system throws up as many horrific examples of violations of ethical responsibilities.


The debate at Oxford Union this Wednesday on informal learning was very interesting, more so because some wonderful people on Twitter were actually relaying it blow-by-blow and also because I was testing my multi-tasking skills by juggling between the Twitter conversation and the PLENK session!

The motion was: The House believes that technology based informal learning is more style than substance.

Dr. Allison Rossett, Nancy Lewis, Mark Doughty argued for:

Informal learning is appealing and exciting but it has no authoritative voices, no assessments or guidance, and therefore no substance. The motion isn’t about us and how we like to learn. It’s about our need to know that the organisations and people we trust know what they are doing. Informal learning doesn’t provide that. It has no thermostat or control. We all love technology, but on the scale of substance and style, it’s still all about style. If you care about organisations, be they of pilots, doctors or cooks, if you care about performance then we urge you, support the motion.

Prof. Dutton, Jay Cross and David Wilson argued against:

Informal learning is not trivial; it is in every corner of institutions. People in the room are using technology to check facts as we speak. Technology-based informal learning enhances information and reach. It makes experts more accountable and raises the bar. And for parts of the developing world it is the only learning available. Therefore, we urge you to vote against the motion.

The main arguments were:

For the motion (more style than substance):

  • gets viewed by managers as a cheaper alternative, but offers learning managers no measure of formal effectiveness
  • need assurance that our doctors are medical experts and our pilots can fly a plane
  • formal gets things done
  • not well-researched
  • no north star to guide, no common understanding of what it is
  • does not work when failure is not an option (mission critical)

Against the motion (substance and style):

  • Internet has become a first recourse for information
  • institutions need to learn to harness the network
  • (Cross) co-exists with formal learning on a continuum, only visible separation inside school learning
  • Not the tools but the collaborative activities that will sustain and evolve
  • it is part of work, we do not need to separate it

The side against won comprehensively, and there is an online vote if you want to add your weight. I think there are some important pieces to this debate.

One, learning does occur informally, whether with the use of technology or without it.

Two, by definition it is informal, loosely (if at all) structured, not pre-meditated or goal driven (let me go to the water cooler to get agreement on the next strategic shift in business). It is a space where data is not as important as the intelligence in the conversation, as the alignment between connections.

It is a space where in principle decisions may occur or new ideas may emerge or new connections may be made. It is a space that can trigger a lot of formal work. And since it is informal it may not always be serious.

Three, the separate categories of formal and informal make sense only when one is trying to push out the other as an equally or more effective way of learning. To make that claim, informal learning will have to defend itself against vague arguments of mission-criticality, dissipated theorizing and non-existent assessment methods.

I say vague arguments because saying a doctor trained by informal methods (if any are identified) will fail to become a medical expert (or succeed) is an improperly framed, populist argument.

It assumes a distinct category for formal and informal. It assumes that informal learning is all about informal or non-serious, undirected chatter which depends on serendipitous events to become or be considered meaningful. It assumes, on the other side, that formal learning undisputedly generates medical experts or pilots. That every site of formal learning is serious, directed and purposeful.

It also throws out any chance of even considering that informal learning plays a huge role in the organization or in school learning. In fact, the argument that informal learning does not work when failure is not an option precludes the very idea of allowing mistakes to happen during formal learning (as Sir Ken Robinson argues in his TED Talk, Do Schools Kill Creativity?).

I would vote against the House on this one and also chasten it for selecting the motion the way it stands, more to provoke extreme reactions than to promote constructive debate.


With a little help from Jatinder, a kindred soul in the making of simulators that happen to attract Brandon Hall Awards, I tried to visualize a model of PLEs operating in a connective environment. It started with a reply I made to Janet and Carmen on what I think should be:

…let us contrast the MOOC environment with an LMS. Can we think of this environment as self-configuring instead of being configured by an administrator? How about when a person wants to join a “course”, she gives rights to the MOOC server to “pull” content she wants to share in the context of the course into the course environment…the content stays with her, but instead of (or in addition to) the LMS “pushing” some general stuff, it configures a learning space based on the expertise and contributions of its members?

Like if I join a space or a conversation, I bring not only my personal self but also my blog, my Zotero collection, my Diigo links, my tweets, my network etc., but also decide to bring in a relevant “slice” of these and other influences to the course or research I am taking on. Maybe such environments understand a shared semantic vocabulary for the subject so that they can quickly organize the combined course network without my explicit instructions. Wouldn’t this be a self-organizing, emergent ecology more in line with Connectivism and a way to differentiate against an LMS?
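Mechanically, this “pull” is close to tagged-feed aggregation: the course server reads each member’s feed and keeps only the items she has tagged for the course. A stdlib-only Python sketch over an inline feed snippet – the tag convention and the feed contents are my assumptions:

```python
import xml.etree.ElementTree as ET

# A member's feed, inlined for the sketch; in practice this would be
# fetched from her blog, Diigo, Zotero, etc.
rss = """<rss><channel>
  <item><title>On PLEs</title><category>plenk2010</category></item>
  <item><title>Holiday photos</title><category>personal</category></item>
</channel></rss>"""

def course_slice(feed_xml, course_tag):
    # Keep only the items the member has tagged for this course --
    # the content stays with her; the course pulls a view of it.
    root = ET.fromstring(feed_xml)
    return [
        item.findtext("title")
        for item in root.iter("item")
        if course_tag in [c.text for c in item.findall("category")]
    ]

print(course_slice(rss, "plenk2010"))  # only the course-tagged item
```

Repeat that over every member and the “combined course network” assembles itself from slices, with no administrator configuring content in advance.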

The first visualization I thought of was that of puddles and rain. Simply put, when the rain falls, puddles of water form. Some puddles mix with other puddles, self-organizing, to form streams; some remain quietly content to stay aloof and disconnected. Depending upon how much it rains and what the surfaces are that receive the rainfall, we will see patterns. There may be a point of bifurcation when the entire surface gets covered. When rain stops, and puddles start drying, a pattern of decay forms quite unlike the pattern of growth, which was an emergent, complex pattern to start with.

So replace puddles with PLEs, the surface and environment with the network (a super-PLE?) ecology and the rain with a certain eventedness (a MOOC?) and you have my picture of what goes on in connective learning. Weird idea? I sincerely hope not.

So I thought I would bring about a better visualization with Jatinder’s kind help. Picture this (disclaimer: not to suggest any connection between the names of various people in my network on the visual and the social connotations of the word butterfly – more from the effect of a butterfly flapping its wings…):

(Images courtesy various artists on the web, but in particular from the incredible post here – did you know the Fibonacci sequence appears in the sunflower!)

This could be an environment unlike the above, with cacti and barren deserts instead – a metaphor, perhaps, for rigid institutional environments. The point is that each of the elements will feed on the others in complex ways, uncontrollable, yet with distinct patterns. Of course, Stephen invoked the knowledge-as-a-plant, meant-to-be-grown metaphor when talking about connectionist networks. I am not suggesting that one plant is altogether separate from another and knowledge is silo-ed; they will have dependencies and some common roots. But each plant will have a tapestry of complex patterns to reveal – strands of knowledge – and butterflies will cross-pollinate.

But it is a picture where PLEs are an extension of the self, disembodied but in many ways a natural extension, making us a distributed entity operating as a singularity(?). I like this way of thinking (although the quickly engineered visual may not make the grade). And I think this way of visualizing gives us credible alternatives to the way LMSs are built today.

As always, would love to know what you think!


I missed Janet Clarey’s great interactive talk this Wednesday but caught up with the recording. I think it was a great session on many counts. Janet brings her great experience in corporate learning development research at Brandon Hall into the sessions she leads. Thanks, Janet!

The main questions that she addressed were:

  1. What are Web 1.0/2.0 learning models/trends? Which theories are they informed by? What data do they collect and manage?
  2. How can innovations like Augmented Reality and Foursquare be used to support learning?
  3. Can informal learning really work in the face of regulatory requirements or mission critical situations?
  4. Take a social learning and networking enabled LMS like SABA. How is it really different from what we are doing in the open MOOCs?
  5. Can there be a hybrid model spanning eLearning 1.0 and 2.0?

Very interesting questions and even more interesting responses from participants. Let’s back up a bit. Responding to a July 2010 discussion around Critical Literacies and the eXtended Web, I looked at what my starting points for a PLE would be and why we need to look closely at what the PLE architecture should be based upon. More recently, as George mentioned, there is an extremely interesting discussion going on in the Learning Analytics Google Group – I do recommend that you go through the bibliography and Rebecca’s summary of discussions.

As background, there is also an interesting discussion I had with Rob Wilkins and Janet Clarey on LMSs, assessments and RoI early last year, after Janet’s set of great interviews with leading LMS providers, where I argue that LMSs can’t be social as an add-on (a keep-up-with-the-trends thought, or doing eLearning 2.0 the eLearning 1.0 way) and that current LMS metrics are woefully inadequate to provide any strong indicator of learning or performance.

Back to Janet’s talk and the first question. Her slide on eLearning 1.0 emphasizes technology as a support for most of the eLearning dimensions that are in use today – courses (self-paced and virtual instructor-led), certification, LMS/LCMS, Authoring tools etc. Participants responded to her “Informed by what theory?” question by evoking concepts and theories such as cognitivism, constructivism and constructionism and characterizing eLearning 1.0 as “sage-on-stage”, body of knowledge etc.

I have made this point before, but it is hard for me to think of LMSs in the 1.0 era as anything but tools for learning automation, which was the pressing need then as organizations started adopting technology to manage learning. For this reason, it is also a little superficial to ask what theories informed eLearning 1.0 supportive technology. The theories influenced the way content was designed and instruction delivered, rather than how the LMS or virtual classroom was built. I would instead put LMSs such as Moodle and LAMS, and platforms such as Mzinga’s Omnisocial, in the eLearning 2.0 category as supportive tools informed by theory. Janet’s consequent question – what data are we collecting, reporting and analyzing in the 1.0 world? – evoked the standard responses: time spent, scores etc.

eLearning 2.0. I had problems with putting disruptive technology at the core of eLearning 2.0. While that may be an important factor, it can’t be the only thing at the core. I am not sure that blended learning, mobile learning, P2P, 3D-immersive environments and “search learning” (whatever that is) would fall under eLearning 2.0, which she also characterizes as “Self-serve. Social. Mobile.” – at least not the way we have been talking about it.

What theories inform eLearning 2.0? To my utter surprise, nobody put Connectivism up there (connectionist was the closest)! I think the data aspect, where I did get to see artifacts and relationships, would have benefited from some discussion around intelligent data (George went into it later in the session).

Next were a few slides on network maps, augmented reality and location-aware apps. I thought it was a good idea to provoke thought on how these tools could be used as part of the learning process. There are perhaps hundreds of ways to do that, and to conjoin these with existing theories and design approaches is not very difficult. In my belief, Linked Data will play a massive role in terms of distributed connective knowledge (but that is another story), as will serious gaming and simulation combined with these new technologies. Obviously, data acquisition and capture will also be enhanced (and there are privacy and ethical concerns around this).

George referred to the Semantic Web and Web 3.0. It is interesting to note the title of a post that Stephen wrote about three years back: “Why the semantic web will fail“. On what theories inform the eXtended Web, participant responses included marketing, monetization models, authority, self-watching vs. crowdsourcing, surveillance (someone suggested sousveillance) and personal learning. Steve LeBlanc asked for a list of differentiating characteristics; I would respond that these are the subjects of the PLENK2010 discussions – PLEs, MOOCs, Connectivism, intelligent data, the semantic web, Linked Data, the extension of the Internet into an Internet of Things. Again, I think Connectivism would form an important influencing theory of the eXtended Web.

For me there are two important parts to the data aspect of the eXtended Web: data trails (George) and sliced PLEs; and new forms of collaboration leading to new learning analytics (like Connection Holes) that can replace the traditional 1.0 methods and tools.

Can informal learning work in mission-critical situations or in situations that demand proof of regulatory compliance? For the former, yes, absolutely. Informal and connective models for learning and performance really succeed because they recognize that knowledge (and expertise) is distributed and that problem solving is essentially a process of connection-making.

For both, there is a larger question – what are we measuring? Regulatory compliance – organizations proving that employees spent time on, and obtained passing scores in, key topics such as sexual harassment at the workplace – is built at cross purposes with the aim of the regulations (say, that employees reflect on and practice sensitivity to, and abstinence from, sexual harassment at the workplace, and that companies don’t have to submit proof of deviation, just as you have to let a software vendor know if you are not license-compliant). Maybe the parochial measures prescribed by the legislation need to change, rather than stating that traditional formal eLearning provides an accurate measure and meets the objectives of the legislation.

The argument is carefully articulated by Stephen in his post Having Reasons where he states:

The whole concept of ‘having reasons’ is probably the deepest challenge there is for connectivism, or for any theory of learning. We don’t want people to simply to react instinctively to events, we want them to react on a reasonable (and hopefully rational) basis. At the same time, we are hoping to develop a degree of expertise so natural and effortless that it seems intuitive.

I think the question “will the ability to repair a nuclear reactor emerge from the water cooler” – although someone did answer it from one perspective – is a horrifying and irresponsible one, intended to discredit the concept of informal learning. What if I flipped the question and asked “will the ability to repair a nuclear reactor come from learning online at your own pace” – which discredits WBTs as a possible solution altogether? It is not a new question, and I think Jay Cross has addressed it somewhere too. It trivializes both the problem and the solution.

Janet also showed a learner home page in SABA and immediately compared the “technology” to the “technology” in the MOOC, asking how this is really different. I think that is where the disconnect is – you cannot put technology and the affordances of tools at the core, whether disruptive or not. It is also the reason I continuously state that current LMSs are building social learning add-ons, not rethinking from the ground up. Theory will inform not only how the technology works but also how learning happens. I know Stephen would have a mouthful to say on this as well (pity he was not there).

On the question of whether the two generations can give rise to a hybrid, there are mixed opinions. Connectivism is a very young theory. Even before it started, the challenge was to put an implementation (practice) face to the theory. These pressures – to generate a pedagogy, an instructional design approach or practical guidance, among others – may prompt us to jump to a hybridization of the concepts.

But in a sense, we need to let this discussion evolve – the debate my earlier post around constructivist and connectivist PLEs generated shows us a healthy state on the road to resolving these practice challenges. As in the response on sense-making, among other comments on the PLE post (which I still have to respond to), Stephen is perhaps correct in assuming a pure, unadulterated stance on what connectivism and connective knowledge are and how they can change what we believe and practice in learning.

I struggle with it all the time, but I think a pure stance, with occasional intolerance, is much needed to evolve to a state where it can widely inform practice.


Is the PLE a connectivist construct or a constructivist construct? Or both? Or neither, just influenced by many theories? A statement by Wendy Drexler in her paper prompted this question. I quote:

Principles of connectivism equate to fundamentals of learning in a networked world. The design of the teacher-facilitated, student-created personal learning environment in this study adheres to constructivist and connectivist principles with the goal of developing a networked student who will take more responsibility for his or her learning while navigating an increasingly complex content base. (emphasis added)

It could be worthwhile to consider two interpretations (Wendy uses support from both theories in tandem in her networked-student model to construct and analyze the teaching-learning experience she describes):

  1. PLEs are some combination of constructivist as well as connectivist ideas/principles, or
  2. There exist two unique types of PLEs – constructivist and connectivist.

The PLE and the MOOC are ideas in Connectivism discussions that are represented not only as direct, innovative applications of the connectivist state of the art (theory, process and technology), but that also invite comparisons, as in this week’s discussion, to entrenched industry-wide systems such as LMSs, as cogent alternatives for the education system.

Learning theories have, in the past, spawned sets of practices unique to their strengths. These practices (techniques, processes and technologies) have made it easier for the downstream adoption of theory into the classroom (online or offline) and into the eLearning content development and delivery industry as a whole. Further downstream, they have enabled technology development, research and assessment, leading to the level of analytics on which the current system is based, directly or indirectly.

The MOOC environments, such as those for the PLENK2010 discussion, and the PLE/PLN environments that participants have been contributing, are now as much centerstage in this discussion as the concepts behind connectivism as a theory.

A lot of insight will be generated by researchers in PLENK2010 on preferences, styles and behaviors with MOOCs and PLEs, which should feed into improvements in these environments for the future or perhaps even new innovations. Obviously, a whole lot of work is being done on the technology architecture to ensure that the state of the art is fully utilized to translate connectivist influences to the platform level.

According to Stephen and George, what sets Connectivism apart from Constructivism and other theories is, importantly, that knowledge is distributed (a set of connections formed by actions and experience) and that learning is the constant negotiation of that network: new nodes being added or removed, gaining importance or losing it.

A new node is a new experience, and the learning process dictates that we “dynamically update or rewrite our network of learning and belief”. We do that by continuously adapting, self-organizing and recognizing emergent patterns. Learning becomes a “‘door opening’ process that first permits the capacity to receive knowledge, followed by encoding the knowledge as a node within our personal learning network”.

In that context, the learning process/pedagogy used in MOOCs and PLEs, with its emphasis on network formation, reflection, openness, connectedness and other ideas, reflects the principles of connectivism.

By definition, these are different from the learning processes of other theories such as Constructivism, and therefore, in this sense, it is confusing to term MOOCs and PLEs both constructivist and connectivist.

Let us address the technology aspect. Are there two technological alternatives for PLEs and MOOCs? If for a moment we were to ignore Connectivism as a theory, but recognize the MOOC and the PLE as technological platforms, could they be seen as a logical manifestation of social constructivist practices in the digital age?

If Connectivism did not exist, would we still have moved to MOOCs and PLEs as they are visualized today (maybe under different names)? How would a Social Constructivist design an open course with the same broad characteristics as the MOOC (a large number of people, distributed, no entry qualifications, no credits…), or an open process of guided discovery or problem solving, or a set of tools for personal learning in a community-of-practice environment?

Our current environment in PLENK2010 (or earlier in CCKOx) is built on Moodle (an LMS inspired by constructivism, constructionism, social constructivism and connected & separate motivation; also here is their view on the pedagogy that Moodle supports) and extended with tools such as aggregators (Stephen’s gRSSHopper), Twitter, SL and Elluminate.

If the design of Moodle is an answer to that question, then, given the way we have been using Moodle in MOOCs so far, I believe MOOCs and PLEs would need to be seen, technologically, as equally applicable to both theories, to be used in whatever way each theory predicates in its belief of what the learning process should look like.

Janet Clarey did a host of interesting interviews early last year on how leading LMS providers are looking at incorporating (or have now already incorporated) informal learning and social learning environments as an extension of the standard LMS offerings.

In my understanding, PLEs/PLNs are not comparable to LMSs; rather, it is the MOOC environment that should be generally comparable to LMSs. Comparing PLEs/PLNs to LMSs is an apples-to-oranges comparison.

In MOOCs (read: the MOOC environment), the management part is facilitative of connection forming and collaboration, not dictatorial as in an LMS augmented by social learning. In a MOOC, learning is the “door-opening” process, whereas in an LMS it has rigidly expected outcomes in line with traditional models of training and assessment. In a MOOC, connections are openly negotiated with no need for structure, while an LMS must obey structure and authority.

Likewise, LMSs (or, more generally, Human Capital Management Systems [HCMS]) today have features that allow users to perform many other functions that MOOCs have not addressed: assessment and performance management, talent & succession management etc. These may not be addressed by MOOCs by design, and we may want other downstream solutions there, but we definitely need to think about how the needs that HCMSs respond to, as well as the needs for content management (authoring through to publishing, and the standards therein), are to be addressed.

That said, if the PLE grows to include management features (say additional “environments” for teaching or mentoring or assessing or tracking can be added) in a way that decentralizes the teaching-learning process, it may be worth comparing it with enterprise or institutional LMSs.

My belief was, and is, that thinking the standard LMSs (including, to a lesser extent, Moodle itself) can be extended to include connectivist learning is a contradictory approach. It seems to be responding more to a paranoid “need” to go social, on both sides: customer and LMS vendor.

Which then takes me to the next question: Can we conceive a truly connectivist technological architecture that makes it technologically distinct from an implementation that could lend itself ambiguously to both constructivist as well as connectivist interpretations?

Connectivist systems need to address an important aspect: that of sense-making and wayfinding. These systems should, in some way, allow us to design environments, generate learning analytics and assess performance at the level of the person, while at the same time allowing us to loosely manage, provision and plan the connective learning experience at different levels in the organization.

We would not only have to think of learning but also of connectivist assessment and performance, topics on which we have not made substantial progress (there is an interesting conference coming up in early 2011 on Learning Analytics; please check out the Google Groups site for some discussions).

Among other things, these systems should find ways of integrating with the rest of the ecosystem in the organization in consonance with connectivist principles. These systems should be responsive to the needs for privacy, should be technologically open with well-defined interfaces and should store content metadata in ways that can support the learning process.

Above all, there will be many tensions – personal vs. organizational preferences/knowledge/data, diversity and autonomy vs. structure and control etc. – and connectivist systems must provide for ways to adjust that balance for each organization.

I believe, at the heart of these systems, will be the following design principles:

  • Open and extensible mash-up frameworks
  • Reliance on Open APIs to deliver mash-ups
  • Every object is made collaboration aware (X.0, technologically immune) irrespective of source
  • Spaces are multiple views around a cluster of object base types
  • Spaces are transferable as units and so are other dimensional views
  • All resources are associatively and progressively connected through metadata
  • Architecture builds in dynamic any-to-any connections while allowing any combination or view/perspective aggregation of X.0 objects
  • NBTs (Network Based Training) will make possible persistent learning and knowledge management environments
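As a rough sketch of the metadata principle in that list (the resource names and tags below are my own invented examples, not part of any existing platform), connections between objects can be derived dynamically from shared metadata rather than stored as hard-wired links:

```python
# Toy sketch: objects from any source carry metadata, and any-to-any
# connections are computed from what they share, not stored as fixed links.
from itertools import combinations

resources = {
    "blog_post": {"tags": {"plenk2010", "ple", "connectivism"}},
    "recording": {"tags": {"plenk2010", "mooc"}},
    "mindmap":   {"tags": {"ple", "curation"}},
}

def connections(resources):
    """Derive dynamic any-to-any links between resources sharing metadata."""
    links = {}
    for (a, ra), (b, rb) in combinations(resources.items(), 2):
        shared = ra["tags"] & rb["tags"]
        if shared:
            links[(a, b)] = shared
    return links

links = connections(resources)
# e.g. links[("blog_post", "recording")] == {"plenk2010"}
```

Adding a new resource, or changing its metadata, reshapes the connection graph automatically, which is the sense in which such a system stays “associatively and progressively connected”.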

Read Full Post »

Yesterday’s session seemed to be interesting. I missed it but was catching up on the recording. One part of it, around Curation (at least where it initially started), was especially interesting, not only from the point of view of what was being discussed, but also as an interesting example of the anatomy of the “narrative discussion” happening over the microphone and chat.
Disclaimer: I have tried to piece together, part-transcription, part my own interpretation, the discussion and debate. Please do correct me if I have misrepresented, misheard, ignored or inadequately/inaccurately represented a point of view.

The discussion was essentially between Dave Cormier, George Siemens and Stephen Downes, although there were many contributions from other people like Rita, Al, Alan, Asif, Bruce, jpaz, Dawn, Graham and others. Here goes.

Dave Cormier: happy about the curatorial activities like using ManyEyes etc.

George Siemens: creating is a sense-making activity – blog post, mindmap, curation activity. To what degree in a course should the facilitators lead – the student’s role as a creator of resources vs. the educator’s role as a curator? Perhaps facilitators, since they are more connected, can be relied upon to present curated artifacts for further discussion.

Stephen Downes: a problem with the curator-facilitator role? Can only facilitators curate?

George Siemens: No, everybody should curate and demonstrate their own viewpoints and perspectives, but for the reason of being better connected with the topic under discussion, in reality (not potential), facilitators can lead. Impact of the facilitator curation would be more consequential. This is because, right or wrong, the facilitator is expected to have a broad grasp of the topic and participants may not have the same grasp.

Dave Cormier: That’s a lot of expectation and presumption. Why (asks George)? George has more social power (status, reputation) in the community. Isn’t that what this boils down to?

George Siemens: Dave has shifted the discussion a bit, importantly to the distinction between status and power/influence/pull. There are certain things that status may afford to you and raise expectations of you – assessing, leading students – but influence is something that goes beyond that traditional role. The curatorial role would become more influential if you are more connected. And that happens if participation is not equal by all participants in a course. In a network, in theory, there would be greater equality.

George Siemens: In this course, look at the talk time by the moderators. By virtue of the amount of time, moderators would likely have a more important role in shaping the conversation (someone mentioned back channels as an influence that can change this too)

Stephen Downes: Since other moderators talked more, does it mean they are more influential?

George Siemens: Given the time we have had with the microphone, I would say that we have hashed out the topic and shaped the discussion. That may not mean that we have had greater influence on the course as a whole. Also, now that Stephen is asking questions, it gives an opportunity to further enhance the discussion. “The curatorial dimension is that the voices that are being heard are the ones that are shaping the discussion.”

Stephen Downes: A lot of work around network theory has been done. Let us look at Power Laws – influence concentrated on the spike while the long tail contains the regular types of people with low interactions. People at the top have viewpoints and influence that stands out. This is an example of an unstable network. In a stable network, you would see a straight line with more equality. Stability is where the network is resistant to cascade phenomena – phenomena where a small effect gets replicated and amplified in a cascading fashion.

Chat: George Siemens: How can you design a network? You are addressing small worlds, Stephen 

Stephen Downes: Design of a stable network should provide for open-ness, diversity etc. unlike the current (Elluminate) environment

George Siemens (and others): In reality, all have access to the microphone, active back channel exists and there is a cross-referencing of content such as blog posts; why do we want a network to be stable as a virtue; the majority of networks are unstable

Chat: Alan Cooper: Why is network “stability” a virtue? Stephen Downes: Network stability is a virtue because only stable networks can be dynamic – unstable networks, that experience cascade phenomena, revert to a configuration in which every node has the same state – and then becomes inert, and dead.

Chat: Al Pedrazzoli: But the majority of the networks are unstable. Stephen Downes: In living networks – e.g., humans, trees, etc. – there are physical constraints that limit the size of the big spike. In artificial – ‘scale free’ – networks (like financial systems) there are no such limitations.

Stephen Downes: But it is a question of perception that we are up against – we bring our histories in with the perception that there ought to be a loudest voice and this is what we must address in the design for a connectivist course. So bringing it back, curation leads to structures for authority, for the loudest voice. Journalism is close to what I am thinking about, where one does make value judgements but one is more interested in the analysis and assessments that follow.

George Siemens: Let’s talk about the PLE/PLN ideas. Stephen sits on the spike in the power law. That is a deserved role given his work across a decade. A new blogger will be at the low end of the tail. It would be unfair to compare the two. Now Stephen compares education with social justice and social reform, which I don’t disagree with. In reality though, we would encounter power laws more naturally than stable networks. As an example, Stephen may be at the long tail when it comes to frogs (Stephen disagrees), so you really play different roles. Depending upon the context, the role and position (and thus the influence) will vary. We don’t give media, newspapers or teachers the same position/status for everything, but choose among them.

Dave Cormier: We may want to move from unstable to stable configurations. Talking about this discussion is not necessarily indicative. The format of a narrative discussion does not allow for a hundred separate voices to be talking at the same time (GoogleDocs is a different format, an example of how technology controls things) – it is just not technology but also human nature that we can’t have 50 people talking at the same time and have a useful discussion. I give a lot of weight to what people in my network comment. It is important to consider taking on more than a single role to start moving towards stable networks.

George Siemens: This discussion is bigger than what we can handle in this session, also given the amount of work done by people like Watts and Strogatz. Let’s move on.

…and so it went. For me, an important piece of the conversation was the reinforcement that the stable vs. unstable networks tension is not just about technology or collaboration but also, more broadly, about ideas of equality and justice, however close or far we could be in relation to “designing” or wanting to design a stable network.
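The stable vs. unstable distinction can be illustrated with a toy simulation of my own (not something presented in the session): in a “spike” network, a change starting at the hub cascades to everyone, while the same change starting at a tail node goes nowhere.

```python
# Directed edges model "who is heard by whom": the hub broadcasts to all.
def cascade(adjacency, seed):
    """Spread a change from `seed` along directed edges until it stops."""
    reached, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for neighbour in adjacency.get(node, []):
            if neighbour not in reached:
                reached.add(neighbour)
                frontier.append(neighbour)
    return reached

# Power-law-like "spike": node 0 (the loudest voice) reaches all ten others.
spike = {0: list(range(1, 11))}

assert len(cascade(spike, 0)) == 11  # the hub's change cascades network-wide
assert cascade(spike, 3) == {3}      # a tail node's change reaches no one
```

A more equal (stable) network would flatten those reach numbers, which is exactly the equality-and-justice reading of the design question.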

Another important takeaway, from the learning standpoint, is that the challenge is to build systems and practices that can allow a hundred different voices to speak all at once and not end up with a useless cacophony.

But I think this discussion was especially interesting because we are also debating how future PLEs/PLNs should look – what affordances they should have, as a collective research practice that is PLENK2010, and curation may be an important part of the deal.

Read Full Post »

This is my first post for PLENK2010 and I am glad to be involved in this discussion. Thanks to the MOOC organizers for setting this up.

I think of PLEs as Operating Systems just like regular operating systems are for computer users. In fact, I call the PLE a LearnOS.

Thinking of a PLE as a LearnOS helps me get past the initial comprehension of what it can contain, such as tools, resources and connections, as well as how it is deployed – PC, mobile and cloud. I can then move on to think about how learning will occur in this LearnOS by asking not only how the LearnOS can be organized to support my learning (feed aggregation, Twitter tags and the like) in a given context, or how my LearnOS is connected to other LearnOS-es out there (PLNs), but also how my LearnOS can adapt to my learning contexts and my learning needs.

That is basically asking questions such as those relating to personalization (both the “how can I personalize?” question and the “how can the system know who I am?” question), learning environment configuration (how can I configure the environment to learn and perform in the best possible manner) and assessment (how can I assess my learning within a distributed environment of LearnOS-es).
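A minimal sketch of the LearnOS metaphor as I am describing it (every class and method name here is my own invention): tools are installed per learning context, LearnOS-es connect to form a PLN, and what surfaces adapts to both.

```python
from dataclasses import dataclass, field

@dataclass
class LearnOS:
    owner: str
    tools: dict = field(default_factory=dict)    # context -> installed tools
    network: list = field(default_factory=list)  # connected LearnOS-es (PLN)

    def install(self, context, tool):
        self.tools.setdefault(context, []).append(tool)

    def connect(self, other):
        self.network.append(other)

    def surface(self, context):
        """Adapt to the current context: my tools plus what my PLN uses."""
        tools = list(self.tools.get(context, []))
        for peer in self.network:
            tools.extend(peer.tools.get(context, []))
        return tools

mine, yours = LearnOS("me"), LearnOS("you")
mine.install("plenk2010", "feed aggregator")
yours.install("plenk2010", "#plenk2010 Twitter tag")
mine.connect(yours)
# mine.surface("plenk2010") -> ["feed aggregator", "#plenk2010 Twitter tag"]
```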

Stephen’s take on it is to put together some fundamental dimensions of the PLE – resource profiles (profiling multiple data attributes), personal identity (linked to resource profiles), communities (that together create a combined description of an object), resource aggregators (which combine resources based on configuration of parameters to present to the user), repositories (moving beyond DOI registries and repositories to contain just educational objects) and resource production (authoring tools which may be multi-user and collaborative to create new content). These would come together in a PLE environment where rights, syndication (and things like authorization?) need to be common service level affordances. To achieve these, Stephen has identified six components – profiler, aggregator, editor, scaffolds (ways to design new forms of content potentially from existing sources – maybe going beyond just mashing content to create complex content such as games and simulations), services and recommender – each performing a distinct role in the PLE architecture.

Of these, scaffolds are a structured representation of content, a sort of database architecture of the data constituting the content, arranged in such a way as to yield one or more representations of that content (visual or otherwise). It is like saying that if I had sequence number, title and predecessor sequence number fields in a database table, I could easily generate a process workflow in many different visual formats. If I were to add a start date and an end date field to the same table, I could get a Gantt chart from the “data”. This is “data”, but about content, and you are putting together content in new forms, not directly but through views onto the fields constituting a form of content.
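The table example above can be sketched directly; the field names are just the ones used in the text, and the row contents are invented for illustration:

```python
# One "scaffold" table of content data yields two different content views.
rows = [
    {"seq": 1, "title": "Draft outline", "pred": None, "start": "10-01", "end": "10-03"},
    {"seq": 2, "title": "Peer review",   "pred": 1,    "start": "10-04", "end": "10-06"},
    {"seq": 3, "title": "Publish",       "pred": 2,    "start": "10-07", "end": "10-07"},
]

def workflow_view(rows):
    """Render the sequence/predecessor data as a process workflow."""
    ordered = sorted(rows, key=lambda r: r["seq"])
    return " -> ".join(r["title"] for r in ordered)

def gantt_view(rows):
    """Render the same rows as Gantt-style lines, using the date fields."""
    return [f'{r["title"]}: {r["start"]}..{r["end"]}' for r in rows]

print(workflow_view(rows))  # Draft outline -> Peer review -> Publish
```

The same rows drive both views; adding fields to the table enables new views without touching the existing ones.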

By Services, he means the relationships between PLEs, which when built over the scaffolds, can give rise to multiple types of collaboration. This part interests me significantly because at least a part of it implies that structured collaboration techniques could perhaps be accommodated in this layer. For example, what happens when your content table/database (definition of content elements) starts interacting with mine – it’s a new shared vocabulary necessary for collaboration (the promise of the semantic web).

I think it will be worthwhile to think of PLE servers, which as part of their job of bringing together communities among other things, reconcile these folksonomies as well.

Recommenders are going to be extremely important, both in terms of what they recommend and what they do not! And I think it makes sense to try to incorporate changing personal profile or resource profiles as an input to this system, not just look outward to the network, in the interests of personalization.
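To make that concrete, here is a hypothetical content-based sketch (titles and topics are invented for illustration) in which the changing personal profile, not just the outward network, drives the recommendations:

```python
def recommend(profile_interests, resources, top=2):
    """Rank resources by overlap with the learner's current interests."""
    scored = sorted(
        ((len(profile_interests & r["topics"]), r["title"]) for r in resources),
        reverse=True,
    )
    return [title for score, title in scored[:top] if score > 0]

resources = [
    {"title": "Intro to RSS",      "topics": {"aggregation", "feeds"}},
    {"title": "Power laws primer", "topics": {"networks", "graphs"}},
    {"title": "PLE design notes",  "topics": {"ple", "networks"}},
]

profile = {"networks", "ple"}         # today's interests
print(recommend(profile, resources))  # ['PLE design notes', 'Power laws primer']
```

What is not recommended (zero-overlap resources are filtered out) matters as much as what is, which is the point about recommenders above.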

Wilson et al. make a reference to existing ADL standards like SCORM. I think it is important to think about whether there can be standards (like DITA or S1000D or SKOS) that can be evolved for complex content in this framework. Connection coordination, symmetric relationships, individualized contexts, open standards, lightweight integration, open content, repurposing/re-use, and the personal & global scope characteristics are all important when thinking of an alternate design. I would think the PLE is improperly or inappropriately compared with VLEs (it’s like comparing apples to oranges; a better comparison would be a PLE or LearnOS server with a VLE).

I read Alec Couros’s distinction between PLE and PLN. To me it is rather like the difference between the Internet and the Web, inter-related and inter-dependent concepts. For Dave, it is rather the reverse, with PLEs being the holding environment for PLNs.

Read Full Post »

PLE Architecture

Rita Kop mentions Stephen Downes’ charter/vision for a PLE extending on from a discussion of critical literacies and the eXtended Web, building on Steve Wheeler’s Web 3.0, George Siemens’ xWeb and Stephen’s Web X, to which I would add some of my own thoughts from a couple of years ago on Learning X.0:

The components that were formulated in Stephen Downes’ vision for a PLE at the start of the PLE project of the National Research Council of Canada are the following:

  1. A personal profiler that would collect and store personal information.
  2. An information and resource aggregator to collect information and resources.
  3. Editors and publishers enabling people to produce and publish artifacts to aid the learning and interest of others.
  4. Helper applications that would provide the pedagogical backbone of the PLE and make connections with other internet services to help the learner make sense of information, applications and resources.
  5. Services of the learner’s choice.
  6. Recommenders of information and resources.

Interesting. Without really attempting to reverse engineer or second-guess Stephen’s thoughts, I think this is an evolutionary approach to designing the PLE. By that I mean an approach that says: look at the emerging technology, networks and ways people are learning and sharing, and create a solution that would mash up or cross-pollinate technology and context-sensitive “intelligent” recommendations.

I have mixed thoughts about this approach. On the one hand, the cross-pollination is perhaps inevitable in a personal learning environment (to various degrees, as evidenced by that discussion on xWeb…), but for me it doesn’t feel like it captures the entire scope that we are confronted with.

Rita points to one such “leak” in terms of critical literacies.

The reality, however, is different and research is available to show that not all adult learners are able to critically assess what they find online and might prefer to receive guidance from knowledgeable others. There is also research available to show how difficult it is for anybody to reach and access a deep level of information by using search engines.

I have not seen the research, but it seems to be confirmation of an intuitive feeling that I have. Particularly in India, there is a culture of very strong “touch and feel” in almost all spheres of life – it is difficult to substitute the “guru” as the guide. And since search engines are what they are, architecturally, the latter finding also seems intuitive.

The other “leak” I feel is fundamental. “Helper applications that would provide the pedagogical backbone” does not sound quite right. These need to be “core” infrastructure for the PLE, inasmuch as pedagogy needs to reside at the core equally with technology and the learner. Of course, what the pedagogical backbone consists of is of prime importance – it is the reason for building a PLE as opposed to a PageFlakes.

One more concern is with “personal information”. How is personal information defined? Is it defined as your core demographics, interests and preferences, or as your actions as implicitly recorded by a search engine, or as the log of your learning activities captured explicitly by an intelligent system, or is it a combination or extension of that into your professional life? As a corollary, in the context of the PLE, how is that information useful for recommender systems except in an information management and presentation algorithm?

Again, integrating “services of the learner’s choice” presents problems. It’s like saying I could add on a service without worrying about how it helps integrate with my learning in a particular context – sort of like a widget that displays an aggregated feed. Having context built in, and some pedagogy in place within the context of a PLE, is important, and it will determine the rules that services will have to follow if they want to integrate into the learning experience. Otherwise, it becomes just a site where various services coexist.

I am not sure what the alternative is to Stephen’s vision, but the above comments have been my starting points when thinking about PLEs.

Read Full Post »
