
Posts Tagged ‘plenk2010’

SCORM X.0

Ben Clark from Project TinCan reached out and responded to my last post on SCORM. They have an amazing platform – not only have they been chartered with researching what the next generation of eLearning runtime communication should look like, but they have also employed a cool tool called UserVoice to crowdsource ideas and opinions. TinCan is managed by Rustici Software under a Broad Agency Announcement (BAA) from ADL.

ADL’s Future Learning Experience Project, launching soon, is focused on exploring an Experience API (with TinCan and Letsi) and a harmonized CMI data model (with AICC). Letsi seems to be a really interesting initiative too – they started looking at SCORM 2.0 in 2008 and have a wiki documenting the open effort of a large number of experts worldwide.
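To make the runtime-communication idea concrete, here is a minimal sketch of an activity statement in the actor/verb/object shape that the TinCan discussions seem to gravitate towards. The property names are my own illustration, not a published specification:

```typescript
// Illustrative only: an "experience statement" in the actor/verb/object
// shape being discussed for the Experience API. Property names are my
// guesses, not part of any final spec.
interface ExperienceStatement {
  actor: { name: string; email: string };  // who did it
  verb: string;                            // what they did
  object: { id: string; name: string };    // what they did it to
  timestamp: string;                       // when (ISO 8601)
}

const statement: ExperienceStatement = {
  actor: { name: "Viplav", email: "viplav@example.com" },
  verb: "commented",
  object: {
    id: "http://example.com/plenk2010/week5",
    name: "PLENK2010 Week 5 Discussion",
  },
  timestamp: new Date().toISOString(),
};

// Unlike SCORM's LMS-bound CMI calls, a statement like this could be sent
// to any reporting service, from any context – formal or informal.
console.log(JSON.stringify(statement, null, 2));
```

The appeal of this shape, to my mind, is that it decouples the record of learning activity from the system that launched the content.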

This is exciting work that has a direct linkage (or should have) with the Personal Learning Environments we have been discussing in PLENK. PLENK-ers should look at it because it is trying to establish general frameworks for formal AND informal, linear and non-linear, distributed, web X.0, personal and community, and mobile learning. Our current work in this area can directly influence how that future landscape evolves. I personally need to deep-dive into the amazing mass of materials and ideas to get a proper appreciation of the good work these folks are doing. Kudos!

Read Full Post »

By far the most definitive week of discussions around PLENK for me, this week’s Friday discussion is worth multiple rounds of further investigation and discussion. I came in a trifle late, and it was difficult to catch up without a starting context, so I went through the recording later. Here are my notes and deductions – my version of the conversation between George and Stephen. As always, please feel free to add to or correct me on anything I missed or misinterpreted.

Stephen Downes

  • There is a tool-selection or dashboard approach to classifying the technology experience
  • There is no great application out there that allows me to read and write the way gRSShopper does. This is the workflow approach: we need to model workflow and provide end-to-end functionality, and that is the most daunting piece.
  • Should we be looking at a theory of everything (like an atlas in geography or set theory in mathematics)? Technology will evolve over time, but the core patterns of use may not (or, in fact, they may).
  • Is there a way to hide the modalities, so that we focus on the core? What are these core ideas? Personal autonomy, distributed knowledge and social learning. There are frameworks, like the 21st century skills frameworks, but these are very widely fragmented. I would add pattern recognition as a fundamental skill – is the optimal tool one based on network theory and pattern recognition?
  • Machine analysis can give us a syntax. The human side would give us semantics.
  • Can we figure out, in technological terms, how humans derive meaning? In the neurological sense, it is a very organic process that evolves over time, not intentional or deliberate, with each new experience creating more understanding.
  • Is the tool of everything going to be a pattern recognition tool?

George Siemens

  • First-time adoption of tools is difficult, not because of the tools, but because of concepts. This is where companies like MS or Facebook helped, by aggregating functionality and establishing common ways of completing standard tasks.
  • Tools are available, but the level of integration is too low at this point. With connective specialization, it is each to her own preference, which at the point of adoption only adds to the confusion.
  • Do we need a tool of everything or do we need a way to build capacity?
  • The theory of everything: maybe with a combination of critical literacies and attributes or ideas of the disciplines?
  • The hiding of modalities is important.
  • There are two dimensions to pattern recognition – technological and human. The technological example would be reading through a mass of data vs. navigating a structured analysis of that data. On the human side, Learning Analytics tools provide valuable patterns of use. That is what computing can do, and visualization is going to be very important.
  • That does not mean that technology will be able to model personal or network use of the resources, but technology can help.
  • We need to have a balance between what a computer does well and a human does well (form vs. meaning).
  • Experts and novices think differently – experts think in patterns and novices think sequentially, or, as Cris2B put it, plan ahead vs. plan backwards. Conceptually, once some patterns and some context are built up, we are able to recognize more complex patterns.

My 2 cents.

I think that we must first start with presentation and analysis (as best as the computer can visualize in a simple way) and let humans and our networks derive the meaning. This is what I hope an NBT will achieve.
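As a toy illustration of this split – the machine surfaces form, the human supplies meaning – here is a sketch that merely counts tag co-occurrences across a set of posts and presents the resulting pattern without interpreting it. All names and data here are hypothetical:

```typescript
// A toy "syntax" pass: count how often pairs of tags co-occur across posts.
// The machine surfaces the pattern; deciding what the pattern *means* is
// left to the human reading the output. All data is made up.
type Post = { title: string; tags: string[] };

const posts: Post[] = [
  { title: "PLEs and autonomy", tags: ["ple", "autonomy"] },
  { title: "Networks and patterns", tags: ["network", "pattern", "ple"] },
  { title: "Pattern recognition", tags: ["pattern", "autonomy"] },
];

const pairCounts = new Map<string, number>();
for (const post of posts) {
  const tags = [...post.tags].sort();
  for (let i = 0; i < tags.length; i++) {
    for (let j = i + 1; j < tags.length; j++) {
      const key = `${tags[i]} + ${tags[j]}`;
      pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
    }
  }
}

// Present, don't interpret: strongest co-occurrences first.
[...pairCounts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([pair, count]) => console.log(`${pair}: ${count}`));
```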

Maybe at some point, the insight from how humans use that information for semantics, through reflection and practice, will become progressively templatized as we understand or build tools and processes that can model how humans function – how we evolve from novices to experts in an area. I call this Native Collaboration and see it permeating every function in learning.

The discussions are fast evolving to a stage where formal models of Native Collaboration (which attempts to model, functionally and technologically, how we learn) and NBT (my terminology for Network Based Training, an evolution from Web Based Training or WBT) will emerge. In such a model, the NBT environment encapsulates the modalities in a fairly standardized manner while allowing personal autonomy, and includes specific connectivist techniques for Native Collaboration. This is really exciting!



Read Full Post »

PLE/N Tools

A really nice collection of links for this week’s #PLENK2010 discussions. I especially liked Wilson’s Patterns of personal learning environments, which looks at patterns of use and activity in personal learning tools and learning networks, revising a previous approach that was very functional and tool-specific.

One of the ongoing challenges I have is with the constant comparison between the LMS and the PLE, which, I happen to think, is an apples-to-oranges comparison. They serve different needs and are located differently on the spectrum between self-learning and managed learning (if there is such a phrase). The MOOC and the LMS are comparable, as are NBTs (which I define as Network Based Training, the natural networked learning successors to WBTs) and PLEs.

Let us picture this. The LMS is used to launch a WBT course. The course pops up in a player, which is really a navigation shell that acts as a data conduit between the WBT and the LMS. Suppose the LMS is aware of learning networks and personal learning tools (with blog, wiki, Flickr, connection hubbing, discourse monitoring and other affordances provided by whatever mechanism – web services, XCRI…), and the WBT is just base reference material, not unlike the Daily in this MOOC.

The player could then be programmed to act as a conduit between the WBT and the network or personal learning tools (people, resources, context, conversation, bookmarking services). Sort of a SCORM for networked learning environments.
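To make the thought experiment concrete, here is a hypothetical sketch of what such a player-as-conduit might expose: SCORM-style tracking calls alongside network affordances. Every name here is my own invention, not part of any standard:

```typescript
// Hypothetical NBT player interface: SCORM-style plumbing on one side,
// network/personal-tool affordances on the other. None of these names
// come from an actual standard; this is a thought experiment.
interface NbtRuntime {
  // Familiar SCORM-like conduit between content and the LMS
  getValue(element: string): string;
  setValue(element: string, value: string): void;
  commit(): void;

  // Network-aware affordances the same conduit could offer
  postToBlog(title: string, body: string): Promise<void>;
  shareBookmark(url: string, tags: string[]): Promise<void>;
  notifyNetwork(event: { verb: string; object: string }): Promise<void>;
}

// A WBT page becomes an NBT page the moment it can do this:
async function onSectionComplete(runtime: NbtRuntime) {
  runtime.setValue("cmi.location", "section-3"); // classic tracking
  runtime.commit();
  await runtime.shareBookmark(                   // networked learning
    "http://example.com/course/section-3",
    ["plenk2010", "nbt"],
  );
}
```

The design point is that the content never talks to the LMS or the network directly; the player mediates both, just as the SCORM API adapter does today.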

What would you call the WBT then? An NBT.

Would the PLE look similar to an NBT? Yes, it would resemble a slice of the PLE: a workspace that we organize around a theme that interests us. Conversely, the NBT could be conceived of as a combination of slices of many different PLEs – in fact, as many as there are learners enrolled in the NBT.

But the NBT would necessarily be a more constrained, parameterized environment, designed or seeded by a network weaver, an atelier – the new avatar of the teacher – and adapted and grown by the learners, present and past. The PLE would grow unfettered, the whole being greater than the sum of the individual slices.

Most of the discussion, even in Wilson’s paper, ultimately centres on the tools: what can tools do to present the solution to a pattern? In fact, almost every solution is expressed in technological terms (notice how many times the word “tool” appears in the first line of each solution).

It is almost as if technology is the master problem solver for every pattern of learning, but that may just be me.

I would rather focus on Critical Literacies. On having reasons. Just as I would not count an NBT operating in an LMS environment as a true NBT unless it is truly architected as a networked-learning-aware solution from the ground up, rather than pasted onto a WBT as a quick fix.

And that is perhaps why I would choose to take a radical stand – PLE/N tools do not yet exist. I would like to take you back to how PLEs were defined in Week 1:

A Personal Learning Environment is more of a concept than a particular toolset – it doesn’t require the use of any particular tool and is defined by general concepts like: distributed, personal, open, learner autonomy.

and for PLNs:

A PLN is a reflection of social and information networks (and analysis methods).

We are confusing our current lists of PLE/N tools with the concept or the method – like trying to measure the length of a moving snake with a tape measure, or the volume of a gust of air by suddenly clenching a fist.

By far the most important attribute of the toolset for a PLE/N, if you can call it that, would be its complete invisibility. It would be implicit for learners in the way it has been designed. It is then that we will be able to project our personal aura onto it and make it personal: as open as we are, as connectedly aware as we want to be (or can be), and as autonomous as we will allow it to be.

And that will also take a fundamental re-architecture of the way we conceive of learning resources: away from resources as objects or networks, towards living and dynamic forms that reflect our social and information networks.

More of a hard left than a gentle meandering this one, would you say?

Read Full Post »

New Literacies

I listened to Will Richardson’s session on PLENK2010 this past Wednesday, where he brought up NCTE‘s definition of critical literacies:

  • Develop proficiency with the tools of technology
  • Build relationships with others to pose and solve problems collaboratively and cross-culturally
  • Design and share information for global communities to meet a variety of purposes
  • Manage, analyze and synthesize multiple streams of simultaneous information
  • Create, critique, analyze, and evaluate multi-media texts
  • Attend to the ethical responsibilities required by these complex environments

He said that tools become the central point of the conversation, without regard to context. And in fact, he did not think the high school graduates he knew would make the grade if these were the certifying literacies.

This reminds me of a vision statement I read recently, from the NAE (Committee on the Engineer of 2020, Committee on Engineering Education, National Academy of Engineering). Describing the Engineer of 2020, the enlightened authors write:

What attributes will the engineer of 2020 have? He or she will aspire to have the ingenuity of Lillian Gilbreth, the problem-solving capabilities of Gordon Moore, the scientific insight of Albert Einstein, the creativity of Pablo Picasso, the determination of the Wright brothers, the leadership abilities of Bill Gates, the conscience of Eleanor Roosevelt, the vision of Martin Luther King, and the curiosity and wonder of our grandchildren.

Yeah. Right.

Jenny started the discussion on ethical responsibilities. I think it is important to evaluate ethical responsibilities in the traditional context as well, not just for the issues that tools such as Facebook are creating on the Internet. The traditional system throws up just as many horrific examples of ethical violations.

Read Full Post »

The debate at the Oxford Union this Wednesday on informal learning was very interesting, more so because some wonderful people on Twitter were relaying it blow by blow, and because I was testing my multi-tasking skills by juggling the Twitter conversation and the PLENK session!

The motion was: The House believes that technology-based informal learning is more style than substance.

Dr. Allison Rossett, Nancy Lewis and Mark Doughty argued for the motion:

Informal learning is appealing and exciting but it has no authoritative voices, no assessments or guidance, and therefore no substance. The motion isn’t about us and how we like to learn. It’s about our need to know that the organisations and people we trust know what they are doing. Informal learning doesn’t provide that. It has no thermostat or control. We all love technology, but on the scale of substance and style, it’s still all about style. If you care about organisations, be they of pilots, doctors or cooks, if you care about performance then we urge you, support the motion.

Prof. Dutton, Jay Cross and David Wilson argued against it:

Informal learning is not trivial; it is in every corner of institutions. People in the room are using technology to check facts as we speak. Technology-based informal learning enhances information and reach. It makes experts more accountable and raises the bar. And for parts of the developing world it is the only learning available. Therefore, we urge you to vote against the motion.

The main arguments were:

For the motion (more style than substance):

  • is viewed by managers as a cheaper alternative, but gives learning managers no formal measure of effectiveness
  • need assurance that our doctors are medical experts and our pilots can fly a plane
  • formal gets things done
  • not well-researched
  • no north star to guide, no common understanding of what it is
  • does not work when failure is not an option (mission critical)

Against the motion (substance and style):

  • Internet has become a first recourse for information
  • institutions need to learn to harness the network
  • (Cross) informal learning co-exists with formal learning on a continuum; the separation is only visible inside school learning
  • it is not the tools but the collaborative activities that will sustain and evolve
  • it is part of work; we do not need to separate it

The side arguing against won comprehensively, and there is an online vote if you want to add your weight. I think there are some important pieces to this debate.

One, learning does occur informally, whether with the use of technology or without it.

Two, by definition it is informal: loosely structured (if at all), not premeditated or goal-driven (nobody says, “let me go to the water cooler to get agreement on the next strategic shift in the business”). It is a space where data is not as important as the intelligence in the conversation, as the alignment between connections.

It is a space where in principle decisions may occur or new ideas may emerge or new connections may be made. It is a space that can trigger a lot of formal work. And since it is informal it may not always be serious.

Three, the separate categories of formal and informal only make sense when one is trying to push out the other as an equally or more effective way of learning. To make that claim, informal learning has to defend itself against vague arguments about mission-criticality, dissipated theorizing and non-existent assessment methods.

I say vague arguments because claiming that a doctor trained by informal methods (if any are identified) will fail (or succeed) in becoming a medical expert is an improperly framed, populist argument.

It assumes a distinct category for formal and for informal. It assumes that informal learning is all non-serious, undirected chatter that depends on serendipitous events to become or be considered meaningful. It assumes, on the other side, that formal learning undisputedly generates medical experts or pilots – that every site of formal learning is serious, directed and purposeful.

It also throws out any chance of even considering that informal learning plays a huge role in the organization or in school learning. In fact, the argument that informal learning does not work when failure is not an option precludes the very idea of allowing mistakes to happen during formal learning (as Sir Ken Robinson argues in his TED Talk, “Do Schools Kill Creativity?”).

I would vote against the House on this one, and also chasten it for framing the motion the way it stands – more to provoke extreme reactions than to promote constructive debate.

Read Full Post »

With a little help from Jatinder, a kindred soul in the making of simulators that happen to attract Brandon Hall Awards, I tried to visualize a model of PLEs operating in a connective environment. It started with a reply I made to Janet and Carmen on what I think it should be:

…let us contrast the MOOC environment with an LMS. Can we think of this environment as self-configuring instead of being configured by an administrator? How about when a person wants to join a “course”, she gives rights to the MOOC server to “pull” content she wants to share in the context of the course into the course environment…the content stays with her, but instead of (or in addition to) the LMS “pushing” some general stuff, it configures a learning space based on the expertise and contributions of its members?

If I join a space or a conversation, I bring not only my personal self but also my blog, my Zotero collection, my Diigo links, my tweets, my network and so on – and I decide to bring in a relevant “slice” of these and other influences to the course or research I am taking on. Maybe such environments understand a shared semantic vocabulary for the subject, so that they can quickly organize the combined course network without my explicit instructions. Wouldn’t this be a self-organizing, emergent ecology more in line with Connectivism, and a way to differentiate from an LMS?
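A minimal sketch of this “pull a slice” idea, assuming each participant exposes a feed and tags the items she wants to share with the course; the feed shape, the tag convention and all names are assumptions for illustration:

```typescript
// Sketch: a course environment "pulls" each participant's shared slice by
// filtering their feeds on the course tag. The content stays with its
// owners; the course space is just the aggregation of slices.
type FeedItem = { title: string; link: string; tags: string[] };
type Participant = { name: string; feedUrl: string };

const COURSE_TAG = "plenk2010";

// Stand-in for a real RSS/Atom fetch-and-parse step; assumes a JSON feed.
async function fetchFeed(url: string): Promise<FeedItem[]> {
  const response = await fetch(url);
  return (await response.json()) as FeedItem[];
}

async function buildCourseSpace(participants: Participant[]): Promise<FeedItem[]> {
  const slices = await Promise.all(
    participants.map(async (p) => {
      const items = await fetchFeed(p.feedUrl);
      // Only the slice the participant chose to tag for the course
      return items.filter((item) => item.tags.includes(COURSE_TAG));
    }),
  );
  return slices.flat();
}
```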

The first visualization I thought of was puddles and rain. Simply put, when rain falls, puddles of water form. Some puddles mix with other puddles, self-organizing to form streams; some stay quietly content to remain aloof and disconnected. Depending upon how much it rains and what surfaces receive the rainfall, we will see patterns. There may be a point of bifurcation when the entire surface gets covered. When the rain stops and the puddles start drying, a pattern of decay forms, quite unlike the pattern of growth, which was an emergent, complex pattern to start with.

So replace the puddles with PLEs, the surface and environment with the network (a super-PLE?) ecology, and the rain with a certain eventedness (a MOOC?), and you have my picture of what goes on in connective learning. Weird idea? I sincerely hope not.

So I thought I would attempt a better visualization with Jatinder’s kind help. Picture this (disclaimer: no connection is intended between the names of the various people in my network on the visual and the social connotations of the word butterfly – think instead of the effect of a butterfly flapping its wings….):

(Images courtesy of various artists on the web, and in particular the incredible post here – did you know the Fibonacci sequence appears in the sunflower?)

This could equally be an environment unlike the above, with cacti and barren deserts instead – a metaphor, perhaps, for rigid institutional environments. The point is that each of the elements will feed on the others in complex, uncontrollable ways, still with distinct patterns. Of course, Stephen invoked the knowledge-as-a-plant, meant-to-be-grown metaphor when talking about connectionist networks. I am not suggesting that one plant is altogether separate from another and that knowledge is siloed; they will have dependencies and some common roots. But each plant will have a tapestry of complex patterns to reveal, strands of knowledge, and butterflies will cross-pollinate.

But it is a picture where PLEs are an extension of the self – disembodied, but in many ways a natural extension, making us a distributed entity operating as a singularity(?). I like this way of thinking (although the quickly engineered visual may not make the grade), and I think this way of visualizing gives us credible alternatives to the way LMSs are built today.

As always, would love to know what you think!

Read Full Post »

I missed Janet Clarey’s great interactive talk this Wednesday but caught up with the recording. I think it was a great session on many counts. Janet brings her great experience in Corporate Learning Development research at Brandon Hall into the sessions she leads. Thanks, Janet!

The main questions that she addressed were:

  1. What are Web 1.0/2.0 learning models/trends? Which theories are they informed by? What data do they collect and manage?
  2. How can innovations like Augmented Reality and Foursquare be used to support learning?
  3. Can informal learning really work in the face of regulatory requirements or mission critical situations?
  4. Take a social learning and networking enabled LMS like SABA. How is it really different from what we are doing in the open MOOCs?
  5. Can there be a hybrid model spanning eLearning 1.0 and 2.0?

Very interesting questions, and even more interesting responses from participants. Let’s back up a bit. Responding to a July 2010 discussion around Critical Literacies and the eXtended Web, I looked at what my starting points for a PLE would be and why we need to look closely at what the PLE architecture should be based upon. More recently, as George mentioned, there is an extremely interesting discussion going on in the Learning Analytics Google Group – I do recommend that you go through the bibliography and Rebecca’s summary of the discussions.

As background, there is also an interesting discussion I had early last year with Rob Wilkins and Janet Clarey on LMSs, assessments and RoI, after Janet’s set of great interviews with leading LMS providers, where I argue that LMSs can’t be social as an add-on (a keep-up-with-the-trends thought, or doing eLearning 2.0 the eLearning 1.0 way) and that current LMS metrics are woefully inadequate to provide any strong indicator of learning or performance.

Back to Janet’s talk and the first question. Her slide on eLearning 1.0 emphasizes technology as a support for most of the eLearning dimensions in use today – courses (self-paced and virtual instructor-led), certification, LMS/LCMS, authoring tools and so on. Participants responded to her “Informed by what theory?” question by evoking concepts and theories such as cognitivism, constructivism and constructionism, and by characterizing eLearning 1.0 as “sage-on-stage”, a body of knowledge, etc.

I have made this point before, but it is hard for me to think of LMSs in the 1.0 era as anything but tools for learning automation, which was the pressing need as organizations started adopting technology to manage learning. For this reason, it is also a little superficial to ask what theories informed eLearning 1.0 supportive technology. The theories influenced the way content was designed and instruction delivered, rather than how the LMS or virtual classroom was built. I would instead put LMSs such as Moodle and LAMS, and platforms such as Mzinga’s OmniSocial, in the eLearning 2.0 category as supportive tools informed by theory. Janet’s subsequent question – what data are we collecting, reporting and analyzing in the 1.0 world – evoked the standard responses: time spent, scores, etc.

eLearning 2.0. I had problems with putting disruptive technology at the core of eLearning 2.0. While that may be an important factor, it can’t be the only thing at the core. I am also not sure that blended learning, mobile learning, P2P, 3D-immersive environments and “search learning” (whatever that is) would fall under eLearning 2.0 – which she also characterizes as “Self-serve. Social. Mobile.” – at least not the way we have been talking about it.

What theories inform eLearning 2.0? To my utter surprise, nobody put Connectivism up there (connectionist was the closest)! I think the data aspect, where I did get to see artifacts and relationships, would have benefited from some discussion around intelligent data (George came to it later in the session).

Next were a few slides on network maps, augmented reality and location-aware apps. I thought it was a good idea to provoke thought on how these tools could be used as part of the learning process. There are perhaps hundreds of ways to do that, and conjoining them with existing approaches/theories and design approaches is not very difficult. I believe Linked Data will play a massive role in distributed connective knowledge (but that is another story), as will serious gaming and simulation combined with these new technologies. Obviously, data acquisition and capture will also be enhanced (and there are privacy and ethical concerns around this).

George referred to the Semantic Web and Web 3.0. It is interesting to note the title of a post that Stephen wrote about three years back: “Why the semantic web will fail“. On what theories inform the eXtended Web, participant responses included marketing, monetization models, authority, self-watching vs. crowdsourcing, surveillance (someone suggested sousveillance) and personal learning. Steve LeBlanc asked for a list of differentiating characteristics; I would respond that these are the subjects of the PLENK2010 discussions – PLEs, MOOCs, Connectivism, intelligent data, the semantic web, Linked Data, the extension of the Internet into an Internet of things. Again, I think Connectivism would be an important influencing theory of the eXtended Web.

For me there are two important aspects to the data side of the eXtended Web: data trails (George) and sliced PLEs, and new forms of collaboration leading to new learning analytics (like Connection Holes) that can replace the traditional 1.0 methods and tools.

Can informal learning work in mission-critical situations or in situations that demand proof of regulatory compliance? For the former, yes, absolutely. Informal and connective models for learning and performance really succeed because they recognize that knowledge (and expertise) is distributed and that problem solving is essentially a process of connection-making.

For both, there is a larger question: what are we measuring? Regulatory compliance – organizations proving that employees spent time on and obtained passing scores in key topics such as sexual harassment at the workplace – is built at cross purposes with the aim of the regulations (say, that employees reflect on and practice sensitivity to, and abstinence from, sexual harassment at the workplace, and that companies need only submit proof of deviation, just as you have to let a software vendor know only if you are not license-compliant). Maybe the parochial measures prescribed by the legislation need to change, rather than our asserting that traditional formal eLearning provides an accurate measure and meets the objectives of the legislation.

The argument is carefully articulated by Stephen in his post Having Reasons, where he states:

The whole concept of ‘having reasons’ is probably the deepest challenge there is for connectivism, or for any theory of learning. We don’t want people simply to react instinctively to events, we want them to react on a reasonable (and hopefully rational) basis. At the same time, we are hoping to develop a degree of expertise so natural and effortless that it seems intuitive.

I think the question – although someone did answer it from one perspective – “will the ability to repair a nuclear reactor emerge from the water cooler?” is a horrifying and irresponsible one, intended to discredit the concept of informal learning. What if I flipped the question and asked “will the ability to repair a nuclear reactor come from learning online at your own pace?” – which discredits WBTs as a possible solution altogether. It is not a new question, and I think Jay Cross has addressed it somewhere too. It trivializes both the problem and the solution.

Janet also showed a learner home page in SABA and immediately compared its “technology” to the “technology” in the MOOC, asking how this is really different. I think that is where the disconnect is – you cannot put technology and the affordances of tools at the core, disruptive or not. It is also the reason I continuously state that current LMSs are building social learning add-ons, not rethinking from the ground up. Theory should inform not only how the technology works but also how learning happens. I know Stephen would have a mouthful to say on this as well (pity he was not there).

On whether the two generations can give rise to a hybrid, there are mixed opinions. Connectivism is a very young theory. Even as it started, the challenge was to put an implementation (practice) face on the theory. These pressures – to generate a pedagogy, an instructional design approach or practical guidance, among others – may prompt us to jump to a hybridization of the concepts.

But in a sense, we need to let this discussion evolve – the debate that my earlier post on constructivist and connectivist PLEs generated shows a healthy state on the road to resolving these practice challenges. As in the response on sense-making, among other comments on the PLE post (which I still have to respond to), Stephen is perhaps correct in assuming a pure, unadulterated stance on what connectivism and connective knowledge are and how they can change what we believe and practice in learning.

I struggle with it all the time, but I think a pure stance, with occasional intolerance, is much needed if we are to evolve to a state where the theory can widely inform practice.

Read Full Post »

