Archive for January, 2011



My experience with Learning Analytics started about ten years ago at egurucool.com, arguably India’s first large-scale online learning initiative for the K12 segment, where I served as CTO. Charged with building the technology framework to support over 20,000 users and over 12,000 hours of learning content across grades 9-12, we designed and built a complete LCMS with analytics.

The tools included distributed authoring, workflow, versioning, content management, content publishing, learning management, student performance reporting, customer relationship management and business analytics. The inherent re-use model let us deploy an on-the-fly configurable and ever-expanding list of products to the online web, captive school systems, print, dynamic assessment and franchise centre applications.

At the centre of the architecture was the concept of a content form. My logic was that content can take many forms of organization, from simple to complex types, but each content form would carry an associated tangible meaning. For example, a list of pre-requisites would have a form involving three elements – the pre-requisite sequence number, the statement and a brief explanation. A content form in its simplest version was a formatted text/image (FTI – rich text or HTML) fragment. So a pre-requisite would be a combination of a sequence number and two FTIs.
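
To make this concrete, here is a minimal sketch of what such a content form might look like as a data structure; the class and field names are illustrative, not the ones we actually used.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FTI:
    """A formatted text/image fragment – rich text or HTML."""
    body: str                                             # the rich text / HTML markup
    image_refs: List[str] = field(default_factory=list)   # optional image references

@dataclass
class Prerequisite:
    """The 'pre-requisite' content form: a sequence number and two FTIs."""
    sequence: int       # the pre-requisite sequence number
    statement: FTI      # the statement itself
    explanation: FTI    # a brief explanation
```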

The presentation layer, armed with its understanding of a content form, could choose to represent it in multiple visual ways depending on the target medium (web, print, custom). Similarly, a multiple choice question would be rendered as a complex content form, with the stem and options being FTIs. Each such content form was given a unique identification number that helped track interactions on, and development of, that artefact through the entire lifecycle – from authoring to use by the student.
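
Here is a rough sketch of how a presentation layer might use that knowledge, taking the multiple choice question form as an example; the renderer names, the markup and the ID scheme are assumptions made for illustration only.

```python
import uuid
from dataclasses import dataclass, field
from typing import List

FTI = str  # for this sketch, treat a formatted text/image fragment as an HTML string

@dataclass
class MultipleChoiceQuestion:
    """A multiple-choice-question content form: a stem and options, all FTIs."""
    stem: FTI
    options: List[FTI]
    form_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # tracked across the lifecycle

def render_web(q: MultipleChoiceQuestion) -> str:
    """Render the form as an HTML fragment for the web medium."""
    items = "".join(f"<li>{opt}</li>" for opt in q.options)
    return f'<div data-form-id="{q.form_id}"><p>{q.stem}</p><ol>{items}</ol></div>'

def render_print(q: MultipleChoiceQuestion) -> str:
    """Render the same form as plain text for a print pipeline."""
    lines = [q.stem] + [f"  ({chr(97 + i)}) {opt}" for i, opt in enumerate(q.options)]
    return "\n".join(lines)

# One content form, two target media:
q = MultipleChoiceQuestion(stem="What is 2 + 2?", options=["3", "4", "5"])
print(render_web(q))
print(render_print(q))
```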

A page of content, on the web, in a custom application or in print, could contain a fraction of a single content form or several complete ones. We developed over a million such content forms in the space of a year and a half.

Such an approach made it easy to assemble, disassemble and reuse content. With the presentation style separated from the data and its structure, the entire system became extremely flexible. Creating a test prep product involved writing some rules to extract only Practice Exercises and Online Tests from the database for a specific grade/class. It also made rapid deployment and upgrades of course material possible.
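
To show how thin such a product rule could be, here is a hypothetical filter over a content-form catalogue; the field names and type labels are made up for the sketch.

```python
# A product-assembly rule over a catalogue of content forms, assuming each form
# is tagged with a 'type' and a 'grade' (both attribute names are illustrative).
catalogue = [
    {"form_id": "cf-001", "type": "PracticeExercise", "grade": 10},
    {"form_id": "cf-002", "type": "Theory",           "grade": 10},
    {"form_id": "cf-003", "type": "OnlineTest",       "grade": 10},
    {"form_id": "cf-004", "type": "PracticeExercise", "grade": 12},
]

def assemble_test_prep(grade: int) -> list:
    """Rule: keep only Practice Exercises and Online Tests for the given grade."""
    wanted = {"PracticeExercise", "OnlineTest"}
    return [cf for cf in catalogue if cf["type"] in wanted and cf["grade"] == grade]

print(assemble_test_prep(10))   # cf-001 and cf-003
```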

An important factor was homogeneity of the input data in terms of raw formats. Today, a blog post is structurally indistinguishable from the content of an email message: each has a title, a URL/identifier, a content/body, tags and categories. It is only the packaging (the envelope) that differs. We had to take special care to preserve homogeneity of the raw format.
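
As a toy illustration of the same content carried in different envelopes, two raw formats could be normalised into one base record like this; the field names are assumptions for the sketch, not our actual schema.

```python
# Normalising two envelopes – a blog post and an email – into one base record.
def from_blog_post(post: dict) -> dict:
    return {
        "title": post["title"],
        "identifier": post["url"],
        "body": post["content"],
        "tags": post.get("tags", []),
    }

def from_email(message: dict) -> dict:
    return {
        "title": message["subject"],
        "identifier": message["message_id"],
        "body": message["body"],
        "tags": message.get("labels", []),
    }
```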

So things got difficult, often impossible, when we crossed format boundaries and were faced with the still unsolved question of content equivalence – how do we make homogeneous (or render to the same base denomination) content that lives in different formats? It is easy to start from a given format and think of converting it to others (start with text, convert to audio or even to animated video stories). But to take two existing objects in different formats and try to establish homogeneity between them is difficult, if it is desirable at all. We homogenized what we could, and left the rest embedded in its native format.

From the analytics point of view, the content form became the base dimension of our analysis. A content form could be a fraction of an online web page or could span several pages. Based on its association with the curriculum structure, we always knew basic statistics about who had accessed what, with what frequency, when and for how long. It was easier still to determine performance statistics from scored content forms such as online quizzes.
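
Because every access event carried the content form’s identifier, the basic usage statistics were straightforward to compute. A minimal sketch, assuming a log of (user, form, timestamp, seconds spent) events with invented names:

```python
from collections import defaultdict

# Toy access log: (user_id, form_id, timestamp, seconds_spent)
events = [
    ("student-7", "cf-001", "2001-08-01T10:00", 120),
    ("student-7", "cf-001", "2001-08-02T18:30", 90),
    ("student-9", "cf-003", "2001-08-02T19:00", 300),
]

usage = defaultdict(lambda: {"visits": 0, "seconds": 0})
for user_id, form_id, _when, seconds in events:
    key = (user_id, form_id)
    usage[key]["visits"] += 1         # how often this student touched this content form
    usage[key]["seconds"] += seconds  # and for how long in total

for (user_id, form_id), stats in usage.items():
    print(user_id, form_id, stats)
```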

It was also easy to build contextual searches based on the exact content form, in addition to searching the content itself. From the services/tools perspective, we had a series of tools, such as Ask an Expert, that could pick up specific context because we always knew where the student was. Interestingly, all this user activity was also made available to our CRM, which could run targeted queries to determine the level of activity across individuals or groups, thus providing actionable strategies for intervention. Our Oracle-based data warehouse contained all tracking and performance information and exposed multiple data cubes for analysis by the CRM team, teachers and SMEs.
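
The kind of targeted query the CRM could run was simple in principle; here is a toy version, with an invented activity measure and threshold.

```python
# Flag students whose learning activity falls below a threshold, as candidates
# for intervention (the threshold and field names are illustrative).
weekly_activity_minutes = {
    "student-7": 340,
    "student-9": 25,
    "student-12": 0,
}

INTERVENTION_THRESHOLD = 60  # minutes of activity per week

needs_follow_up = [student for student, minutes in weekly_activity_minutes.items()
                   if minutes < INTERVENTION_THRESHOLD]
print(needs_follow_up)       # candidates for a call or an email from the CRM team
```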

Of course, all these tools helped us provide an awesome dashboard for our students. They knew exactly how they were using the medium for learning. This was in addition to the performance reporting mechanisms for the tests they took. By the time NIIT bought us out in 2002, we had already started taking it to the next level in all directions. Had we continued working on the solutions we built, I think we would have benefited the most from the Web 2.0 / SoMe explosion.


Our digital life is being extended in multiple ways every day, not only by new software but also by new hardware and new experiences that merge the social, physical and technological worlds we inhabit. I have been trying to piece together a framework for describing where we are today from the perspective of designing future systems. This is important to me because I feel we are continuously responding to very small events in an over-hyped fashion.

Take, for example, About.me, which was acquired by AOL for millions of dollars. It is a simple concept, and definitely not a new one, but it is presented in a really nice format and did very well at launch (gaining 400K users in its launch phase). It is about establishing a collated personal identity on the web. Incrementally, the venture adds nothing to the state of innovation, but (at least in AOL’s mind) it adds a service to the existing offering, making it more attractive. What is additionally interesting is that they raised over US$400K as initial funding, which reminds us there is money waiting to be spent. It is not just the investors, but a large part of the worldwide social networks, that contribute to this success.

Different applications have been successful across social and professional networks such as LinkedIn, Facebook and Twitter (I am actually surprised that WordPress has not gone social in the same way). They have all been targeted at solving a specific need in a specific manner. If we were to think in terms of a generic breakup, the anatomy would consist of the following (a rough sketch in code follows the list):

  • Identity: This is both one’s own identification and also the identification of groups and networks we participate in. Each group and network is also a collection of identities. A related sub-dimension is Privacy.
  • Content: This is the second dimension and spans both user-generated content (self) and networked content, generated explicitly (e.g. through collaboration) or implicitly as a result of the activities we perform on the network. Related sub-dimensions are the experiences, skills and attitudes we need in order to contextualize (e.g. by location or service), aggregate, organize, present, propagate and personalize this content.
  • Analytics: This dimension refers to the aggregation, analysis, reporting and insight into the Identity and the Content dimensions.
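
One rough way to picture these three dimensions as a model; the classes and fields below are purely illustrative and do not describe any particular product’s API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Identity:
    """The Identity dimension: one's own identification plus groups and networks."""
    handle: str
    groups: List[str] = field(default_factory=list)  # groups / networks participated in
    privacy: str = "public"                          # the Privacy sub-dimension

@dataclass
class ContentItem:
    """The Content dimension: user-generated or networked content."""
    author: str                                             # ties the item back to an Identity
    body: str
    context: Dict[str, str] = field(default_factory=dict)   # e.g. location, service

def analytics(identities: List[Identity], items: List[ContentItem]) -> Dict[str, int]:
    """The Analytics dimension: aggregate and report over Identity and Content."""
    return {i.handle: sum(1 for c in items if c.author == i.handle) for i in identities}
```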

So far, we have not gone beyond these three dimensions. The paradigm is structurally similar to the pre-Web 2.0 days, when we talked about content, community, commerce and collaboration (the 4Cs), or later to Sramana Mitra’s Web 3.0 = 4Cs + P + VS formula (content, community, commerce, context, plus personalization, plus vertical search). So much has changed, yet this remains pretty much the same.

I am not sure there exist other dimensions. In the last five years, these three dimensions have been rethought with the focus on the network rather than on commerce, giving agency to individual expression and the freedom of free.
