My experience with Learning Analytics started about ten years ago at egurucool.com, arguably India's first large-scale online learning initiative for the K12 segment, where I was CTO. Charged with building the technology frameworks to support over 20,000 users and over 12,000 hours of learning content across grades 9-12, we designed and built a complete LCMS with analytics.
The tools included distributed authoring, workflow, versioning, content management, content publishing, learning management, student performance reporting, customer relationship management and business analytics. The inherent re-use model enabled us to deploy an on-the-fly configurable and expanding list of products across online web, captive school systems, print, dynamic assessment and franchise centre applications.
At the centre of the architecture was the concept of a content form. My logic was that content can take many forms of organization, from simple to complex types, but each content form would carry tangible, associated meaning. For example, a list of pre-requisites would have a form with three elements – the pre-requisite sequence number, the statement and a brief explanation. A content form in its simplest version was a formatted text/image (FTI – rich text or HTML) fragment. So a pre-requisite would be a combination of a sequence number and two FTIs.
The presentation layer, given its understanding of this content form, could decide to render it in multiple visual ways depending on the target medium (web, print, custom). Similarly, a multiple choice question would be a complex content form with the stem and options being FTIs. Each content form was given a unique identification number that let us track interactions with, and the development of, that artefact through the entire lifecycle – from authoring to use by the student.
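A minimal sketch of what such a data model might look like, in TypeScript for illustration only; the type and field names (FTI, ContentForm, Prerequisite, MultipleChoiceQuestion) are my shorthand here, not the names we actually used:

```typescript
// A formatted text/image fragment: the atomic unit of content.
interface FTI {
  format: "richtext" | "html";
  body: string; // the rich text or HTML payload
}

// Every content form carries a unique id so the artefact can be
// tracked from authoring all the way through to student use.
interface ContentForm {
  id: string;   // unique identification number
  kind: string; // e.g. "prerequisite", "mcq"
}

// A pre-requisite: a sequence number plus two FTIs.
interface Prerequisite extends ContentForm {
  kind: "prerequisite";
  sequence: number;
  statement: FTI;
  explanation: FTI;
}

// A multiple choice question: the stem and options are FTIs.
interface MultipleChoiceQuestion extends ContentForm {
  kind: "mcq";
  stem: FTI;
  options: FTI[];
  correctOption: number;
}
```

The presentation layer could then dispatch on the form's kind and the target medium to choose a visual treatment, without the data itself ever changing.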
A page of content, on the web/custom application or in print, could contain anything from a fraction of a content form to several of them. We developed over a million such content forms in the space of a year and a half.
Such an approach made it easy to assemble, disassemble and reuse content. With the presentation style separated from the data and the structure of the data, the entire system became extremely flexible. Creating a test prep product, for example, involved little more than writing rules to extract only Practice Exercises and Online tests from the database for a specific grade/class. It also made rapid deployment and upgrades of course material possible.
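As an illustration of the kind of rule involved – again a hedged sketch in TypeScript, with hypothetical names (FormRecord, ProductRule, selectForms) rather than our actual schema – a product definition could be little more than a filter over the content-form store:

```typescript
// Minimal local model for illustration.
interface FormRecord {
  id: string;
  kind: string;  // e.g. "practice-exercise", "online-test"
  grade: number; // curriculum grade/class the form belongs to
}

interface ProductRule {
  kinds: string[];
  grade: number;
}

// A test prep product for grade 10: only practice exercises and online tests.
const testPrep: ProductRule = {
  kinds: ["practice-exercise", "online-test"],
  grade: 10,
};

// Assembling the product is just a filter over the content-form store;
// presentation is decided later, per target medium.
function selectForms(store: FormRecord[], rule: ProductRule): FormRecord[] {
  return store.filter(
    (f) => rule.kinds.includes(f.kind) && f.grade === rule.grade
  );
}
```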
An important factor was homogeneity of input data in terms of raw formats. Today, a blog post is structurally indistinguishable from an email message: both have a title, a URL/identifier, content/body, tags and categories. It is the packaging (the envelope) that differs. We had to take special care to preserve homogeneity of the raw format.
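To make that concrete, a sketch of the idea (the type names are mine): the core is homogeneous, and only the envelope differs per medium:

```typescript
// The homogeneous raw format shared by both artefacts.
interface CoreContent {
  title: string;
  id: string; // URL or other identifier
  body: string;
  tags: string[];
  categories: string[];
}

// Only the envelope differs.
interface BlogPost {
  core: CoreContent;
  permalink: string;
  publishedAt: Date;
}

interface EmailMessage {
  core: CoreContent;
  from: string;
  to: string[];
  sentAt: Date;
}
```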
So things got difficult, often impossible, when we crossed format boundaries and were faced with the still unsolved question of content equivalence – how do we make homogeneous (or render to the same base denomination) content that exists in different formats? It is easy to start from a given format and convert to others (start with text, convert to audio or even animated video stories). But to take two different objects in different formats and try to establish homogeneity between them is difficult, if it is desirable at all. We homogenized what we could, and left the rest embedded in its native format.
From the point of view of analytics, the content form became a base dimension of our analytics. A content form could be a fraction of a web page or could span multiple pages. Based on its association with the curriculum structure, we always knew basic statistics about who had accessed what, with what frequency, when and for how long. It was even easier to determine performance statistics from scored content forms such as online quizzes.
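A hedged sketch of the kind of tracking record this implies – the field names are illustrative, not our original schema. Because every interaction is keyed by content-form id, user and time, the access statistics above reduce to simple aggregations:

```typescript
// One row per interaction with a content form.
interface AccessEvent {
  formId: string; // the content form's unique id
  userId: string;
  startedAt: Date;
  durationSec: number;
  score?: number; // present only for scored forms, e.g. online quizzes
}

// Total time a user spent on a given content form.
function timeOnForm(events: AccessEvent[], userId: string, formId: string): number {
  return events
    .filter((e) => e.userId === userId && e.formId === formId)
    .reduce((sum, e) => sum + e.durationSec, 0);
}

// Access frequency per content form, across all users.
function accessCounts(events: AccessEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    counts.set(e.formId, (counts.get(e.formId) ?? 0) + 1);
  }
  return counts;
}
```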
It was also easy to build contextual searches anchored on the exact content form, in addition to searching the content itself. And from the services/tools perspective, we had a series of tools, like Ask an Expert, that could pick up specific context because we always knew where the student was. Interestingly, all this user activity was also made available to our CRM, which could run targeted queries to determine levels of activity across individuals or groups, thus providing actionable strategies for intervention. Our Oracle-based data warehouse contained all tracking and performance information and had multiple data cubes for analysis by CRM, teachers and SMEs.
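The CRM-style targeted queries were along these lines; again a sketch under the same assumed event record, with a hypothetical activity threshold standing in for whatever cut-off the CRM team would choose:

```typescript
// One interaction record, as in the sketch above.
interface AccessEvent {
  userId: string;
  formId: string;
  startedAt: Date;
  durationSec: number;
}

// Total activity per user within a reporting window – the kind of
// rollup the CRM could run across individuals or groups.
function activityByUser(events: AccessEvent[], from: Date, to: Date): Map<string, number> {
  const minutes = new Map<string, number>();
  for (const e of events) {
    if (e.startedAt >= from && e.startedAt <= to) {
      minutes.set(e.userId, (minutes.get(e.userId) ?? 0) + e.durationSec / 60);
    }
  }
  return minutes;
}

// Students below a (hypothetical) activity threshold become
// candidates for targeted intervention.
function interventionList(activity: Map<string, number>, thresholdMin: number): string[] {
  return [...activity.entries()]
    .filter(([, mins]) => mins < thresholdMin)
    .map(([userId]) => userId);
}
```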
Of course, all these tools helped us provide an awesome dashboard for our students. They knew exactly how they were using the medium for learning. This was in addition to performance reporting mechanisms for the tests they took. By the time NIIT bought us out in 2002, we had already started taking it to the next level in all directions. Had we continued working on the solutions we built, I think we would have benefited the most from the Web 2.0 / SoMe explosion.