Traditional BI has permeated the education function, at least in terms of the available platforms. Nearly every LMS provides some kind of reporting, but some systems are far more advanced. For example, SABA's BI (I think they use Business Objects) and SABA Social, the Mzinga Omnisocial Platform analytics, Valdis Krebs' work at orgnet, or the work being done at Radian6 don't leave much to the imagination.
I have also come across Knowledge Advisors and their Metrics That Matter line of products for talent & performance and learning. They offer an on-demand SaaS Human Capital Analytics System with rich dashboards and an impressive client list. They have integrations with LMSs, conference attendance records (phone/web), surveys, and interfaces for capturing data from offline events. They build the big-picture ROI and help organizations make predictions and data-driven decisions about learning and performance.
The discussion on the LAK11 Moodle forum and presentations by Ryan Baker and Kim Arnold on their work show a differentiated experience in the academic sector. The tools do exist, but implementation is far from uniform.
These are fairly large datasets, at least for a medium to large organization or university. I disagree with using scores, tool-access patterns, or attendance to make predictions. Scores are not representative of competence unless tests are genuinely reliable and have predictive power (as in high-stakes testing).
Likewise, building judgments on comparative analytics of how high and low performers use (behave on) a learning tool or system is like reverse-engineering the design of a learning program – looking at it upside down.
Attendance or time spent is not a standalone indicator of learning performance, if it is an indicator at all. It is difficult to capture quantitatively in online interactions where, for example, a single tweet could reflect a microsecond or weeks of learning effort.
On a related note, we are not only looking at snapshot data, but also temporally changing data. And in looking at this temporal change, one forgets that the sources of this data have changed – students have moved on, infrastructure has changed, teachers have been reallocated and so on. When the base has shifted so much between two points in time, how can the corresponding statistical results be compared to one another?
Similarly, Freakonomics-inspired data scientists would have a field day generating correlations among seemingly unconnected data, basing corrective actions on those predictions, and then believing that their actions yielded results. In the process they would, by definition, have ignored several other pieces of seemingly unconnected data, undermining the very starting point of the analytics itself.
Where the Semantic Web has made its start at analytics is by evolving SPARQL – a way to query graph/linked data. SPARQL in its current version does not include inserts or updates, but it allows many of the typical query-language affordances, such as joins. The interesting thing is that, just as relational databases return (single or multiple) data views, a query operation on a graph can return a graph itself. This means that, theoretically at least, if the entire web were modeled and linked, we could start from any state and keep exploring indefinitely. Why? Because everything would be connected to everything else (and maybe in less than 6 degrees).
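A minimal sketch of such a graph-returning query might look like the following. The prefix and vocabulary here (ex:, ex:Learner, ex:completed, ex:sharedCourseWith) are hypothetical names invented purely for illustration, not terms from any published ontology; the point is that a CONSTRUCT query joins triples and emits new triples, i.e. a graph rather than a flat result table.

```
# Hypothetical vocabulary for illustration only.
PREFIX ex: <http://example.org/learning#>

CONSTRUCT {
  ?a ex:sharedCourseWith ?b .       # the output: newly derived triples, a graph
}
WHERE {
  ?a a ex:Learner .
  ?b a ex:Learner .
  ?a ex:completed ?course .
  ?b ex:completed ?course .         # effectively a join on the shared course
  FILTER (?a != ?b)
}
```

Because the output is itself RDF, it can be loaded back into a triple store and queried again, which is what makes the "start from any state and keep exploring" idea plausible in principle.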
In this context, I found Dragan Gašević's presentation interesting from the object-relational semantic web perspective, specifically around LOCO (Learning Object Context Ontology). To build context, Dragan identifies a mix of ontologies needed to describe it – domain and user ontologies being the most obvious. Using those ontologies, he presents a way to combine the social web with the semantic web. He runs into the same old challenges, though, in trawling the web for unstructured data, and ends with the familiar buzzwords – personalized, interactive, social, collaborative, and ubiquitous.