Posts Tagged ‘simulations’

This introductory presentation for Amrita University’s T4E Conference this month offers a few thoughts on differentiating between Gamification, Serious Games and Simulations.

Read Full Post »

A little delayed, but here are the two videos from the 2012 conference in Singapore. The first one is a panel discussion on how to monetize serious games where I flipped the discussion to “why monetize – what is the value that we are bringing to customers” instead.

The next one is my pitch for standards in Serious Games and Simulations. The key argument is that standards are necessary for the industry as a whole and will bring efficiencies as well as increased customer satisfaction.

Presentation here:

Read Full Post »

It gives me great pleasure to formally announce the formation of the National Association for Simulations and Serious Games (India).

NASSG is an attempt to bring together stakeholders and entrepreneurs in the SG&S industry in India. The objectives are:

  1. To create awareness about the potential of serious games and simulations to help solve large scale learning and training challenges
  2. To create and facilitate a community of stakeholders actively engaged in raising awareness and extending the state of the art
  3. To promote programs that build talent in this space
  4. To provide a mechanism to formally interact, both within the community and across communities, nationally and internationally

Our charter members include pioneering Indian companies (Atelier Learning, Indusgeeks, Sparsha-Learning, Vitabeans and KnolSkape) that create simulations and serious games, as well as develop supporting tools and technology. MindTickle has joined in as well.
 
Membership in the NASSG is open to organisations, developers, artists, programmers, publishers, faculty, middleware and tool companies, service providers, researchers, analysts, marketing and advertising professionals, consultants and students connected with Simulations and Serious Games. Do sign up if you are interested!

We have also established a LinkedIn group, our Twitter presence is at @nassgindia, and there is a Facebook group as well.

Delegates from NASSG are also presenting at the Serious Gaming and Social Connect Conference in Singapore from Oct 4-6, 2012.

Read Full Post »

Much has been developed, researched and written about the power of high-fidelity simulations (especially in defense and healthcare) and their ability to deliver far more effective training outcomes and better measurability of performance. I will include Serious Games in the same context. I think it is perhaps a good time to raise a few questions.

Firstly, are simulations (and/or serious games) more suited for assessing performance than traditional paper/pencil or online tests? If yes, does this apply uniformly across all subjects/domains or are these particularly suited for certain types of assessments?

Secondly, what are the essential attributes of such simulation or game based assessments? What are the criteria upon which a simulation or game may be said to reliably, accurately and efficiently assess a student’s performance?

Thirdly, what are the more accessible ways in which such simulations or games can be developed? Learning implements like Multiple Choice or Drag and Drop questions are fairly quick and easy to design and develop, and there are a host of tools around that make the development process fairly rapid. But assessments based on simulations and games may take far longer, and the effort to develop them rises exponentially as the number of variables increases.

Fourthly, what kinds of features does a simulation need in order to be more effective than traditional alternatives? Are there techniques, like perhaps adaptive testing, that can be applied to simulation-based assessments?
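To make the adaptive-testing idea concrete, here is a minimal sketch of how a simulation-based assessment might pick its next task from a running ability estimate. Everything here – the SimTask and LearnerState shapes, the selection heuristic and the update rule – is an illustrative assumption, not a reference implementation.

```typescript
// Minimal sketch (illustrative only): adapting task difficulty inside a
// simulation-based assessment, in the spirit of adaptive testing.
// All names and the update rule are assumptions for this example.

interface SimTask {
  id: string;
  difficulty: number; // 0 (easy) .. 1 (hard)
}

interface LearnerState {
  ability: number;    // running estimate, 0 .. 1
}

// Pick the unattempted task whose difficulty is closest to the current
// ability estimate -- the core idea behind adaptive item selection.
function pickNextTask(
  tasks: SimTask[],
  state: LearnerState,
  done: Set<string>
): SimTask | undefined {
  const remaining = tasks.filter(t => !done.has(t.id));
  return remaining.sort(
    (a, b) =>
      Math.abs(a.difficulty - state.ability) -
      Math.abs(b.difficulty - state.ability)
  )[0];
}

// Nudge the ability estimate up or down after each simulated task.
function updateAbility(
  state: LearnerState,
  task: SimTask,
  succeeded: boolean,
  rate = 0.1
): LearnerState {
  const delta = succeeded ? 1 - state.ability : -state.ability;
  const next = state.ability + rate * delta * (0.5 + task.difficulty);
  return { ability: Math.min(1, Math.max(0, next)) };
}
```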

Fifthly, what evidence can be gathered to show that simulations can indeed assess performance reliably?

I may be missing other questions, but the intent is to try to understand how simulation-based assessments can be brought into mainstream education, if indeed they can be proven a reliable and accessible alternative to traditional techniques. It would bring the fun back into taking exams for millions of schoolchildren. That itself should be motivation enough for us to research the space!

 

Read Full Post »

I have been researching the management of simulations and other complex, entity-based learning implements such as serious games. Several challenges stand out: the traditional SCORM/AICC paradigm allows only limited reporting; state needs to be stored for later resumption (bookmarks); simulation parameters need to be set; the experience has to be “pooled” for multi-user synchronous or asynchronous simulation/game play; and simulation runs need to be captured or recorded for later analysis, grading and feedback.
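To make the reporting and bookmarking constraints concrete, here is a minimal sketch of how a simulation might persist its state through the SCORM 2004 run-time today, assuming the LMS exposes the standard API_1484_11 object; the SimState shape is an illustrative assumption.

```typescript
// Minimal sketch (assumptions noted): persisting simulation state via the
// SCORM 2004 runtime. The data model only offers generic slots such as
// cmi.suspend_data, so rich simulation telemetry has to be flattened into a
// string -- which is exactly the limitation discussed above.

// Shape of our hypothetical simulation state (not part of any standard).
interface SimState {
  scenario: string;
  turn: number;
  variables: Record<string, number>;
}

// The SCORM 2004 runtime API as exposed by a conformant LMS.
declare const API_1484_11: {
  Initialize(param: ""): string;
  GetValue(element: string): string;
  SetValue(element: string, value: string): string;
  Commit(param: ""): string;
  Terminate(param: ""): string;
};

function saveBookmark(state: SimState): void {
  // suspend_data is a single, size-limited string: serialize everything.
  API_1484_11.SetValue("cmi.suspend_data", JSON.stringify(state));
  API_1484_11.SetValue("cmi.exit", "suspend");
  API_1484_11.Commit("");
}

function resumeBookmark(): SimState | null {
  const raw = API_1484_11.GetValue("cmi.suspend_data");
  return raw ? (JSON.parse(raw) as SimState) : null;
}
```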

Clearly, this is an important area of focus. ADL, the keepers of SCORM, have been researching how SCORM-based content can interoperate with the High Level Architecture (HLA), the IEEE standard for distributed simulation. Their research:

focuses on developing instructional paradigms, training-specific data structures and communication methods between a simulation, Shareable Content Object Reference Model (SCORM)-based instructional content, and a Learning Management System (LMS), to facilitate using simulation as an environment where an individual or a team can practice a skill (instruction) or demonstrate their level of performing the skill (performance assessment).

Of particular interest is the intersection between S1000D and SCORM. S1000D provides a mechanism to define complex systems made up of multiple inter-related components, and to define the various allied information items for them (whether the system is a plane, a ship or even a bicycle).

Equally interesting is the advancement of simulations for learning through research on adaptive simulations, social simulations (utilizing the power of the network – maybe to run alternate reality games) and other ways to raise the bar on what simulations can actually achieve. Let me know your thoughts!

Read Full Post »

SCORM works on two main principles – as a way to package and sequence learning material, and as a run-time interface through which learning management systems track learning activity. It is based on traditional teaching-learning processes and adds the promise of interoperability and reuse by standardizing the way courses are organized and presented to the learner.

It has evolved slowly to include new features and rule sets, like sequencing, navigation and QTI (Question and Test Interoperability). In fact, the SCORM 2004 4th Edition book defines a content organization as:

A content organization can be seen as a structured map of learning resources, or a structured activity map to guide the learner through a hierarchy of learning activities that use the learning resources. One content developer may choose to structure the content organization as a table of contents for the learning resources, while another content developer may choose to structure the content organization as an adaptive guided path through a learning experience, invoking learning resources only if and when they are needed. A third content developer may create a content organization where some discovery activities include a free form use of some of the learning resources, while other activities are more formally managed.

The intent is to provide a way to organize content flexibly as more than one set (multiple organizations) of tightly or loosely coupled learning activities, rather than just a hierarchical or linear progression. Combined with sequencing and navigation information and rules, the LMS can interpret these organizations to provide some adaptive intelligence in the learning process.
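As a rough illustration of that idea, here is a toy model of a content organization with a skip rule – deliberately simplified, and not the actual SCORM CAM or IMS Simple Sequencing schema.

```typescript
// Toy model (not the actual SCORM CAM/Simple Sequencing schema) of a content
// organization: a tree of activities plus a precondition that lets the LMS
// skip activities the learner has already mastered.

interface Activity {
  id: string;
  title: string;
  resourceHref?: string;                     // leaf activities point at a SCO or asset
  children?: Activity[];
  skipIf?: (progress: Progress) => boolean;  // simplified precondition rule
}

type Progress = Record<string, { satisfied: boolean; measure: number }>;

// Depth-first walk that yields the next deliverable activity, honouring the
// skip rule -- the kind of adaptive path the spec text above describes.
function nextActivity(root: Activity, progress: Progress): Activity | null {
  if (root.skipIf?.(progress)) return null;
  if (root.resourceHref && !progress[root.id]?.satisfied) return root;
  for (const child of root.children ?? []) {
    const found = nextActivity(child, progress);
    if (found) return found;
  }
  return null;
}
```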

While these are evolutionary improvements in the standard, there are at least four other dimensions or major impacts that both the Content Aggregation Model (with Sequencing and Navigation) and the Runtime component have not yet addressed.

  1. The scope for a Services extension to SCORM – In the current context, content or activities embedded in the learning workflow have to integrate with resources outside the resource list and metadata identified by the CAM. With AJAX, it is no longer necessary to navigate away from a web page to access a new piece of functionality. But such integrations violate the fundamental principle of the self-contained object, which is why they have not been considered very deeply; accommodating them would be a formidable change. A related impact falls on the service under consideration: a Services extension to SCORM would most likely also mandate that the service provide a SCORM-compliant interface. This is critical. Imagine a WordPress installation that reports back to the LMS how the learner reflected and interacted with a community (a sketch of what that reporting might look like follows this list).
  2. The scope for Complex Data Interchange in SCORM – Games and simulations, as well as other activities, need complex data to seed a learning context, and they generate complex data both during the activity and for business intelligence afterwards. Efforts have already been made with HLA (see in particular the discussion of the three prototype classes) and with S1000D integrations with SCORM. Some of these efforts also tackle an even more complicated scenario: multi-player SCORM-based learning activities with shared state and communication via the LMS.
  3. The scope for Social Learning Networks in SCORM – The informality of the social learning network also has a deep impact on SCORM. While the ingredients for metadata or SCO context may exist in the SCORM specification, social influence is not accounted for, despite the new understanding forged by the theory of Connectivism, the adoption of informal learning by LMS vendors, and the fast-paced technological developments we are witnessing. Essentially, this means modelling two major things: the student and the network, or the learner and the community. Many will see the PLE as standing in stark contradistinction – I think PLEs will arrive at the same conclusion from a different direction soon enough.
  4. The scope for a Mobility extension to SCORM – Content and interactions that the mobile platform makes possible now and in the foreseeable future (not just the presentation aspect), using services such as location awareness and Semantic Web applications, are integral to the learning experience and cannot be ignored. This obviously goes beyond packaging or presenting for a smaller screen and limited processing power – the focus is on what mobility enables.

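As promised above, here is a purely hypothetical sketch of what such a services extension might look like: a community service (say, a WordPress blog) pushing reflection activity back to the LMS. Neither the ReflectionEvent shape nor the reportReflection() helper exists in SCORM today; cmi.interactions is simply the closest slot the current data model offers for this kind of evidence.

```typescript
// Hypothetical sketch only: a blog/community service reporting reflection
// activity to an LMS through a SCORM-style runtime. The event shape and the
// helper below are assumptions; cmi.interactions is merely re-purposed here.

declare const API_1484_11: {
  GetValue(element: string): string;
  SetValue(element: string, value: string): string;
  Commit(param: ""): string;
};

interface ReflectionEvent {
  postId: string;    // e.g. the blog post the learner wrote
  comments: number;  // community interaction around it
  timestamp: string; // ISO 8601
}

function reportReflection(ev: ReflectionEvent): void {
  // Re-purpose a generic interaction record for the social evidence.
  const i = Number(API_1484_11.GetValue("cmi.interactions._count")) || 0;
  API_1484_11.SetValue(`cmi.interactions.${i}.id`, `reflection-${ev.postId}`);
  API_1484_11.SetValue(`cmi.interactions.${i}.type`, "other");
  API_1484_11.SetValue(`cmi.interactions.${i}.timestamp`, ev.timestamp);
  API_1484_11.SetValue(`cmi.interactions.${i}.learner_response`, String(ev.comments));
  API_1484_11.Commit("");
}
```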
Without an adequate assessment and incorporation of these dimensions into SCORM, the standard is incomplete and anachronistic. There are pressing reasons why these should be incorporated for the Standard to become current and relevant – and soon.

Read Full Post »

Data visualization in 2D is what we have done most of our lives. Till recently, I viewed 3D as a medium for understanding and manipulating complex structures (say molecules, genes, architectural maps etc.) for both academic and commercial use. With mashups came the concept that you could intermix n-dimensional data (like OLAP) in a Web 2.0 environment. IBM’s ManyEyes uses this data to create 2D and 3D chart visualizations.

However, a common, everyday application for 3D visualization still eludes me. I remember, as far back as 10-12 years ago, someone talking to me about “walking” into an Oracle database as an administrator and using your hands to reorganize tablespaces, compress them and perform many other administrative actions. Years later, I watched Michael Douglas in Disclosure moving around a 3D file system, and Tom Cruise and team waving live video and geo-spatial data across screens that looked like sheets of glass (I think that was Minority Report).

Now Green Phosphor has come out with 3D technology based on the Content Injection and Control Protocol (CICP), sort of an “HTTP for virtual worlds”, that merges Excel data or database query outputs with 3D representations in a virtual world.

But I still struggle with possible applications. My friend Sid, at Indusgeeks, and his wonderful team are looking at immersive, interactive 3D spaces for learning and collaboration.

What makes sense for me is not “representational” 3D (i.e. 3D visualization that depicts n-dimensional data visually), but “meaningful, context driven” 3D. For example, real time data about movement of whales in the oceans could be merged with a virtual ocean world where students could come and explore, replete with ocean and whale sounds, measurement techniques and tools etc. Or for that matter, a data center created on the fly for practice on measurements of power and cooling, from data that represents servers, power units, HVACs etc. These systems mirror real-life in ways that can go beyond the real life experience (e.g. walk inside a server or inside an artery). But they are also limited by the amount of kinesthetic immersion they can supply.
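As a toy illustration of the data-driven side of this, here is a minimal sketch (every name, constant and layout choice is an assumption made up for the example) that turns flat records – say servers in a data center – into simple 3D object descriptors that a virtual-world engine could then place and render.

```typescript
// Toy illustration only: mapping flat records (e.g. servers in a data center)
// onto simple 3D object descriptors for a virtual-world engine to render.
// The record shape, scaling constants and grid layout are all assumptions.

interface ServerRecord {
  name: string;
  rackRow: number;      // which row of racks it sits in
  rackSlot: number;     // position within the row
  powerDrawWatts: number;
  tempCelsius: number;
}

interface SceneObject {
  label: string;
  position: { x: number; y: number; z: number };
  height: number;                   // encode power draw as height
  color: "green" | "amber" | "red"; // encode temperature as colour
}

function toSceneObjects(servers: ServerRecord[]): SceneObject[] {
  return servers.map(s => ({
    label: s.name,
    position: { x: s.rackSlot * 2, y: 0, z: s.rackRow * 3 },
    height: s.powerDrawWatts / 100,
    color: s.tempCelsius > 35 ? "red" : s.tempCelsius > 27 ? "amber" : "green",
  }));
}
```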

What is also interesting now is the availability of mobile phones with graphics accelerators built in. Imagine having a virtual world experience on your mobile phone. Look at Imageon. In fact, technologies are emerging today that allow you to use your mobile phone to click an image and have a backend system process indexed image (and possibly video) databases to return information related to that image. Imagine never having to be lost again, or clicking a product picture in an electronics store and getting all the information and reviews related to that item.

In 3D terms, imagine being in a virtual world that reconstructs the actual background environment each participant is coming from – one driving his car while in the conference, another in her office, a third out in the field at the scene of action – each able to access and share information.

Read Full Post »

In one of my conversations with a reputed customer, we had an interesting discussion around a theme for our project. The project was initially conceived as a game, and both teams got together to thrash out the underlying model – the processes, variables and algorithms that would constitute it. As the teams gained a better understanding of the project (and separated learning objectives from what they call gaming or simulation objectives), they began to present their case for why it should be a simulation augmented by a fantasy experience rather than the other way around.

Their basic arguments were around the following factors:

  1. Level of real-life immersion required – the team felt that real-life decision making in this case was a little too complex to be rendered as an abstract fantasy
  2. Complexity of algorithms and interactions between system variables – many different indicators exist for each state of the game, and these are not only inter-related but also derived from the sets of decisions the learner has to make
  3. Learner motivation triggers – the team felt that fantasy could be used to augment learner motivation (!) rather than serve as the basis for the central theme
  4. Extent of lateral transfer between a fantasy situation and real-life skills – the team believed that lateral transfer of skills between a fantasy situation (such as fighting aliens) and the actual business skills to be practised and concepts to be reinforced was a bit of a stretch. In fact, the fantasy could well serve to distract the learner from achieving the learning objectives.

It was an interesting debate and the team finally decided on a unifying theme with a blend of fantasy-game and real-life-simulation which seems to be ideal (at least at this point).

According to Dumlekar (2004) in the context of “Management simulations”: “ A simulation is a replica of reality. As a training program, it enables adult participants to learn through interactive experiences. Simulations contain elements of experiential learning and adult learning […] Simulations would therefore be useful to learn about complex situations (where data is incomplete, unreliable or unavailable), where the problems are unfamiliar, and where the cost of errors in making decisions is likely to be high. Therefore, simulations offer many benefits. They accelerate and compress time to offer a foresight of a hazy future. They are experimental, experiential, and rigorous. They promote creativity amongst the participants, who develop a shared view of their learning and behaviors. Above all, making decisions have no real-life cost implications.”

Simulation and gaming – EduTech Wiki (emphasis added)

Marc Prensky, in Digital Game-Based Learning (McGraw-Hill, 2001), attempts to map games and simulations to various learning types. This is an interesting classification and needs some serious thought. For example, he suggests that learning about theories and systems is better handled through simulation-based environments, while a host of other learning types, such as skills, procedures and communication, can be handled well by game-based learning. His chart is reproduced below.

As I write this, I am beset by another rambling thought. How do games and simulations, as we traditionally think of them, change, or how are they impacted, by the new 2.0/3.0/4.0 paradigms? For example, can we orchestrate role-playing for learning within a social community in an effective, collaborative manner (and what would be required to do that)? Or can we harness the power of each community member’s PC to run a complex market simulation or a collaborative team simulation? I think this merits some serious thought as well.

Read Full Post »
