In a discussion yesterday, we explored the limitations of current technology when it comes to kinesthetic immersion in virtual worlds. While virtual reality offers partial answers, such as mirroring and rendering 3D human motion, these systems are hard to implement in practice.
For example, a user exploring a virtual world may need to pick up an object and perform certain actions on it. While this is intuitive in a physical setting, immersion in SecondLife-like platforms has to substitute point-and-click or keyboard input to drive the visualization, for example using the keyboard to move the avatar's hands in a particular direction.
While this may be interesting to program in a virtual world, translating a human action such as turning a screw into a sequence of keystrokes is arduous and counter-intuitive for the user.
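To make that concrete, here is a minimal, purely illustrative sketch. The key bindings and sensitivity are hypothetical, not any platform's actual controls; the point is only how a direct key-to-motion mapping scales badly once the task is something like turning a screw.

```python
# Hypothetical key-to-motion mapping; not a real SecondLife or VR client API.
DEGREES_PER_KEYPRESS = 5  # assumed sensitivity: each press rotates the wrist 5 degrees

KEY_TO_ACTION = {
    "w": "move hand forward 1 cm",
    "s": "move hand back 1 cm",
    "a": "move hand left 1 cm",
    "d": "move hand right 1 cm",
    "q": "rotate wrist counter-clockwise",
    "e": "rotate wrist clockwise",
}

def keystrokes_to_turn_screw(full_turns: int) -> list[str]:
    """Return the keystroke sequence needed to turn a screw `full_turns` times."""
    presses = (360 // DEGREES_PER_KEYPRESS) * full_turns
    return ["e"] * presses

if __name__ == "__main__":
    seq = keystrokes_to_turn_screw(3)
    # Three full turns already require 216 keypresses at 5 degrees per press.
    print(f"Three full turns of a screw = {len(seq)} keypresses")
```

Even with a generous 5 degrees per press, a routine manual task turns into hundreds of keystrokes, which is exactly the mismatch between physical intuition and keyboard control described above.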
A better approach could be to let the learner choose from a set of predefined actions in an intelligent manner. For this, the objects and the ecology containing them would need to be intelligent enough to suggest possible "ideal action paths" (for example, you can't use a hammer to unscrew an assembly) while still allowing learners to make mistakes safely.
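Here is a rough sketch of what such "intelligent objects" might look like, assuming a simple affordance model (all class and method names are hypothetical): each object advertises which tools suit which actions, and offers a hint rather than a hard block when the learner picks the wrong tool.

```python
from dataclasses import dataclass, field

@dataclass
class SmartObject:
    """An object that knows which tools can sensibly perform which actions on it."""
    name: str
    affordances: dict[str, set[str]] = field(default_factory=dict)

    def suggest(self, action: str, tool: str) -> str:
        """Advise on a (tool, action) pair without forbidding the attempt."""
        valid_tools = self.affordances.get(action, set())
        if tool in valid_tools:
            return f"OK: use the {tool} to {action} the {self.name}."
        hint = ", ".join(sorted(valid_tools)) or "no known tool"
        return (f"Hint: a {tool} cannot {action} the {self.name}; "
                f"try: {hint}. You may still attempt it and see what happens.")

# Usage: the learner tries to unscrew an assembly with a hammer and gets a hint,
# but the mistake itself is not prevented, so it stays a safe learning moment.
assembly = SmartObject("assembly", {"unscrew": {"screwdriver"}, "strike": {"hammer"}})
print(assembly.suggest("unscrew", "hammer"))
print(assembly.suggest("unscrew", "screwdriver"))
```

The design choice here is that the object suggests rather than enforces: the "ideal action path" is surfaced to the learner, but wrong choices remain possible so that mistakes can still carry a lesson.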
Virtual worlds such as SecondLife, or those built on a host of other platforms, may evolve toward greater kinesthetic immersion as motion-sensing input devices become pervasive. I hear that the Nintendo Wii provides such an experience (the boxing game) and am hoping to try it out soon, and that Logitech is coming out with a device that can capture motion information.
With ubiquity becoming a key focus in learning, devices will need to evolve to provide kinesthetic immersion. That will (virtually) revolutionize learning experiences that depend on kinesthetics for much of their impact.