Since I co-led the presentation this week, the learning diary will be a bit more expansive than usual. Pete and I presented multimodal technologies, which allow users to interact with technology through modalities other than the traditional mouse, keyboard and screen. Multimodality can improve usability because some modalities suit particular tasks better than others; for example, listening to spoken instructions may take less effort than reading text on a screen. It can also improve the user's overall experience by providing novel and interesting modes of interaction that are entertaining or effortless. Most importantly for this course, it can improve the accessibility of technology, particularly for disabled users such as blind and paralysed people.
We went through all the sensory modalities, looking at examples of multimodal interaction. The three modalities of greatest interest are vision, audition and haptics (touch, proprioception and kinaesthesia). Other modalities include equilibrioception (balance), thermoception (temperature), gustation and olfaction (taste and smell) and nociception (pain), though these are less relevant to current research on and applications of multimodal technology. Vickers and Alty (unknown) present a study that used the CAITLIN auralisation system to test the effects of musical program auralisation on the debugging tasks of novice Pascal programmers. It showed that the musical renderings could be understood and used to help locate certain kinds of bug. Chouvardas et al. (2008) present a very interesting paper in which they review the science of touch and how HCI has used that knowledge to develop multimodal haptic technology.
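To make the idea of program auralisation a bit more concrete, here is a toy Python sketch of the underlying principle rather than of the CAITLIN system itself: runtime events are mapped to pitches so that a listener can "hear" the shape of a program's control flow. The event names and pitch values are made up purely for illustration.

    # Toy illustration of program auralisation: map runtime events to notes so a
    # listener can "hear" a program's control flow. This is NOT the CAITLIN
    # system studied by Vickers and Alty, just the underlying idea.

    EVENT_PITCHES = {           # invented event-to-MIDI-pitch mapping
        "loop_enter": 60,       # middle C when a loop starts
        "loop_exit": 64,        # E when it finishes normally
        "branch_true": 67,      # G when an if-condition succeeds
        "branch_false": 55,     # low G when it fails
        "exception": 48,        # low C signals something has gone wrong
    }

    def auralise(event_trace):
        """Turn a list of runtime events into a simple pitch sequence."""
        return [EVENT_PITCHES.get(event, 0) for event in event_trace]

    # A loop that dies with an exception sounds different from one that runs to
    # completion, which is roughly how a bug could be "heard".
    trace = ["loop_enter", "branch_true", "branch_false", "exception"]
    print(auralise(trace))  # -> [60, 67, 55, 48]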
I also found an interesting way of representing colour to blind people through touch: “Tactile Colour”.
“Tactile colour is an easy system of twelve standardized textures representing twelve colours. The system is both intuitive and logical. Each texture is distinctive and has a bold colour. The textures reflect the colour spectrum; the texture for orange has a surface feel between those of red and yellow. Green is a mixture of the feel of yellow and blue and so on. The contrasting colours like red and green and black and white are represented by contrasting textures.”
- http://www.tactile.org/TactileColourinformation.html
- http://www.tactile.org/colour_texture_description.html
So far the system has only been used for education, on cards and the like, and has not been adapted into a computer-based device, but such a device could clearly be engineered if a use were found. People in the class seemed to think it was a bit pointless. I totally disagree. As sighted humans we encode an enormous amount of information through colour: it allows us to make thousands of associations effortlessly and to perceive pictures in more detail. Blind people, in lacking the visual modality, lose not only spatial information but colour processing as well. If technology is to be universally usable, then colour needs to be taken into account. Combined with tools like haptic maps and the Enactive Torch (see Chrisley et al. 2008), haptics research could eventually let blind users access much of the functionality of sight through touch. An alternative to haptics for representing visual information is audio, but the spatial resolution of human hearing is much lower than that of vision or touch, which limits how much spatial detail it can convey. I think haptics is the key to representing visual information to the blind.
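To give a rough idea of how such a device might hook into a computer, here is a small Python sketch that snaps an on-screen RGB value to the nearest of twelve named colours, which a tactile display could then render as its corresponding texture. The palette and the nearest-colour matching are my own simplification, not the official Tactile Colour standard.

    # Rough sketch: map an on-screen RGB value to one of twelve named colours
    # that a tactile display could render as a distinct texture. The palette is
    # an illustrative simplification, not the official Tactile Colour standard.

    PALETTE = {
        "red":        (255, 0, 0),
        "orange":     (255, 165, 0),
        "yellow":     (255, 255, 0),
        "green":      (0, 128, 0),
        "blue":       (0, 0, 255),
        "purple":     (128, 0, 128),
        "brown":      (139, 69, 19),
        "pink":       (255, 192, 203),
        "grey":       (128, 128, 128),
        "black":      (0, 0, 0),
        "white":      (255, 255, 255),
        "light blue": (173, 216, 230),
    }

    def nearest_tactile_colour(rgb):
        """Return the palette colour closest to rgb (squared Euclidean distance)."""
        def dist(candidate):
            return sum((a - b) ** 2 for a, b in zip(rgb, candidate))
        return min(PALETTE, key=lambda name: dist(PALETTE[name]))

    print(nearest_tactile_colour((250, 120, 10)))  # -> "orange"

A screen-reading tool could run something like this over regions of an image and drive the matching texture on a refreshable tactile surface.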
In the second part of the presentation I first looked at Direct Brain Interfaces (DBIs). A DBI allows voluntary signals to be sent from the brain to a computer to assist with control and navigation, without the need to use other modalities. One clear use is to help paralysed people interact with computers, overcoming an otherwise insurmountable accessibility barrier. The BrainGate Neural Interface System is currently the subject of a pilot clinical trial (http://www.cyberkineticsinc.com/content/medicalproducts/braingate.jsp). It works by implanting a small electrode array into the motor cortex of the patient. Hochberg et al. (2006) demonstrated that a cursor on a screen could be moved using nothing but the power of the mind (after neural training), and Kim et al. (2007) carried out an extended study in which the system interpreted clicks in addition to cursor movements. The researchers hope to extend its use beyond cursor navigation to the control of objects in the environment such as a telephone, a television and lights.
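To give a flavour of what the decoding involves, here is a toy Python sketch of a generic linear decoder that learns a mapping from recorded firing rates to a 2-D cursor velocity. It is not the actual BrainGate algorithm and the data below is synthetic; it only illustrates the calibrate-then-decode idea.

    import numpy as np

    # Toy neural cursor decoding: fit a linear map from firing rates to 2-D
    # cursor velocity during a calibration phase, then use it at "run time".
    # Generic least squares, not the actual BrainGate algorithm; data is synthetic.

    rng = np.random.default_rng(0)
    n_samples, n_channels = 500, 96                 # e.g. a 96-electrode array

    true_tuning = rng.normal(size=(n_channels, 2))  # unknown neural "tuning"
    firing_rates = rng.poisson(5, size=(n_samples, n_channels)).astype(float)
    velocities = firing_rates @ true_tuning + rng.normal(scale=0.5, size=(n_samples, 2))

    # Calibration: solve for decoder weights by least squares.
    weights, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

    # Run time: new firing rates are translated into a cursor update.
    new_rates = rng.poisson(5, size=n_channels).astype(float)
    vx, vy = new_rates @ weights
    print(f"cursor velocity: ({vx:.2f}, {vy:.2f})")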

Next year Emotiv Systems plan to release a wearable DBI headset consisting of an array of scalp sensors that pick up the brain's electrical activity (EEG) and software that analyses the signals; no implant is involved. The system can analyse conscious thoughts to control movements in game environments, recognise a limited number of emotions, and detect several facial expressions. Sceptics out there should be warned: this is the start of a whole new paradigm of human-computer interaction. The Emotiv headset is already impressive and the DBI paradigm will only grow from here. The headset will bring DBI into the commercial games industry and, just as importantly, could bring DBI into the daily lives of the mainstream public. The company offers SDKs so that software developers can build applications for the headset. It will be interesting to see the response to its release in 2009.
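I have not used the Emotiv SDK, so the following Python sketch is purely hypothetical: none of the class or event names come from the real SDK. It only illustrates the kind of event loop a game might run to consume mental commands, emotions and facial expressions from a headset.

    # Purely hypothetical sketch of a game consuming events from a brain-reading
    # headset. None of these names come from the real Emotiv SDK; they only
    # illustrate the kind of event loop an application might run.

    from dataclasses import dataclass
    import random

    @dataclass
    class HeadsetEvent:
        kind: str       # "mental_command", "emotion" or "expression"
        label: str      # e.g. "push", "excited", "smile"
        strength: float

    def fake_headset_stream(n=5):
        """Stand-in for a real device driver: yields random events."""
        samples = [("mental_command", "push"), ("mental_command", "lift"),
                   ("emotion", "excited"), ("expression", "smile")]
        for _ in range(n):
            kind, label = random.choice(samples)
            yield HeadsetEvent(kind, label, round(random.random(), 2))

    for event in fake_headset_stream():
        if event.kind == "mental_command" and event.strength > 0.5:
            print(f"move game object: {event.label} ({event.strength})")
        else:
            print(f"log affective state: {event.label} ({event.strength})")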
I finished the presentation by looking at fully immersive virtual environments. The trick to making them fully immersive is to keep the interplay between vision, sound, movement and touch as coherent and consistent as possible: if a user in a VR environment turns their head, the projected image should change appropriately, as should the apparent direction of sound. This depends heavily on proprioception, kinaesthesia and the brain's ability to match incoming sensory patterns to the external environment. Some VR examples look more impressive than others. For instance, everyone in the class agreed that the CAVE (http://www.evl.uic.edu/pape/CAVE/oldCAVE/CAVE.html) looked unconvincing with its cube-shaped layout, whereas the Cybersphere looked far more impressive (see Fernandes et al. 2003). It will be interesting to see how far researchers get, but if DBI research and development continues at its current pace, we may see fully realistic computer-assisted dream experiences before physical VR technology gets there.
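As a small illustration of what that coherence means in practice, here is a Python sketch of the per-frame update an immersive system has to make: work out where a world-fixed sound source sits relative to the tracked head yaw and re-pan it accordingly, so the sound stays put in the world while the head moves. The crude stereo panning is illustrative only and not taken from the CAVE or the Cybersphere.

    import math

    # Keeping audio consistent with head movement: each frame, a world-fixed
    # sound source is re-panned relative to the tracked head yaw so it stays
    # anchored in the world rather than following the head. Panning is crude
    # equal-power stereo, for illustration only.

    def world_to_head(angle_world_deg, head_yaw_deg):
        """Angle of a world-fixed object relative to where the head is facing."""
        return (angle_world_deg - head_yaw_deg + 180) % 360 - 180

    def stereo_gains(relative_angle_deg):
        """Equal-power pan: a source to the right boosts the right channel."""
        pan = math.sin(math.radians(relative_angle_deg))   # -1 (left) .. 1 (right)
        left = math.cos((pan + 1) * math.pi / 4)
        right = math.sin((pan + 1) * math.pi / 4)
        return round(left, 2), round(right, 2)

    sound_at = 30   # source 30 degrees to the user's right, in world coordinates
    for head_yaw in (0, 30, 90):                           # the user turns their head
        rel = world_to_head(sound_at, head_yaw)
        print(head_yaw, rel, stereo_gains(rel))

    # As the head turns towards the source (yaw 30) the sound centres; as the
    # head turns past it (yaw 90) the source moves to the listener's left.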
That’s about it. I have to say I think multimodal technology is one of the most interesting areas of research in HCI and I may well come back to it later in my degree.
Chouvardas, V.G., Miliou, A.N. & Hatalis, M.K. (2008). Tactile displays: Overview and recent advances. Displays, 29. Available at ScienceDirect.com.
Chrisley, R., Froese, T. & Spiers, A. (2008) Engineering Conceptual Change: The Enactive Torch. To appear in the WPE 2008.
Fernandes, K.J., Raja, V. & Eyre, J. (2003). Cybersphere: the fully immersive spherical projection system. Communications of the ACM, 46(9), pp. 141-146.
Hochberg, L.R. et al. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442, pp. 164-171.
Kim, S., Simeral, J.D., Hochberg, L.R., Donoghue, J.P., Friehs, G.M. & Black, M.J. (2007). Multi-state decoding of point-and-click control signals from motor cortical activity in a human with tetraplegia. Proceedings of the 3rd International IEEE EMBS Conference on Neural Engineering, Kohala Coast, Hawaii, USA, May 2-5, 2007.
McGookin, D.K. & Brewster, S.A. (unknown). Graph Builder: Constructing Non-visual Visualizations. University of Glasgow.
Vickers, P. & Alty, J.L. (unknown). Musical Program Auralisation: Empirical Studies. Publishing details unknown.