Wednesday, 10 December 2008

Week 10: Course finish

This is the final week of the advanced HCCS course. On Friday we will finish by making some links between accessibility and usability, which should round things off well. The course has been a useful learning experience. My favourite area has definitely been multimodal technology and its applications. The web accessibility aspects have been essential for me, for use in industry after my degree. Having nearly completed the accessibility audit, I can see how it may be tricky to maintain a unique personal style when developing websites whilst keeping them fully accessible. Guidelines like WCAG 2.0 do restrict web developers to some extent, but they are important because disabled people should have the same access to content as everyone else. Moreover, as guidelines take on more weight in the legal sphere, developers will have no choice but to comply. With luck the upcoming WCAG 2.0 guidelines will be more successful than their predecessor.

Friday, 5 December 2008

Week 9: Adaptable and adaptive systems

This week Pete and Sophie led the talks on adaptable and adaptive systems. These systems have the capacity to change and learn from experience. Adaptable systems require input from the user to make adjustments; adaptive systems go a step further and adjust automatically by monitoring the user's behaviour. There are many examples of adaptive software systems, particularly on the web, e.g. Amazon's personalization and the old Pandora. The software is often driven by artificial neural network (ANN) learning algorithms (see Sejnowski and Rosenberg's 1986 NETtalk for an influential example). ANNs are different from traditional rule-based inference algorithms. They consist of simple nodes linked in parallel, passing weighted signals up through successive layers to produce an overall output. Following an analogy with the brain's own neural networks, the computational style of ANNs is highly efficient at pattern recognition. This makes them ideal candidates for adaptive systems, because particular patterns of behaviour are picked up on and the output is adjusted accordingly. However, with highly complex networks the output behaviour is harder to predict than with rule-based inference algorithms, so the output may turn out to be undesirable if the algorithms are poorly designed.
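
To make the weight-adjustment idea concrete for myself, here is a toy sketch I put together of a single artificial "neuron" learning a simple pattern by nudging its connection weights. It has nothing to do with NETtalk itself (which used a much larger multi-layer network), it is just the bare perceptron learning rule:

```python
# Toy illustration: one "neuron" learns a binary pattern by adjusting
# its connection weights whenever it gets the answer wrong.
import random

def train_perceptron(samples, epochs=50, learning_rate=0.1):
    """samples: list of (input_vector, target) pairs, target is 0 or 1."""
    n = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Weighted sum of the inputs, passed through a hard threshold
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Adjust each weight in proportion to its input and the error
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Learn the OR pattern: output 1 whenever either input is 1
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(data)
for inputs, target in data:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", output, "(expected", target, ")")
```

Even in this tiny example you can see why complex networks are hard to predict: the behaviour lives in the learned weights rather than in explicit rules you can read off.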

Technical issues aside, the big debate in today's class focused on whether adaptive systems cause people to lose control of their actions and threaten their personal identity, or whether they reduce cognitive load and free up time for creative activity. The thing is, these systems are everywhere already! They are used extensively in the economic and military worlds, making thousands of financial decisions a second and guiding weapons and military UAVs. They are also being used by millions of people a day over the internet in many thousands of software applications.

The question is, how much do we want computers and software to control and guide our behaviour? I personally can see huge benefit in the mass integration of adaptive systems into our everyday lives. I would love my computer to react and adjust based on my personal use of it. For example, it would be great if software prompted me appropriately whilst I worked; perhaps sending me relevant research papers when it saw I was searching for a topic. The possibilities are almost endless: ordering the shopping when the fridge runs low, suggesting music I might like, even perhaps reacting to my mood, as research in affective computing is demonstrating is possible. Of course adaptive software sometimes gets it wrong, but that's a limitation of the current technology, not of the principle itself. I think ultimately it's about finding the right balance between adaptability and adaptiveness, i.e. how much work the user does and how much the technology does. StumbleUpon, I think, does a good job, allowing you to input your interests and then adjusting content based on whether you rate sites. It thus strikes a balance between the user's control and its own intelligence (if intelligence is the right word to use).
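
Just to sketch how that adaptable/adaptive balance might work in principle, here is a little mock-up I wrote. It is purely my own guess at a mechanism, not how StumbleUpon actually works: the user declares interest weights up front (the adaptable part), and thumbs-up/down ratings then nudge those weights automatically (the adaptive part):

```python
# Hypothetical sketch: declared interests give starting weights
# (adaptable); ratings nudge the weights automatically (adaptive).
# Not based on any real recommender's code.

def recommend(sites, weights):
    """Score each site by summing the weights of the topics it covers."""
    def score(site):
        return sum(weights.get(topic, 0.0) for topic in site["topics"])
    return max(sites, key=score)

def rate(weights, site, liked, step=0.2):
    """Adjust topic weights up or down based on a thumbs-up/down rating."""
    for topic in site["topics"]:
        weights[topic] = weights.get(topic, 0.0) + (step if liked else -step)

# The user explicitly declares interests (the adaptable part)
weights = {"hci": 1.0, "music": 0.5, "poker": 0.8}

sites = [
    {"url": "example.org/haptics", "topics": ["hci", "haptics"]},
    {"url": "example.org/jazz", "topics": ["music"]},
    {"url": "example.org/odds", "topics": ["poker", "maths"]},
]

suggestion = recommend(sites, weights)
print("Suggested:", suggestion["url"])
rate(weights, suggestion, liked=False)   # thumbs down (the adaptive part)
print("Next suggestion:", recommend(sites, weights)["url"])
```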

In terms of such systems' benefit to accessibility for disabled people, they can have huge advantages. Every individual is different, and adaptive systems potentially have the ability to react to an individual's needs and behaviour by recognizing and acting on patterns. One example mentioned in class is 'Leitza', software that could help screen readers by searching ahead in the page for interesting and appropriate content. In another example, the algorithms used to convert the patterns of light entering a camera into a haptic representation for blind people are often built on artificial neural networks. If universal usability is to be realised, then I think adaptive systems are crucial.

Friday, 28 November 2008

Week 8: AX in 2D and 3D

This week Ed and Richard led the talk. They looked mainly into accessibility in computer games. Generally, attempts to make games universally accessible to everyone have proved unsuccessful. Some people in the class argued that complex games that are fast-paced and involve interaction in an immersive 3D environment (such as BioShock) would probably be ruined if they were toned down in certain ways for disabled or impaired users. I would argue that since blind people, for example, can navigate the real world using haptic map technology, the same could in principle be applied to game environments. Haptic maps work by translating a representation of the input space into a touchable map output. If both the real world and game environments can be translated into haptic representations, there may well be room for blind people using assistive technology to enjoy complex interactive games; a rough sketch of the idea follows below. We were also told that cognitively impaired users can use software to slow down the pace of games, another example of assistive technology in use. Solutions for other disabilities are open to HCI research.
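
To convince myself the haptic-map idea could transfer to games, here is a very rough, entirely hypothetical sketch (not based on any real product) of how the tiles around the player in a 2D game map might be turned into vibration intensities on a small haptic pad:

```python
# Hypothetical sketch: translate the tiles around the player in a 2D game
# map into vibration intensities for a 3x3 haptic pad, so a blind player
# could feel nearby walls and enemies. '#' = wall, 'E' = enemy, '.' = open.

INTENSITY = {"#": 0.5, "E": 1.0, ".": 0.0}

def haptic_view(game_map, px, py):
    """Return a 3x3 grid of vibration intensities centred on the player."""
    pad = []
    for dy in (-1, 0, 1):
        row = []
        for dx in (-1, 0, 1):
            x, y = px + dx, py + dy
            if 0 <= y < len(game_map) and 0 <= x < len(game_map[y]):
                row.append(INTENSITY.get(game_map[y][x], 0.0))
            else:
                row.append(0.5)  # treat the edge of the map like a wall
        pad.append(row)
    return pad

level = [
    "#####",
    "#..E#",
    "#.@.#",   # '@' marks the player at (2, 2)
    "#####",
]

for row in haptic_view(level, px=2, py=2):
    print(["%.1f" % v for v in row])
```

Obviously a real game would need far richer encoding (movement, depth, pacing), but the principle of re-representing the spatial input seems the same as for real-world haptic maps.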

The alternative to using assistive technology to play mainstream games is to design games from scratch with the disabled user in mind. They showed us a demo in which a participant from the class tried to play an audio-led game designed for blind users. Again I refer back to haptics. Using auditory information for spatial awareness is probably much less natural than haptics. After all, blind people usually navigate using a stick (haptics), a guide dog (haptics in the sense of following the pull) or haptic assistive technology. On the other hand, you could argue that the abstract medium of working out direction and space in the game is part of the challenge.

We also had a guest speaker this week who was an expert in accessibility. She gave us lots of helpful pointers for our accessibility projects. As well as looking into guidelines like WCAG, she pointed us to some useful AX tools that are free to use. She also emphasised that AX is not just about tedious guidelines, but about influencing huge numbers of people's lives. It's easy to forget this when your own use of the web isn't restricted by its design. I liked the quote she gave from Christian Heilmann at Yahoo: "Disability is nothing more than a hard core usability testing of your product." Ultimately in this course we are just taking usability to the extreme end.

Friday, 21 November 2008

Week 7: Multimodal Technologies


Since I co-led the presentation this week, the learning diary will be a bit more expansive than usual. Pete and I presented multimodal technologies, which allow users to interact with technology using modalities other than the traditional mouse, keyboard and screen. Multimodality can improve usability because some modalities have advantages over others for particular tasks. For example, listening to spoken instructions may take less effort than reading visual text instructions. It can also improve the user's overall experience by providing novel and interesting modes of interaction that are entertaining or effortless. Most importantly for this course, it can improve accessibility to technology, particularly for disabled users such as blind and paralysed people.

 

We went through all the sensory modalities, looking at examples of multimodal interaction. The three main modalities of interest are vision, audition and haptics (touch, proprioception and kinaesthesia). Other modalities include equilibrioception (balance), thermoception (temperature), gustation and olfaction (taste and smell) and nociception (pain), though these are less relevant to current research and applications of multimodal technology. Vickers and Alty (unknown) present a study that used the CAITLIN auralisation system to test the effects of musical program auralisation on the debugging tasks of novice Pascal programmers. It showed that musical interpretations could be understood and used to help locate certain bugs. Chouvardas et al. (2007) present a very interesting paper in which they look at the science of touch and how HCI has used that knowledge to develop multimodal haptic technology. I also found an interesting way of representing colour to blind people: "colour by touch".
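
To get a feel for what musical program auralisation actually involves, I tried a toy sketch of my own (this is not the CAITLIN system, just the general idea): events in a running program are mapped to note pitches, so that a loop which finds its target "sounds" different from one that runs off the end:

```python
# Toy sketch of program auralisation (not the real CAITLIN system):
# map events in a running program to MIDI-style note numbers, so a
# listener could hear the shape of the control flow while debugging.

EVENT_NOTES = {
    "loop_start": 60,       # middle C announces a loop beginning
    "iteration": 64,        # E for each pass through the loop body
    "condition_true": 67,   # G when the test succeeds
    "condition_false": 55,  # low G when it fails
    "loop_end": 48,         # low C closes the loop
}

def auralise(events):
    """Convert a trace of program events into a sequence of notes."""
    return [EVENT_NOTES[e] for e in events if e in EVENT_NOTES]

def traced_search(items, target):
    """A linear search that records its control flow as events."""
    trace = ["loop_start"]
    for item in items:
        trace.append("iteration")
        if item == target:
            trace.append("condition_true")
            break
        trace.append("condition_false")
    trace.append("loop_end")
    return trace

print(auralise(traced_search([3, 7, 9, 2], target=9)))
# A search that never finds its target ends on a run of 'false' notes
# instead of the 'true' note, which you could pick out by ear.
```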

“Tactile colour is an easy system of twelve standardized textures representing twelve colours. The system is both intuitive and logical. Each texture is distinctive and has a bold colour. The textures reflect the colour spectrum; the texture for orange has a surface feel between those of red and yellow. Green is a mixture of the feel of yellow and blue and so on. The contrasting colours like red and green and black and white are represented by contrasting textures.”

-          http://www.tactile.org/TactileColourinformation.html
-          http://www.tactile.org/colour_texture_description.html

As yet the system has only been used for education, on printed cards and the like, and has not been adapted into a computer-based device. But a device that worked with a computer could obviously be engineered if a use were found. People in the class seemed to think it was a bit pointless. I totally disagree. As sighted humans we encode loads of information through colour. Colour allows us to make thousands of associations effortlessly. It also allows us to perceive pictures in more detail. Blind people, lacking the visual modality, lose not only spatial information but colour information as well. If technology is to be universally usable, then colour needs to be taken into account. Combined with tools like haptic maps and the Enactive Torch (see Chrisley et al. 2008), the totality of haptics research could mean that blind users will soon be able to appreciate much of the functionality of sight using haptic techniques. An alternative to haptics for representing visual information is audio, but our spatial resolution of audio information is far lower, probably too low to be of much real use here. I think haptics is the key to representing visual information to the blind.
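
As a thought experiment on how the tactile-colour idea might be wired up to a computer (purely hypothetical; no such device exists as far as I know, and the texture names below are my own inventions rather than the real Tactile Colour textures), here is a sketch that snaps each pixel of a tiny image to the nearest of a small colour palette and outputs a texture code for each cell of an imagined refreshable tactile pad:

```python
# Hypothetical sketch: map pixel colours to texture codes for an imagined
# refreshable tactile display, in the spirit of the Tactile Colour system
# (which in reality exists only as printed textured cards).

# A reduced palette: colour name -> (RGB value, invented texture code)
PALETTE = {
    "red":    ((255, 0, 0),     "ridges"),
    "yellow": ((255, 255, 0),   "fine dots"),
    "green":  ((0, 128, 0),     "wavy lines"),
    "blue":   ((0, 0, 255),     "coarse dots"),
    "black":  ((0, 0, 0),       "smooth"),
    "white":  ((255, 255, 255), "dense grid"),
}

def nearest_texture(rgb):
    """Snap an RGB pixel to the texture of the closest palette colour."""
    def distance(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c))
    name = min(PALETTE, key=lambda n: distance(PALETTE[n][0]))
    return PALETTE[name][1]

# A tiny 2x3 "image" of RGB pixels
image = [
    [(250, 10, 10), (240, 230, 20), (10, 10, 200)],
    [(5, 5, 5),     (20, 120, 30),  (250, 250, 250)],
]

for row in image:
    print([nearest_texture(pixel) for pixel in row])
```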

In the second part of the presentation I first looked at Direct Brain Interfaces (DBIs). DBIs allow voluntary signals to be sent from the brain to a computer to assist with control and navigation, without the need to use other modalities. Clearly one use is to help paralysed people interact with computers and overcome an otherwise impossible accessibility barrier. The BrainGate Neural Interface System is currently the subject of a pilot clinical trial (http://www.cyberkineticsinc.com/content/medicalproducts/braingate.jsp). It works by implanting an electrode array in the motor cortex of the patient's brain. Hochberg et al. (2006) demonstrated that a cursor on a screen could be moved using nothing but the power of the mind (after neural training). Kim et al. (2007) did an extended study which allowed clicks, in addition to cursor movements, to be interpreted by the system. The researchers hope to extend its use beyond cursor navigation to the control of objects in the environment such as a telephone, a television and lights.
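
My (very rough) understanding of how a BrainGate-style decoder turns neural activity into cursor movement is that, after a calibration phase, the firing rates of the recorded neurons are combined into a velocity estimate. The sketch below is just that idea in toy form with made-up numbers; the real system's training and filtering are far more sophisticated:

```python
# Toy sketch of linear cursor decoding, loosely inspired by my reading of
# the BrainGate work; the weights and firing rates here are invented.

def decode_velocity(firing_rates, weights_x, weights_y):
    """Combine per-neuron firing rates (spikes/sec) into a 2D cursor velocity."""
    vx = sum(w * r for w, r in zip(weights_x, firing_rates))
    vy = sum(w * r for w, r in zip(weights_y, firing_rates))
    return vx, vy

# Hypothetical weights learned during a calibration ("neural training") phase
weights_x = [0.02, -0.01, 0.03, 0.00]
weights_y = [-0.01, 0.02, 0.00, 0.03]

cursor = [0.0, 0.0]
# A short stream of firing-rate samples from four recorded neurons
samples = [
    [40, 10, 55, 5],
    [42, 12, 50, 8],
    [10, 45, 5, 60],
]
for rates in samples:
    vx, vy = decode_velocity(rates, weights_x, weights_y)
    cursor[0] += vx
    cursor[1] += vy
    print("cursor position: (%.2f, %.2f)" % (cursor[0], cursor[1]))
```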

 

Next year Emotiv Systems plan to release a wearable DBI headset: an array of sensors that sit on the head and pick up the brain's electrical signals, plus software that analyzes them. The system can analyze conscious thoughts to control movements in game environments, recognise a limited number of emotions, and detect several facial expressions. Sceptics out there should be warned: this is the start of a whole new paradigm of human-computer interaction. The Emotiv headset is already impressive and the DBI paradigm will only grow from here. The headset will bring DBIs into the commercial games industry and, importantly, could bring DBIs into the daily lives of the mainstream public. The company offers SDKs for software developers to engineer applications for the headset. It will be interesting to see the response to its release in 2009.
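
I haven't seen the SDK itself, so the snippet below is a completely made-up sketch of how I imagine a game might consume headset events; none of the class or event names come from the real Emotiv SDK:

```python
# Entirely hypothetical sketch of a game loop consuming headset events;
# the types and names here are invented and are NOT the real Emotiv API.
from dataclasses import dataclass

@dataclass
class HeadsetEvent:
    kind: str        # "thought", "emotion" or "expression"
    label: str       # e.g. "push", "excitement", "smile"
    strength: float  # 0.0 to 1.0

def handle(event, player):
    """Map headset events onto in-game actions."""
    if event.kind == "thought" and event.label == "push":
        player["z"] += event.strength            # push an object away
    elif event.kind == "expression" and event.label == "smile":
        player["emote"] = "wave"                 # avatar waves back
    elif event.kind == "emotion" and event.label == "excitement":
        player["music_volume"] = min(1.0, 0.5 + event.strength / 2)

player = {"z": 0.0, "emote": None, "music_volume": 0.5}
for event in [HeadsetEvent("thought", "push", 0.8),
              HeadsetEvent("expression", "smile", 1.0),
              HeadsetEvent("emotion", "excitement", 0.6)]:
    handle(event, player)
print(player)
```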

I finished the presentation by looking at fully immersive virtual environments. The trick to making them fully immersive is to ensure that the interplay between vision, sound, movement and touch is as coherent and consistent as possible. If a user in a VR environment turns their head, the projected image should change appropriately, as should the direction of sound. This is closely related to proprioception, kinaesthesia and the brain's ability to match sensory patterns to the external environment. Some VR examples look more impressive than others. For example, everyone in the class agreed that CAVE (http://www.evl.uic.edu/pape/CAVE/oldCAVE/CAVE.html) looked unconvincing with its cube-shaped layout. Cybersphere, on the other hand, looked far more impressive (see Fernandes et al. 2003). It will be interesting to see how far researchers get, but if the pace of DBI research and development is as fast as it looks, we may see fully realistic computer-assisted dream experiences before physical VR technology gets there.
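
The coherence point is easiest to see with a concrete example: when the head yaws, both the rendered view direction and the stereo balance of a sound source should be derived from the same orientation value. Here is a minimal toy of that idea (my own sketch, not how CAVE or Cybersphere are actually implemented):

```python
# Toy sketch: derive both the view direction and the stereo panning of a
# sound source from a single head-yaw angle, so sight and sound stay
# consistent as the user turns. Not based on any real VR system's code.
import math

def view_direction(yaw_degrees):
    """Unit vector the user is looking along, for the renderer."""
    yaw = math.radians(yaw_degrees)
    return (math.sin(yaw), math.cos(yaw))

def stereo_pan(yaw_degrees, sound_bearing_degrees):
    """-1.0 = fully left, +1.0 = fully right, relative to the head."""
    relative = math.radians(sound_bearing_degrees - yaw_degrees)
    return math.sin(relative)

sound_bearing = 90.0   # a sound source due east of the user
for yaw in (0.0, 45.0, 90.0):
    dx, dz = view_direction(yaw)
    print("yaw %5.1f  view=(%.2f, %.2f)  pan=%+.2f"
          % (yaw, dx, dz, stereo_pan(yaw, sound_bearing)))
# As the head turns toward the sound, the pan moves from hard right to centre.
```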

That’s about it. I have to say I think multimodal technology is one of the most interesting areas of research in HCI and I may well come back to it later in my degree.

References:

Chouvardas, V.G., Miliou, A.N. & Hatalis, M.K. (2007). Tactile displays: Overview and recent advances. Displays, 29. Available at ScienceDirect.com.

Chrisley, R., Froese, T. & Spiers, A. (2008). Engineering Conceptual Change: The Enactive Torch. To appear in WPE 2008.

Fernandes, K.J., Raja, V. & Eyre, J. (2003). Cybersphere: the fully immersive spherical projection system. Communications of the ACM, 46(9), pp. 141-146.

Hochberg, L.R. et al. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442, pp. 164-171.

Kim, S., Simeral, J.D., Hochberg, L.R., Donoghue, J.P., Friehs, G.M. & Black, M.J. (2007). Multi-state decoding of point-and-click control signals from motor cortical activity in a human with tetraplegia. Proceedings of the 3rd International IEEE EMBS Conference on Neural Engineering, Kohala Coast, Hawaii, USA, May 2-5, 2007.

McGookin, D.K. & Brewster, S.A. (unknown). Graph Builder: Constructing Non-visual Visualizations. University of Glasgow.

Vickers, P. & Alty, J.L. (unknown). Musical Program Auralisation: Empirical Studies. Publishing details unknown.

Friday, 14 November 2008

Week 6: Accessibility audits

After a week's break, we continued the course by looking at accessibility audits. Carina and Mike gave a presentation in which they explained the key issues. They also showed us a real example from AMEX and the WCAG 1 guidelines. We recently found out that our first assessment would be to carry out an accessibility study of 5 websites. I would have liked to look at poker software, as it is one of my interests and some clients are clearly more accessible than others without even applying any AX tools. However, since they aren't strictly speaking websites (despite communicating through the web and having website-like features), I will have to find something else.

 

I may look into shopping websites, since I do a lot of shopping online and am sure that plenty of disabled people must do as well, given that urban shopping areas are often not designed with the disabled user in mind. It is important that these websites are made fully accessible for disabled users, as they are often housebound and need the service even more than other users. I will definitely use the WCAG 2.0 guidelines that come into effect in December, since I want to know the guidelines that will be used in industry by the time I complete my degree. WCAG 2.0 consists of four main principles, 'Perceivable', 'Operable', 'Understandable' and 'Robust', with 12 main guidelines (and sub-guidelines) under these principles. The study will require some assistive accessibility technology to test all the guidelines, and these tools should be available for free online; a small taste of the automated side of such checks is sketched below.
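
To understand what the free automated checkers actually do under the hood, I wrote a tiny sketch of one check myself: flagging images without alt text, which relates to the text-alternatives side of the 'Perceivable' principle. It assumes Python with the beautifulsoup4 library installed and only covers a sliver of one guideline; a real audit still needs human judgement and assistive technology testing:

```python
# A sliver of an automated audit: flag <img> elements with missing or
# empty alt text. Assumes the beautifulsoup4 package is installed.
from bs4 import BeautifulSoup

def missing_alt_text(html):
    """Return the img tags in the page that lack a non-empty alt attribute."""
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img")
            if not img.get("alt", "").strip()]

page = """
<html><body>
  <img src="logo.png" alt="Shop logo">
  <img src="basket.png">
  <img src="spacer.gif" alt="">
</body></html>
"""

for img in missing_alt_text(page):
    print("Missing alt text:", img.get("src"))
```

Note that this crude check also flags deliberately empty alt attributes on decorative images, which is exactly the kind of case where a human auditor has to step in.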

Friday, 31 October 2008

Week 4: Web Standards

This week we looked at web standards, particularly in relation to AX. Not the most interesting area in HCI in my opinion, but very much a necessary one for correct practice. The main focus of the talk was on WCAG, the guidelines for website design. Breaking these guidelines is not a criminal offence, but it can be unlawful; in other words, people who suffer from unlawful practice can take the offenders to court. Luckily this hasn't happened in the UK, but in the suing culture of the USA there have been several cases. I think there needs to be a better way of enforcing web standards, one which doesn't involve hundreds of unnecessary lawsuits that cost disabled people's and organizations' time and taxpayers' money. If, for example, following good web standards (recognized by an algorithm) placed websites higher in search results, then website designers would be positively reinforced to comply with such standards. Web standards such as WCAG 1 are also criticised for being too specific and detached from the variety of practical issues involved in developing usable and accessible websites. Researchers such as Sloan et al. (2006) have proposed more flexible frameworks, such as the 'Tangram' model (a puzzle with many solutions) and 'blended learning' approaches that treat each disability as a specific learning style to tackle. Ultimately web standards are important, particularly with the rise in web use over the past decade, and it is important that standards are implemented in ways that are fair to both users and developers!

Friday, 24 October 2008

Week 3: Designing for AX

This week we delved further into the topic of universal usability by looking at accessibility for impaired users. I hadn't realised that commercial websites actually have to be made accessible to disabled people in certain ways by law. But even with legal issues aside, I think it's fair to say we have a moral obligation to help disabled people work with technology. Moreover, with an aging population the issue of accessibility will become ever more important, since we are all likely to want to make use of it when we are older. The point was raised in class that this may not be the case if, in the future, bioengineering and nanotechnology provide solutions to people's impairments so that they can use the same technology as everyone else. But since it may be quite some time before such advances are realised, accessibility will certainly be important for a while. Furthermore, designing interactive products with universal accessibility in mind usually makes them more usable for everyone (not just disabled people), and can bring greater profits for the companies making them.

 

George, who gave the presentation this week, showed us a video of a blind man using a screen reader to operate his PC. The speed to which he could crank the voice up and still understand it was incredible! It is far faster than I can read text visually. I would hypothesise that this may be due to the speed of language encoding in the auditory system compared to the visual system (after all, we do communicate predominantly through the spoken word). One thing blind people struggle with when interacting with computers is spatial location on the screen: the speech output of a screen reader gives no clue as to where things are. It seems to me that the best way to convey this is with haptic devices, either as a pad the user can touch with their hands, or somehow attached to the user (on the back or tongue) so that tactile sensations can be delivered where appropriate and the user can build a spatial mental map of the screen. I have no doubt we will see some very interesting solutions to accessibility in the near and distant future.
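
To sketch the kind of mapping I have in mind (purely my own speculation, not an existing device), each on-screen element's bounding box could be projected onto a coarse grid of tactile cells, so the user can feel roughly where the menus and buttons sit:

```python
# Speculative sketch: project on-screen element bounding boxes onto a
# coarse grid of tactile cells so a user could feel the rough layout of
# an 800x600 screen on a small haptic pad.

GRID_W, GRID_H = 8, 6          # cells on the imagined haptic pad
SCREEN_W, SCREEN_H = 800, 600  # screen size in pixels

def layout_to_pad(elements):
    """elements: list of (x, y, width, height) rectangles in pixels."""
    pad = [[" " for _ in range(GRID_W)] for _ in range(GRID_H)]
    for (x, y, w, h) in elements:
        for gy in range(GRID_H):
            for gx in range(GRID_W):
                cx = (gx + 0.5) * SCREEN_W / GRID_W
                cy = (gy + 0.5) * SCREEN_H / GRID_H
                if x <= cx <= x + w and y <= cy <= y + h:
                    pad[gy][gx] = "#"   # raise this cell
    return pad

# A menu bar across the top and a button near the bottom right
elements = [(0, 0, 800, 60), (600, 500, 150, 60)]
for row in layout_to_pad(elements):
    print("".join(row))
```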

 

I also looked at two papers in some detail Allen et al (2008), which showed how domain experts are involved in designing assisted technology, and Newell et al (2007), which showed how to use older people in the design process. I found the Newell paper more interesting. Older people require different methods for collecting data, including more social events and the use of trained actors in some cases to increase involvement.

Sunday, 19 October 2008

Week 2: Universal Usability

This week we looked at universal usability, the idea that we could make technology accessible and usable for everyone in society, regardless of culture, language, impairment, etc. The concept was pioneered by the computer scientist Ben Shneiderman and is now a huge area of research in the HCI field. Universal usability is an incredibly challenging goal (in fact I think true universal usability may be impossible). For example, since symbols are almost always culture- or language-specific, how do we create symbols to which everyone attaches the same semantics? And how do blind users understand the 2D/3D space of a system interface? One interesting possibility for blind people is multimodal technology. Research has shown that, through the use of a head-mounted camera and a touch stimulator on the back or tongue, a visually impaired user is able to take in visual information tactilely, a truly interesting feat that demonstrates an incredible possibility for technology and, more incredibly, the adaptiveness of the human mind. It seems to me that the best way to achieve universal usability is simplicity in design and transparency between the user and the interface. No user should have to think about how to use technology; it should come to them like an instinctive sense. Technologies such as haptics and Direct Brain Interfaces are surely then important areas of research!

Sunday, 12 October 2008

Course start

I've created this blog to post my learning diary entries for a module I'm taking this year: HCCS advanced topics. I will link my other blogs to it, and comments from any readers are welcome on anything I muse, comment or rant about.

We have been told to look out for usability and accessibility issues in the real world throughout our studies, to serve as discussion points in class and also, I guess, to get us thinking in the right way. Two things I can think of already. Firstly, I bought a new sound system for my laptop a few weeks ago and you can only turn it on and off and adjust the EQ levels with the remote. So if I, or any other unfortunate person who invested £50 in it, lose the remote, we're in trouble! Secondly, I lost my passport the other day and the online lost-passport form requires you to know your passport number to continue to the next screen. Who designs these things????