A number of years ago, just as Windows 3.1 was being replaced by Windows 95 in the office I was working in, one of the newest employees whom I worked with was having an incredibly hard time learning to use a mouse.
She held it gingerly, as if afraid she might break it. She had just been given a new computer; her old one was a DOS machine with a keyboard-driven menu. It really didn't look like she would get the hang of it.
About a year later, she was actively teaching others in the office how to use a fairly complex software system that I had helped build and roll out. Without Solitaire, she might never have mastered that mouse and gotten as far as she did.
As I watched her initial efforts to use the mouse, and the frustration on her face (she had worked for many years in offices that used typewriters, and learning to use a computer was a big step to begin with), I remembered hearing that Microsoft included the game with their operating systems to help people learn to use a mouse. I figured I had nothing to lose, and showed her how to get to the game.
As we worked on fine-tuning this new software that she developed an expertise in, she provided a lot of great feedback on the workflows involved in performing different tasks on the system. I think that early experience with computers attuned her to how people interact with computer systems. Our experience together made me pay a great deal of attention to human-computer interface issues.
Three new papers from researchers at Stanford University describe studies involving a user interface that may make it easier for people with disabilities, and people without, to navigate around a computer screen, scroll down pages, and switch between applications. The authors of these papers tell us:
For our research we chose to investigate how gaze-based interaction techniques can be made simple, accurate and fast enough to not only allow disabled users to use them for standard computing applications, but also make the threshold of use low enough that able-bodied users will actually prefer to use gaze-based interaction.
Might the gaze-based system described in these papers be an interface that we use in tomorrow's computers? I can't answer that for certain, but if it is, I might be playing a fair amount of Solitaire to get the hang of it.