CS92 Writing Assignment for April 7, 2003
Marianne Harrison, Vincent Cheng, Michelle Higa

Part I
Metros, S. E. and Hedberg, J. G. (2002). More Than Just a Pretty (Inter) Face: The Role of the Graphical User Interface in Engaging eLearners. The Quarterly Review of Distance Education, 3(2), 191-205.

In "More Than Just a Pretty (Inter) Face: The Role of the Graphical User Interface in Engaging eLearners" (2002), Metros and Hedberg propose a three-phase approach to designing educational software that inspires and facilitates active user involvement. Their main objective is to convey the importance of learner engagement and to provide everyone involved in the design process with a framework for creating effective, engaging interfaces for educational software.

A well-designed, goal-specific graphical user interface (GUI) is key to an effective learning experience, because it clearly communicates the functions of the program, the actions available to the user, and the consequences of such actions in an intuitive and appealing way. Good GUI design depends on more than just good graphics; it depends on defining the goals of the project and an organization scheme that fits these goals, even before the visual implementation comes into play. Metros and Hedberg suggest a process for developing successful software that builds from an analysis of the needs and uses of the program to their realization in the specific elements of the interface. In the following summary we will outline the three phases of the model proposed by Metros and Hedberg, some of the ways they were implemented by the authors in a program called "123 Count with Me", and how they might relate to our own project in CS92, "Amy's First Shot".

Phase 1 of the model takes into account a project's specifications: the intended audience, the content of the program and how it is to be used, and the general goals of the project. In communication with the "project originator" and ultimate user, the designer may identify and refine these parameters and thus form the foundation necessary for meaningful and directed organization of the software's functions.

Phase 2 addresses the method by which learners will access and manipulate information. The challenge of this phase is to make the flow of information as flexible and intuitive as possible. Some good rules of thumb are to make the user's available choices as apparent as possible, to provide feedback on how chosen actions will affect the way the program progresses, and to always make these actions reversible (to reduce the stress of making a mistake or changing one's mind). The intended audience should play a substantial role in this phase, because the way a learner interacts with the computer will hinge on his or her skill level, interests, background, and so on. One way "123 Count with Me" addressed potential individual differences in its target audience was to incorporate many different ways to access the various pieces of information the program had to offer. For instance, learners could choose to access content by clicking directly on objects in a classroom scene or by using pull-down menus.

In "Amy's First Shot", we have organized the material so that it can be accessed in different ways, depending on the aim of the learner. Very directed learners who have a specific question in mind might access information by means of the "library", while more exploratory learners might progress through the material in a less structured way by clicking on Amy's thought bubbles. Another way we address how learners will manipulate information in "Amy's First Shot" is in fact quite similar to "123 Count with Me": both use a familiar setting as a metaphor, a classroom in "123" and a doctor's office in "Amy's First Shot". The metaphor makes it easier for learners to relate the content and organization of the program to things they have experienced in real life, and it provides a consistent overarching theme. It also serves as a "home base" from which all parts of the program may be accessed (separate windows open from the main window, leaving the doctor's office scene as a stable visual reference). In addition, making objects "clickable" provides a straightforward, visual way of accessing content that requires little previous computer experience (e.g., clicking on the bookshelf in the doctor's office opens the "library", while clicking on Amy generates questions she has for the doctor). Perhaps we could take this further by making other objects in the office clickable as well, for even greater variety of access to information (as was suggested after our storyboard presentation).

Phase 3 is about making the right interface design decisions based on the insights and organizational structure laid out in the previous two phases. Metros and Hedberg believe, quite rightly, that the aesthetic "look and feel" of an interface should be determined by functionality and usability. If the elements of an interface evolve out of an analysis of the issues addressed in Phases 1 and 2, and are informed by knowledge of the cognitive factors governing how people perceive and conceptually organize stimuli, they will be more meaningful and interpretable to the user than if they existed merely as arbitrary "decoration" or as a consequence of using a particular authoring tool. For instance, principles of visual perception can tell designers how a visual metaphor should be realized. In "123 Count with Me", the visual scene of the classroom is organized according to the Gestalt principle of grouping by proximity: children who are close together are visually grouped into one clickable "item", and buttons with similar functions are likewise grouped close together so they reinforce one another. Other perceptual cues such as shading, texture, and linear perspective can create a sense of depth in the depicted scene, assigning objects to different depth planes and effectively extending the boundaries of the space in which the user may interact. Color can be used to increase or decrease the saliency of certain items, or to unify or group items together. Using such visual cues in "Amy's First Shot" will help learners parse the scene visually, making it easier to locate the objects that have corresponding actions and to navigate in general.

Extent and quality of learner engagement depends on the decisions made in all three of the phases just described, from initial description of the project space to designing the final visual components of the GUI. Developing the GUI in this manner will, as Metros and Hedberg argue, help to "ensure that the learners focus on learning rather than operating the software."

Part II
Mann, B., Newhouse, P., Pagram, J., Campbell, A., and Schulz, H. (2002). A comparison of temporal speech and text cueing in educational multimedia. Journal of Computer Assisted Learning, 18, 296-308.

In "A Comparison of Temporal Speech and Text Cueing in Educational Multimedia", the researchers hypothesize that children will learn more from educational multimedia when information is spoken rather than displayed as text on the screen. Previous studies had already supported this hypothesis for adults and college students. However, there are questions about whether those findings can be generalized to younger populations, and current evidence in this area is scant and contradictory, so this study was devised to test the hypothesis more conclusively.

The researchers believed that speech cueing would be more effective than text cueing because it would better focus student attention on conceptual understanding. Previous studies have shown that readers tend to catch only the surface features of text, while listeners are better able to pick out the important details and grasp the underlying meaning. Furthermore, with the visual pathway already occupied by other media (graphics or animation), it is more effective to send conceptual information through the auditory pathway than to overload the visual one.

The study had a subject pool of 42 twelve-year-old children in their final year of a private primary school in Western Australia that had a pre-existing development relationship with the researchers' university. A customized piece of educational software was written to teach students how a four-stroke combustion engine works. To control for confounding factors, a stratified random sampling method was implemented to ensure that both experimental groups had equal numbers of students with characteristics that might influence learning ability (gender, reading ability as determined by teachers, auditory-visual-tactile-kinesthetic learning style inventory, attitudes toward computers, and prior knowledge of the subject). To measure learning, students were given problem-solving questions testing their conceptual knowledge of the combustion engine before, immediately after, and some time after using the program.
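The stratified assignment described above can be sketched in a few lines of Python. This is only an illustration of the general technique, not the study's actual procedure or data: the student names, the "reading" field, and the condition labels are all hypothetical.

```python
import random

def stratified_assign(students, key, conditions=("speech", "text"), seed=0):
    """Randomly assign students to conditions, balanced within each stratum.

    Students are first grouped by the stratifying characteristic (e.g.,
    teacher-rated reading ability), then each stratum is shuffled and split
    evenly between the experimental conditions.
    """
    rng = random.Random(seed)
    strata = {}
    for s in students:
        strata.setdefault(s[key], []).append(s)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, s in enumerate(members):
            assignment[s["name"]] = conditions[i % len(conditions)]
    return assignment

# Hypothetical roster: two high-ability and two low-ability readers.
students = [
    {"name": "A", "reading": "high"}, {"name": "B", "reading": "high"},
    {"name": "C", "reading": "low"},  {"name": "D", "reading": "low"},
]
groups = stratified_assign(students, key="reading")
# Each reading-ability stratum contributes one student to each condition,
# so neither group ends up with a disproportionate share of strong readers.
```

Repeating this within every stratum (gender, learning style, attitude, prior knowledge) is what equalizes the two groups on each potentially confounding characteristic.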

The researchers found no statistically significant differences in learning between the children who used the sound-cued program and the children who used the text-cued program (although the average improvement in problem solving for the sound-cued children was slightly larger). They therefore conclude that using sound-based rather than text-based multimedia with children does not improve learning, a rejection of their initial hypothesis.
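The kind of comparison behind this finding can be illustrated with a small worked example: compute each child's pre-to-post gain in problem-solving score and compare the two groups' mean gains with a two-sample (Welch's) t statistic. The gain scores below are invented for illustration and are not the study's data.

```python
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)  # sample variances (n - 1 denominator)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical post-minus-pre gains in problem-solving score.
speech_gains = [3, 4, 2, 5, 3, 4]
text_gains   = [3, 2, 4, 3, 2, 3]

t = welch_t(speech_gains, text_gains)
# The speech-cued mean gain is a bit larger, but |t| stays well below the
# roughly 2.0 needed for significance at these sample sizes, mirroring the
# study's pattern: a small advantage for sound that is not significant.
```

A positive but small t like this is exactly the "slightly larger but not significant" outcome the paper reports.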

However, the researchers believe that the sound cues grabbed the children's attention better than the text cues, based on anecdotal observations: sound cues caused the room to fall silent and the children's eyes to fix on the screen, while text cues left the students restless, their eyes wandering around the room. Thus, sound-based cues do lead to better focusing of attention than text-based cues. The researchers attribute the lack of a statistically significant difference to the children's general inability to combine multiple streams of information into a conceptual understanding, owing to a still-developing memory system: the children's memory and mental processes could not take advantage of the greater stream of information afforded by the increase in attention.

The main strength of this study is that it is a well-designed experiment; for the most part, the effects of confounding factors have been controlled for with stratified sampling. However, its main weakness also derives from the fact that it is a controlled experiment. The sample was limited to 12-year-old children in a single private primary school in Western Australia, learning about how a combustion engine works. The findings might have been different with early elementary or middle/high school students, and generalizing from a single Australian school to other countries is impossible. Most importantly, the task of learning how a combustion engine works may have been overly complex. With an easier learning task, the children might have learned more readily, shown a larger improvement in problem solving, and perhaps produced a large enough change for a statistically significant advantage of sound-cued over text-cued multimedia. Thus, more studies should be done to: 1) look for a statistically significant difference with different, and perhaps easier, subject matter, and 2) generalize these findings beyond a limited age range and location.

Even with the limited data and conclusions from this study, I believe that teachers should recognize the importance of sound in focusing attention, and perhaps thereby improving learning, not just in multimedia software but also in general education (e.g., lecturing versus writing on the board).