Two days ago we presented "storyboards" for our perception applets to the CS92 class, rough sketches illustrating what the finished applets will look like, and we received some preliminary feedback from the class.
The storyboards are preliminary sketches of the interface and functionality of the finished applets; naturally, they are very rough, presenting only the "big picture" idea, and the details will be refined as the projects are actually implemented and tested. The storyboards can currently be viewed here. (Unfortunately, the accompanying description of each applet's functionality that we presented in class is not on the web page at present.)
One of the things we wanted to ask was what extensions would be useful for these applets. The initial specifications, as set forth by Professor Welch, were rather focused and simplified. For example, the goal of the human motion applet was to study how the human perceptual system forms a mental picture from moving dots, allowing users to remove certain dots to see what separates the image of human motion from random dot movement. An option to rotate the figure would also have addressed this topic (studies have shown that people are good at identifying a running figure from a side view or a frontal view, but are much poorer at identifying it from a 3/4 view, for example), but the professor declined to have this option implemented in the applet.
The comments and questions we received from the class fell into three main categories: interface comments, specific tasks for the user while running the applet, and issues of computer simulation.
The opinions on our user interface were generally positive. One person did call the interface "horribly boring," but given that the applets are meant to be focused simulations of specific phenomena, a utilitarian interface is better than one full of unnecessary bells and whistles. Most of the other respondents liked the initial user interface, saying that it looked professional and was appropriate for a project aimed at college students (one called our design "simple yet effective -- the best kind"). There were a number of detailed suggestions about the user interface, as was to be expected given that the interfaces presented in the storyboard were coarse, broad-view ideas, and we intend to act on those suggestions as we move further into the project.
Another issue, which we originally brought up in the storyboard presentation merely as "trivia," was that of computer simulation, especially in the color applet. Several people remarked that the fact that the color matching is simulated in software cheapens the experience. The color applet deals with color matching: the fact that any pure wavelength of light can be perceptually matched by mixing multiple wavelengths of light together at various intensities. Pure cyan light, for example, can be matched by mixing pure blue and pure green. The applet asks students to match "pure" wavelengths by mixing various colors together. A computer, however, cannot actually mix light of different wavelengths, so any result of mixing is necessarily the output of an algorithm. In essence, then, from the algorithm's point of view, if a student selects green and blue light, the result is cyan simply because the algorithm was programmed to say that it is cyan. The question, therefore, was whether students would actually find this tool useful, since it could be argued that the whole thing is a "hack."
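To make the point concrete, here is a minimal sketch of what "programmed to say it's cyan" means in practice. This is a hypothetical illustration, not the actual applet code: additive mixing of RGB triples, where green plus blue comes out cyan purely because the arithmetic says so.

```python
# Hypothetical sketch of additive color mixing, as an applet might do it.
# Colors are (R, G, B) tuples in 0-255. This is arithmetic, not optics:
# no actual wavelengths are combined anywhere.

def mix(*colors):
    """Additively mix RGB colors, clamping each channel at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

print(mix(GREEN, BLUE))  # (0, 255, 255) -- "cyan," by construction
```

The "mix" happens entirely in channel arithmetic; the result looks like cyan only because the display maps that triple to a cyan-looking pixel.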
Realistically, though, this is an inevitable consequence of using computers. Not only can we not actually mix green and blue wavelengths of light, and therefore need to preprogram the result to be cyan, we can't even show a pure wavelength to begin with: there is no way to present pure yellow light, for example, using a computer's RGB color guns. Nevertheless, I don't believe that this detracts from the applet at all. First of all, while these considerations may occur to us as programmers, it is unlikely that an end user would actually try to unravel the algorithm behind the program for its own sake. Secondly, these applets are supplements to the class, not replacements for it; they are designed not so much to prove that these concepts are true as to let students, who will already have learned the concepts in class, experiment with them and see what results. Simulation tools are used in other classes as well, and in general, if the results of a simulator agree with concepts learned in class, students take those results at face value. For example, last semester I took a biology course that used Populus to model population evolution. No one would think for a moment that the computer actually grows a real-life population in a petri dish, and yet the results of the population simulation are accepted, even though they have been "programmed" to come out the way they did, since they are based on principles learned in the class itself; the user accepts the program, if not as reality itself, then at least as an accurate model of reality. The computer allows simulations that wouldn't otherwise be possible, whether it's growing 200 generations' worth of bacteria or seeing what the world would look like if you lacked some of your color photoreceptors, and as long as it's not being presented as "proof" of a principle's veracity, only as a model and a supplementary tool, I don't foresee problems with the issue of computer simulation.
The final main issue touched on by the storyboard comments was that of the tasks a user has to perform. The applets, as presented in the storyboard, were completely open-ended; the user is basically plopped into a "model" of various perceptual phenomena and left to their own devices. Aside from the color matching applet, which at least provided a task for the user, most of the other applets provided no real direction whatsoever. Not only did the applets not provide tasks for the user to perform, they didn't even highlight specific aspects of these perceptual phenomena that the student might want to look at. For example, one respondent suggested that the color applet show a video clip of someone driving: once in normal vision and once as it would appear to someone who is red-green blind. This would highlight the specific phenomenon and provide more direction than leaving the user to just play around with cones aimlessly. On the one hand, these applets will probably only be made available for students to play around with on their own time, and the relevant span of time for each applet will probably only be a few days' worth of lectures, so the applets need not provide a semester-long "mission" of tasks for students to complete. On the other hand, complete directionlessness, a sense of "here are some perceptual variables, play around with them," is not terribly effective either. Specific direction is not an issue that we had discussed in great depth prior to the storyboards; it is something we'll need to take into consideration.
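Even the suggested red-green-blind view would itself be a simulation in the same sense as the color matching. A toy sketch of the idea (my own illustration, not how the applet would do it; a physiologically accurate simulation would transform colors through LMS cone space rather than raw RGB) is to collapse the red and green channels so that reds and greens become indistinguishable:

```python
# Toy sketch of a red-green-deficient view of a pixel. This is a crude
# assumption for illustration only: real dichromacy simulations operate
# in LMS (cone response) space, not directly on RGB channels.

def simulate_red_green_blind(pixel):
    """Replace the red and green channels with their average, so pure
    red and pure green map to the same color; blue is untouched."""
    r, g, b = pixel
    avg = (r + g) // 2
    return (avg, avg, b)

print(simulate_red_green_blind((255, 0, 0)))  # pure red   -> (127, 127, 0)
print(simulate_red_green_blind((0, 255, 0)))  # pure green -> (127, 127, 0)
```

Applied frame by frame to a driving clip, a transform like this would make traffic lights and brake lights the obvious, concrete things for a student to look at, which is exactly the kind of direction the respondent was asking for.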