Notes:

Notes from May 11:

The project is now officially complete. The applets can be viewed here.

Looking back on the project as a whole, I would call it a success. Professor Welch is pleased with the applets, which was of course the project's primary goal.

One thing that set our project apart from all of the other CS92 projects this year, I think, was the relatively individual nature of the work. Our group had the task of designing a number of smaller, unrelated programs, whereas most of the other groups worked on a single larger project. If everyone had worked on each of the applets, the coordination would probably have been more complicated than it was worth -- coordinating classes and rewriting each other's support code, falling into a "too many cooks" problem -- especially because most of the applets shared no subject matter or common code. As a result, the project was divided between the four of us fairly early on, with only moderate interaction between everyone when designing and building the applets.

That, along with the relatively smaller scale of our applets (in terms of the change they would cause in the classroom dynamic), I think, defined our project. Had we worked on a single large project, the course of our program design would have been very different. We worked predominantly as four individuals who met as a group only to plan large-scale concerns, such as demoing and when to present the applets to Professor Welch, rather than as a group that planned the entire program as a whole. This is neither better nor worse than working as a single group per se -- I think the nature of our particular project demanded that we be more individualistic than the others -- and yet it would have been interesting to work on a single larger project, as a contrast.

The aforementioned aside, however, the applets perform the tasks they were designed for, and both Professor Welch and we are pleased with the result. Now all that's left is to find replacement GIFs, one of these days, for those ugly backgrounds currently bundled with the sceneview applet...

    Notes from May 8:

    We met with Professor Welch today -- we had been unable to meet with her the previous week, since she was out of town -- and showed her the applets we had designed. She was quite impressed with them as a whole. She offered a few minor suggestions, such as altering the buttons somewhat to make operations easier and thickening lines to make them easier to see, but all in all she seemed to like the applets a lot. This, of course, eases the worries I had after our May 2 demo.

    Notes from May 2:

    We presented a demo of our projects today to the class -- or something akin to a demo, since a large number of our applets were only semi-functional. (Click here to see one of the applets presented.) The lack of functionality was due in part to a major operating system failure I suffered over the weekend, which consumed a considerable amount of time that would otherwise have been spent preparing for the demo.

    But while that may be an explanation, it's not an excuse, and it doesn't really get at the heart of the matter. The applets, as presented today, did not give an accurate view of their final state, and as such, we have not been able to show Professor Welch our applets and ask for feedback, as there is nothing at this point to really show her. This is, of course, something that should be remedied.

    Part of the reason that we have (or haven't, depending on how you look at it) progressed in the manner that we have is the nature of our project. Unlike several of the other CS92 projects, ours did not require much pedagogical deliberation before the actual work could begin. In fact, the basic design of the applets was pretty much set from the start: there were minor design choices to be made, of course, but the "big picture" of the applets was fixed.

    This "taskmaster" nature of the project lessened the import of constant feedback, at least for me; I was of the mindset that I would just work on the applet myself and make a few minor adjustments if the professor requested them once complete, but that I could just go ahead with the general applet design, confident that it would be fine as is. After today's demo, however, and after speaking with Professor Blumberg, I am notably worried that this may not be the case. We had planned to meet with Professor Welch later this week once our applets were finished; it becomes all the more critical now to do so.

    Notes from March 23:

    We presented "storyboards" for our perception applets two days ago to the CS92 class, illustrating rough sketches about what the finished applets would look like, and we received some preliminary feedback from the class.

    The storyboards are preliminary sketches of the interface and functionality of the finished applets; naturally, they are very rough, presenting only the "big picture" idea, and details will be refined as the projects are actually implemented and tested. The storyboards can currently be viewed here. (Unfortunately, the accompanying description of each applet's functionality that was presented in class is not on the web page at present.)

    One of the things that we wanted to ask was what extensions would be useful for these applets. The initial specifications, as set forth by Professor Welch, were rather focused and simplified. For example, the goal of the human motion applet was to study the way the human perceptual system forms a mental picture from random moving dots; the applet allows users to remove certain dots to see what separates the human-motion image from random dot movement. An option to rotate the figure would also have focused on this topic (studies have shown that people are good at identifying a running figure from a side view and a frontal view, for example, but are much poorer at identifying it from a 3/4 view), but the professor declined to have this option implemented in the applet.
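    As a rough illustration of the removable-dots idea, such a display could be sketched along the following lines. (This is a hypothetical sketch of mine, not the applet's actual code; the class name is invented, and the sinusoidal paths merely stand in for real key-point motion data.)

        import javax.swing.*;
        import java.awt.*;
        import java.awt.event.*;
        import java.util.Arrays;

        /** Sketch of a point-light display: "joint" dots move along canned
         *  paths, and clicking a dot removes it, so the viewer can see how
         *  much of the motion percept survives with fewer points. */
        public class PointLightSketch extends JPanel {
            // Placeholder motion: each dot swings horizontally with its own
            // phase, standing in for real key-point data of a moving figure.
            private final double[] phase = {0.0, 0.7, 1.4, 2.1, 2.8, 3.5};
            private final boolean[] visible = new boolean[phase.length];
            private double t = 0;

            public PointLightSketch() {
                Arrays.fill(visible, true);
                addMouseListener(new MouseAdapter() {
                    @Override public void mousePressed(MouseEvent e) {
                        // Hide any dot within 8 pixels of the click.
                        for (int i = 0; i < phase.length; i++) {
                            if (dotPosition(i).distance(e.getPoint()) < 8) {
                                visible[i] = false;
                            }
                        }
                    }
                });
                new Timer(30, e -> { t += 0.1; repaint(); }).start();
            }

            private Point dotPosition(int i) {
                int x = (int) (150 + 60 * Math.sin(t + phase[i]));
                int y = 40 + i * 35;  // dots stacked like joints on a body
                return new Point(x, y);
            }

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                g.setColor(Color.BLACK);
                g.fillRect(0, 0, getWidth(), getHeight());
                g.setColor(Color.WHITE);
                for (int i = 0; i < phase.length; i++) {
                    if (visible[i]) {
                        Point p = dotPosition(i);
                        g.fillOval(p.x - 4, p.y - 4, 8, 8);
                    }
                }
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Point-light sketch");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(new PointLightSketch());
                f.setSize(320, 300);
                f.setVisible(true);
            }
        }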

    The comments and questions we received from the class fell into three main categories: interface comments, specific tasks for the user while running the applet, and issues of computer simulation.

    The opinions on our user interface were generally positive. One person did comment that the user interface was "horribly boring," but considering that the applets are supposed to be purpose-built simulations of specific phenomena, a utilitarian interface is better than one full of unnecessary bells and whistles. Most of the other respondents liked the initial user interface, saying that it looked professional and was appropriate for a project aimed at college students (one commented that our design was "simple yet effective -- the best kind"). There were a number of detail suggestions directed toward the user interface, as was to be expected given that the interfaces presented in the storyboard were coarse, broad-view ideas, and we intend to act on those suggestions as we move further into the project.

    Another comment, which we originally brought up in the storyboard presentation merely as "trivia," was the issue of computer simulation, especially in the color applet. Several people remarked that the fact that the color matching is simulated in software cheapens the experience. One color applet, for example, deals with color matching and the fact that the appearance of any pure wavelength of light can be matched by adding multiple wavelengths together at various intensities. Pure cyan light, for example, can be simulated by mixing pure blue and pure green. The applet asks students to match "pure" wavelengths by mixing together various colors. A computer, however, cannot actually "mix" color wavelengths together, so any result of mixing would necessarily be the output of an algorithm. In essence, then, from the algorithm's point of view, if a student selected green and blue light, the result would be cyan simply because the algorithm was programmed to say that it was cyan. There was a question, therefore, of whether students would actually find this tool useful, as it could be argued that it is a "hack."
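    To make "programmed to say that it was cyan" concrete, here is a minimal sketch of what such an additive-mixing routine might look like, assuming the usual model of summing and clamping RGB channels. (The class and method names are placeholders of mine, not our applet's actual code.)

        import java.awt.Color;

        /** Sketch of additive "mixing": the cyan result is pure channel
         *  arithmetic, not a physical combination of light wavelengths. */
        public class ColorMixSketch {
            static Color mix(Color a, Color b) {
                // Sum each channel and clamp to the displayable range.
                return new Color(
                    Math.min(255, a.getRed()   + b.getRed()),
                    Math.min(255, a.getGreen() + b.getGreen()),
                    Math.min(255, a.getBlue()  + b.getBlue()));
            }

            public static void main(String[] args) {
                Color blue  = new Color(0, 0, 255);
                Color green = new Color(0, 255, 0);
                // Prints java.awt.Color[r=0,g=255,b=255] -- cyan,
                // by arithmetic alone.
                System.out.println(mix(blue, green));
            }
        }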

    Realistically, though, this is an inevitable consequence of using computers. Not only can we not actually mix green and blue wavelengths in analog -- hence the need to preprogram their combination to be cyan -- we can't even show a pure wavelength to begin with: there is no way to present pure yellow light, for example, using a computer's RGB color guns. Nevertheless, I don't believe that this detracts from the applet at all. First of all, while these considerations may occur to us as programmers, it is unlikely that an end user would actually try to unravel the algorithm behind the program for its own sake. Secondly, these applets, as supplements to the class rather than replacements for it, are designed not so much to prove that these concepts are true as to allow students, who would have already learned the concepts in class, to experiment with them and see what results.

    Simulation tools are used in other classes as well, and in general, if the results of a simulator agree with concepts learned in class, students take those results at face value. For example, last semester I took a biology course that used Populus to model population evolution. No one would think for a moment that the computer actually grows a real-life population in a petri dish, and yet the results of the population simulation are accepted, even though they have been "programmed" to come out the way they did, because they are based on principles learned in the class itself; the user accepts the program, if not as reality itself, then at least as an accurate model of reality. The computer allows simulations to be performed that wouldn't otherwise be possible, whether it's growing 200 generations' worth of bacteria or seeing what the world would look like if you lacked your color photoreceptors, and as long as it's not being presented as "proof" of a principle's veracity, only as a model and a supplementary tool, I don't foresee problems with the issue of computer simulation.

    The final main issue touched on by the storyboard comments was that of the tasks a user has to perform. The applets, as presented in the storyboard, were completely open-ended; the user is basically plopped into a "model" of various perceptual phenomena and left to his or her own devices. Aside from the color matching applet, which at least provided a task for the user, most of the other applets provided no real direction whatsoever. Not only did the applets not provide tasks for the user to perform, they didn't even highlight specific issues about these perceptual phenomena that the student might want to look at. For example, one respondent suggested that a video clip of someone driving be shown for the color applet: once in normal vision, and once as it would appear to someone who is red-green colorblind. This would highlight the specific phenomenon and provide more direction than leaving the user to just play around with cones aimlessly. On the one hand, these applets would probably only be made available for students to explore on their own time, and the relevant span of time for each applet would probably only be a few days' worth of lectures, so the applets need not provide a semester-long "mission" of tasks to complete. On the other hand, complete directionlessness -- just a sense of "here are some perceptual variables, play around with them" -- is not terribly effective either. Specific direction is not an issue that we had discussed in great depth prior to the storyboards; it is something we'll need to take into consideration.

    Notes from early March:

    We have divided up the labor of the project in a preliminary way. (These tasks are not binding.) Benjamin Smith and Gary Ault will work on the development of the Perception of Color models. These models will consist of various Java applets that illustrate different aspects of color perception. David Emory will work on the Perception of Human Motion model. The animations will be created by identifying key points in the human anatomy and using those points to animate the figure's motion. Shiwon Choe will work on the 2D representation of a 3D scene model. This applet will allow the user to alter the attributes of the displayed objects in order to investigate how a 3D scene is perceived from its 2D projection.
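    For the 2D-representation-of-a-3D-scene applet, the underlying geometry is presumably the standard pinhole projection, in which a point's screen position is its 3D position divided by its depth. A minimal sketch of that mapping follows; the names are my own placeholders, not the applet's actual code.

        /** Sketch of pinhole projection: a 3D point lands on the image
         *  plane at (focal * x / z, focal * y / z). */
        public class ProjectionSketch {
            static double[] project(double x, double y, double z, double focal) {
                return new double[] { focal * x / z, focal * y / z };
            }

            public static void main(String[] args) {
                // The same offset projects smaller as the object moves away --
                // the kind of depth cue the applet would let users manipulate.
                double[] near = project(1, 1, 2, 100);  // (50.0, 50.0)
                double[] far  = project(1, 1, 4, 100);  // (25.0, 25.0)
                System.out.printf("near: (%.1f, %.1f)  far: (%.1f, %.1f)%n",
                        near[0], near[1], far[0], far[1]);
            }
        }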

    Our next meeting is scheduled for March 18, 2000.


© 2000 Brown University CS Department