http://www.cs.toronto.edu/~gvwilson/reading/sillito-questions-program-change.pdf
This paper presents a study in which programmers (students and professionals) worked on change tasks while thinking out loud; the authors then categorized the questions they asked themselves (and their debugger). These fall into 44 distinct categories under 4 main groups. Each category is then analyzed to see whether existing tools can directly answer the questions it contains.
I found my mind wandering while I was 'reading' this paper. Not that the subject matter is uninteresting; far from it. I kept coming up with ideas for new tools throughout the read, which eventually started to detract from the primary text itself. Anyway, the following are some thoughts:
There is mention of how programmers divide their workspace to show, for example, code alongside the executing program, or two code files, using emacs screen splitting, multiple windows, or multiple monitors. I wonder what the results would be in a study where we a) measure a programmer's productivity with one monitor, b) add a second monitor (I'm pretty sure this has been done before) and let them get used to it (productivity should plateau), then c) remove the second monitor. I predict that productivity will drop below that measured in a) for a while, then gradually recover to a nominal level.
Of the two groups studied (students and professionals), each had a single category of question (of the 44 possible) that was asked vastly more often than all the others. For students, it was "Where is this method called or type referenced?". For professionals, it was "What will be (or has been) the direct impact of this change?". A couple of things come out of this.
First off, students seem more concerned with direct program behavior and structure, while professionals are concerned with the impact of a code change, which seems like a much more organization-oriented behavior. I'm having trouble expressing my exact idea here, so I'll come back to it. Bottom line: professionals are less hack-and-slash than students.
Secondly, there are tools that address the students' question. Why aren't they using them? The tools for the professionals' question, however, are lacking. Can we make them better?
Exemplar-driven documentation. The paper discusses finding examples, within the subject code base, of the kind of operation one is trying to create or modify, and using them as templates for the new feature or modification. I wonder if this could be applied not only to the target code base but to any code base (or indeed every code base). Let's say, for example, I want to implement a convolution matrix to do a Gaussian blur over a Java BufferedImage. Imagine a search engine that searched the code of a vast number of open source projects with a natural language query and returned code snippets of convolution matrices over BufferedImages. Useful? I dunno, just had to write it down before I forgot it.
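For reference, here's the kind of snippet I'd hope such a search engine would return: a minimal sketch of a Gaussian blur over a BufferedImage using the standard java.awt.image.ConvolveOp. The 3x3 kernel weights are a common binomial approximation of a Gaussian, not something pulled from any particular code base.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class GaussianBlurExample {
    // 3x3 Gaussian kernel (binomial approximation); weights sum to 1
    // so overall image brightness is preserved.
    private static final float[] WEIGHTS = {
        1/16f, 2/16f, 1/16f,
        2/16f, 4/16f, 2/16f,
        1/16f, 2/16f, 1/16f
    };

    public static BufferedImage blur(BufferedImage src) {
        Kernel kernel = new Kernel(3, 3, WEIGHTS);
        // EDGE_NO_OP copies border pixels unchanged instead of zeroing them.
        ConvolveOp op = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        BufferedImage out = blur(img);
        System.out.println(out.getWidth() + "x" + out.getHeight());
    }
}
```

The point of exemplar search is that finding even one snippet like this answers both the "which class do I use?" and "what does the call sequence look like?" questions at once.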
This leads into another idea that popped up. A few months ago, when I had a job and spare time, I was playing around with jMonkeyEngine, a handy little open source scene graph for Java based on JOGL. Its documentation is a wiki, which unfortunately has a bunch of holes in it. However, I found that downloading the source trunk and reading the extensive unit tests was a much better learning tool. I simply loaded the unit test hierarchy into the IDE, looked for a test of the feature I wanted to use, ran it to see it work, then read the test code, which is by nature short and concise. I propose a study where we take two groups of developers and one large API, and task them with implementing a given application on top of that API. One group gets standard documentation; the other gets a complete set of unit tests. Let them go, and compare the results. If the unit tests turn out to be better, this would be a huge boost to the motivation for TDD.
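To make the "tests as documentation" idea concrete with a standard-library example (rather than jMonkeyEngine), here is the kind of short, self-verifying test I mean. It documents java.util.ArrayDeque's stack behavior: run it, see it pass, and you know how the API behaves without reading any prose. (Plain assertions are used here instead of a test framework, just to keep the sketch self-contained.)

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ArrayDequeAsDocumentation {
    // Reads like documentation: push/pop operate on the head of the
    // deque, so ArrayDeque works as a LIFO stack.
    static void testUsedAsStack() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        if (!stack.pop().equals("second"))
            throw new AssertionError("pop should return the most recently pushed element");
        if (!stack.peek().equals("first"))
            throw new AssertionError("peek should show the remaining head element");
    }

    public static void main(String[] args) {
        testUsedAsStack();
        System.out.println("ok");
    }
}
```

A suite of tests like this is executable, always up to date, and answers "how do I use this?" by example, which is exactly what I was getting out of the jMonkeyEngine tests.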
Two more quick ideas, and then I'm done. This one relates back to finding usages of methods and classes, one of the prime questions asked by students in the paper's study. A 'Find Usages' feature in an IDE can answer it, but it is not the most efficient way to look for loose relationships between two or more elements. What if I had a tool for "Find Usages of these TWO methods", or three, or four? Basically: find the class, method, block, or statement that uses all of the given input elements. I think this would be handy.
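The core of such a tool is simple once you assume a per-element usage index (element -> set of enclosing methods), which is the kind of index IDEs already maintain for single-element Find Usages; the multi-element query is then just a set intersection. A minimal sketch, with the index and element names invented for illustration:

```java
import java.util.*;

public class MultiFindUsages {
    // Given an index from program element to the methods that reference it,
    // return only the methods that reference ALL of the queried elements.
    static Set<String> usagesOfAll(Map<String, Set<String>> index, List<String> elements) {
        Iterator<String> it = elements.iterator();
        if (!it.hasNext()) return Collections.emptySet();
        // Start from the first element's usages, then intersect with the rest.
        Set<String> result = new HashSet<>(index.getOrDefault(it.next(), Collections.emptySet()));
        while (it.hasNext()) {
            result.retainAll(index.getOrDefault(it.next(), Collections.emptySet()));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> index = new HashMap<>();
        index.put("open()", new HashSet<>(Arrays.asList("A.load", "B.init", "C.run")));
        index.put("close()", new HashSet<>(Arrays.asList("A.load", "C.run")));
        // Only the methods that call both open() and close() survive.
        System.out.println(usagesOfAll(index, Arrays.asList("open()", "close()")));
    }
}
```

Intersecting at the method level is the coarse version; the same query could be run at class, block, or statement granularity by changing what the index records.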
Lastly, the paper used ArgoUML as the code base for the student tasks. The authors had the students fix bugs submitted via the ArgoUML tracker. I wonder if there's a market for farming out bug-fixing to ethnographic research subjects?
Tuesday, October 14, 2008