Friday, March 13, 2009

Zak is the worst person! The worst!

Muller and Pfahl: Simulation Methods
  • Chapter describing how simulation can be used to project the outcome of a software project (a toy sketch of the idea appears after this list).
  • Most readers found this method to be too clunky, or simply inappropriate, for software development estimation. The counterexample of embedded or safety-critical systems seemed to sway a few minds, however.
  • Interesting discussion about whether this actually qualifies as an empirical method. Also, everyone seemed to agree that what the Hadley Center is doing is valid science, even though it is simulation.
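A rough sketch of the kind of projection the chapter has in mind (my own toy example, not their actual model): a Monte Carlo simulation that samples per-task effort and sums it into a distribution of possible project durations. The task names and effort numbers below are invented for illustration.

    import random

    # Hypothetical per-task effort estimates: (optimistic, most likely, pessimistic) days.
    # These numbers are made up for the example; they are not from the chapter.
    tasks = {
        "requirements":   (5, 10, 20),
        "design":         (10, 15, 30),
        "implementation": (20, 30, 60),
        "testing":        (10, 20, 45),
    }

    def simulate_project(tasks, runs=10000):
        """Sample each task from a triangular distribution and sum the samples
        to get one possible project duration; repeat to build a distribution."""
        durations = []
        for _ in range(runs):
            total = sum(random.triangular(low, high, mode)
                        for (low, mode, high) in tasks.values())
            durations.append(total)
        return sorted(durations)

    durations = simulate_project(tasks)
    print("median duration:", round(durations[len(durations) // 2], 1))
    print("90th percentile:", round(durations[int(len(durations) * 0.9)], 1))

The point isn't the numbers, it's that the output is a distribution rather than a single estimate, which is what makes the approach attractive for risk discussions (and, as the group noted, perhaps overkill for many projects).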
Atkins, et al: Using version control data to evaluate the impact of software tools
  • Paper evaluating possibly the worst version control system ever! At a more meta level, it was an example of how you can run an empirical study whose sole input is data mined from a past project (similar to what Samira did for her master's); a toy example of that kind of mining appears after this list.
  • Nick mentioned that, despite its archaic premise, a versioned editor like this one would have been helpful at EA.
  • Discussion ensued as to whether this type of validation was actually required for this tool. It seems almost anything would be better than the existing 'version control'. In fact, there are some in the field who feel that expert intuition is ultimately more useful than empirical experimentation.
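To make "mine the version history" concrete, here is a minimal sketch of the sort of raw measure such a study might start from (this is my own illustration, not the tooling from the paper): it walks a Git repository's log and tallies commits and changed lines per author. It assumes git is installed, and the repository path is hypothetical.

    import subprocess
    from collections import defaultdict

    def churn_by_author(repo_path):
        """Tally commits and total lines changed per author using `git log --numstat`."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--numstat", "--format=commit %ae"],
            capture_output=True, text=True, check=True,
        ).stdout
        commits, lines = defaultdict(int), defaultdict(int)
        author = None
        for line in out.splitlines():
            if line.startswith("commit "):
                author = line.split(" ", 1)[1]
                commits[author] += 1
            elif line and author:
                parts = line.split("\t")
                # numstat lines look like "added<TAB>deleted<TAB>file"; binary files show "-".
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    lines[author] += int(parts[0]) + int(parts[1])
        return commits, lines

    # Hypothetical repository path for the example.
    commits, lines = churn_by_author("/path/to/some/repo")
    for author in sorted(commits, key=commits.get, reverse=True):
        print(author, commits[author], "commits,", lines[author], "lines changed")

An actual study like Atkins et al.'s would then compare measures like these between developers who used the tool and those who didn't, which is where the hard validity questions start.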

Sharp & Robinson: An Ethnographic Study of XP Practice
  • Ethnographic study of an extremely well-oiled XP team in England
  • Study found that, in this case, XP was the style best suited for maximal performance of the team
  • Threats to validity include not spending enough time (one iteration?) with the subjects
Kitchenham & Pfleeger: Personal Opinion Surveys
  • Chapter describing the process of creating and administering personal opinion surveys (questionnaires and the like)
  • Primary message is: making a questionnaire isn't easy! There are lots of confounding effects and sources of bias to worry about.
  • Interesting discussion ensued concerning the reuse of standard instruments from psychology, and whether or not SE should have similar standard instruments (a toy reliability check is sketched after this list).
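One thing a standard instrument buys you is a known reliability profile. As a hedged illustration (not something from the chapter itself), here is a tiny calculation of Cronbach's alpha, the usual internal-consistency check for a multi-item Likert scale; the responses are invented.

    # Rows are respondents, columns are Likert items (1-5); numbers invented for the example.
    responses = [
        [4, 5, 4, 3],
        [3, 4, 3, 3],
        [5, 5, 4, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
    ]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def cronbach_alpha(rows):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
        k = len(rows[0])
        items = [[r[i] for r in rows] for i in range(k)]
        totals = [sum(r) for r in rows]
        return (k / (k - 1)) * (1 - sum(variance(col) for col in items) / variance(totals))

    print(round(cronbach_alpha(responses), 3))

A home-grown SE questionnaire typically has no such baseline, which is part of why borrowing validated instruments is appealing.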
Cherubini et al: Let's go to the whiteboard: how and why software developers use drawings
  • Interesting case study conducted by Microsoft Research to see how their developers use graphical representations of code
  • Researchers were able to categorize the uses of drawings into Understanding, Design, and Communication, and the level of investment in a drawing as Transient, Reiterated, Rendered, or Archival.
  • Pretty good

Flyvbjerg: Five Misunderstandings about Case Study Research
  • This paper attempts to disprove several common misconceptions about case study research, primarily claims like "case study results cannot be generalized to a larger population" and "case studies cannot be used to test hypotheses".
  • A fairly good piece of advocacy. It certainly makes me feel better about considering a case study as a direction for my research.

Edwards: Using software testing to move students from trial-and-error to reflection-in-action and related papers
  • Details findings from the WebCAT system - an online assignment submission and automatic grading system created at Virginia Tech.
  • Edwards found that the system was useful and well received by both instructors and students. The primary objective, encouraging students to do test-first development, was achieved (a toy sketch of how grading can reward student-written tests follows this list).
  • There were interesting effects from introducing hints into the automatic test cases to discourage last-minute submissions.
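As a loose illustration of why such a grader nudges students toward test-first work (the real WebCAT scoring model is more elaborate than this), here is a hedged sketch in which a submission is scored on both the instructor's tests and the student's own tests, so a submission with no tests can't get full marks. The clamp function and all test cases are invented for the example.

    def grade(student_fn, instructor_cases, student_cases):
        """Toy score: half the marks for passing instructor tests, half for the
        student's own tests passing against their code. Not WebCAT's real model."""
        def fraction_passed(cases):
            if not cases:
                return 0.0
            return sum(1 for args, expected in cases if student_fn(*args) == expected) / len(cases)
        return 0.5 * fraction_passed(instructor_cases) + 0.5 * fraction_passed(student_cases)

    # Invented example: a student's clamp function plus instructor and student test cases.
    def clamp(x, low, high):
        return max(low, min(x, high))

    instructor_cases = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]
    student_cases = [((7, 0, 10), 7)]   # the student only wrote one test of their own

    print(grade(clamp, instructor_cases, student_cases))   # 1.0 here; no student tests would cap it at 0.5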
Juristo et al: Reviewing 25 Years of Testing Technique Experiments
  • A taxonomy/summary of the various means of divining test cases that have been invented over the last quarter century.
  • Focuses mainly on machine-derived cases (random input-output samples, etc.); unfortunately it doesn't say much about human-created unit tests. A toy example of what random testing looks like follows below.
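To ground what "random input-output samples" means in practice, here is a minimal random-testing sketch (my own toy example, not one of the surveyed techniques): it throws random inputs at an implementation and checks each result against a trusted oracle, in this case Python's built-in sort.

    import random

    def buggy_sort(xs):
        """Implementation under test; invented for the example (it silently drops duplicates)."""
        return sorted(set(xs))

    def random_test(fn, trials=1000):
        """Generate random integer lists and compare fn's output against the oracle."""
        for _ in range(trials):
            xs = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
            out = fn(xs)
            if out != sorted(xs):          # oracle: the built-in sort
                return "counterexample: %r -> %r" % (xs, out)
        return "no failures found"

    print(random_test(buggy_sort))

The appeal of techniques like this is that the cases come cheaply from a generator rather than from a person, which is exactly the family of approaches the survey covers.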
