There is some controversy regarding who the most appropriate raters of artifacts are when using the Consensual Assessment Technique (CAT) to assess creativity (e.g., whether novice raters' judgments can validly replace those of expert raters). There is also evidence that the answers to some of these questions vary by domain (e.g., novice raters' judgments more closely parallel those of expert raters when judging the creativity of fiction than when judging poetry). We report new evidence that both the degree and the kinds of expertise required for valid CAT judging vary by task domain. We compare these findings with previous research in this area and suggest (a) possible explanations for the observed rater-domain interactions and (b) guidelines for assembling panels of experts.