Perspectives on the validation of faculty-developed instruments for assessing student performance at Alverno College are presented. Sixteen instruments were identified by departments for the validation studies. Three validation strategies were found to work best. One was a pre- and post-instruction comparison that determined whether changes in student performance could be attributed to the effects of instruction. A second strategy was criteria evaluation, which involved the clarification, revision, and refinement of criteria based on an analysis of student performance. A third approach was the interrater reliability of assessor judgments, which enabled a test of reliability as well as the development of instrument criteria. Criteria evaluation appeared to be most helpful when the instrument was being evaluated and revised. Pre- and post-instruction comparisons were used most effectively after faculty had judged the instrument as meeting most other instrument design guidelines. Interrater reliability studies were most useful when they were conducted concurrently with criteria evaluation. The validation studies showed that direct involvement of faculty in analyzing student performance data and probing validity questions generated a broad scope of validity issues. (Author/SW)