
GENERAL INFORMATION, INCLUDING PROS/CONS, ON ASSESSMENT METHODS RELATED TO STUDENT LEARNING


There are many resources that detail good assessment methodology for any educational discipline, including detailed discussion of the pros and cons of each method.


The assessment process is designed to answer what students are learning and how well they are learning it. Once student learning outcomes have been established, the next step in the assessment process is to select the most appropriate assessment methods. Below are some sources of information that can be used in the assessment of student learning.

Many assessment professionals distinguish between “direct” and “indirect” assessment methods. Direct methods judge student work, projects, and portfolios developed as a result of the learning experiences; some consider this “authentic assessment.” Indirect methods use students' or others' opinions to provide evidence about students' abilities. The table below lists some pros and cons of each method. We often worry about not having the perfect assessment method, but matching the right assessment method to the outcome is more important than having a perfect, well-controlled one. As stated by Tukey (1962): “Far better an approximate answer to the right question…than an exact answer to the wrong question…” (pp. 13-14).


Information about student learning can be obtained from the sources described in the table below; these sources use the various assessment methods listed with them.

Information about students' satisfaction and attitudes can be assessed through the indirect assessment methods listed in the table.

Information about faculty satisfaction can be assessed through the indirect assessment methods listed in the table.

Example Assessment Methods on Student Learning, with Pros and Cons of Each Method

  • From course work (embedded, course-based) (direct assessment methods)
    Pros:
      • In general, students take embedded course work seriously, so their work has a good chance of reflecting their actual abilities.
      • Reflects the program or department's courses, curriculum, and program outcomes.
    Cons:
      • In general, biases in the data across years and differences among instructors or departments can influence the results.
      • Faculty may be reluctant to share results with the entire faculty membership.

  • Tests, including pre-post, entry, and exit tests
    Pros:
      • Inexpensive.
      • Comprehensive.
      • Pre-post testing allows for “value added” assessment.
    Cons:
      • Developing appropriate test questions that reflect learning outcomes and complex levels of learning takes time and skill.
      • For pre-post testing, it is difficult to design tests that are comparable at different times.

  • Graded homework
    Pros:
      • Reflects students' ability when they have access to resources.
    Cons:
      • Does not assess students' ability or overall learning as typically defined.

  • Ratings or rubrics judging the quality of papers, reports, and projects
    Pros:
      • Can be used by others besides the instructor to assess quality.
    Cons:
      • Developing accurate rubric dimensions that reflect learning outcomes and levels of learning takes time and skill.

  • Tests and rubrics on papers and projects from a capstone course experience
    Pros:
      • Allows for assessment of higher cognitive abilities such as synthesis and evaluation of knowledge.
      • Can assess in-depth knowledge.
      • Allows creativity.
      • Assesses integration of learning.
    Cons:
      • Labor intensive for both faculty and students.
      • Because the course and project are high stakes, student anxiety may cause the assessment to reflect less than the student's actual ability.

  • Concept mapping or knowledge mapping
    Pros:
      • A unique technique for understanding the connections among concepts within a student's knowledge base.
      • Assesses complex relationships.
    Cons:
      • Difficult to compare across students.
      • Difficult to obtain objective judgments of abilities.

  • Experts' judgment of performance (e.g., art, drama, healthcare)
    Pros:
      • Improves the face validity of assessment activities.
    Cons:
      • Obtaining appropriate experts' time.

  • Criteria, ratings, or rubrics judging thesis and dissertation work
    Pros:
      • Allows for judgments about the overall graduate program across several students.
    Cons:
      • Difficult to define rubric dimensions that relate to multiple theses or dissertations.

  • Qualifying exams for graduate work
    Pros:
      • Developing exam questions that span several graduate students allows for better assessment of the graduate program.
    Cons:
      • Oral presentations may be a challenge for those with language difficulties.
      • Difficult to define questions that relate to several students.

  • From longitudinal, cross-sectional, or cross-course comparisons, including student portfolios (direct assessment methods)
    Pros:
      • In general, shows longitudinal trends with rich detail.
      • Assessment becomes an integral part of students' learning process.
    Cons:
      • In general, validity depends on how the work is collected.
      • Can overload assessment committees with too much information.

  • Rubrics judging the quality of work across time, sections, or courses
    Pros:
      • Highlights students' strengths and weaknesses in a comprehensive manner.
    Cons:
      • Developing accurate rubric dimensions that reflect learning outcomes and levels of learning takes time and skill.
      • Content may vary widely by student.

  • Comparison of best examples of student learning
    Pros:
      • Students do the work of providing the assessment “data” by supplying their best examples.
    Cons:
      • Students' judgment of “best examples” may not actually reflect faculty's judgment of “best examples.”

  • Reflections by students about their learning
    Pros:
      • Provides an opportunity for students to synthesize their own work.
      • Identifies strengths and weaknesses.
    Cons:
      • Difficult to judge objectively.

  • From internship/co-op experiences
    Pros:
      • Supervisors typically provide feedback to students anyway.
    Cons:
      • The supervisor's ratings and criteria may not reflect program outcomes.

  • Surveys completed by internship/co-op advisors or faculty about students' abilities (direct assessment method)
    Pros:
      • Based on actual work experience that may reflect a future career.
    Cons:
      • May obtain information on only a small number of outcomes.
      • Limited observation time.

  • Surveys, interviews, or focus groups about satisfaction with a student's performance (indirect assessment method)
    Pros:
      • Provides information about outcomes other than competencies, such as attitude.
    Cons:
      • Satisfaction with performance may not reflect the student's actual ability.

  • From employers/potential employers
    Pros:
      • In general, improves the face validity of assessment activities.
    Cons:
      • Difficult to identify where alumni are employed.
      • Sensitive information for both the employer and the program/department.

  • Surveys of employers about students' abilities (direct assessment method)
    Pros:
      • Provides information about the student abilities that employers need.
    Cons:
      • Difficult to get direct supervisors to respond to surveys.

  • Surveys of those who interview students for employment, about the students' perceived abilities
    Pros:
      • Interviewers are often best positioned to compare the quality of one institution's graduates with other institutions' graduates.
    Cons:
      • May be able to assess only a small number of general outcomes, such as communication skills.

  • From outside evaluations: experts judge the overall quality of students' abilities in the major/program (direct assessment method)
    Pros:
      • Improves the face validity of assessment activities.
    Cons:
      • Obtaining appropriate experts' time.

  • From nationally normed tests (direct assessment methods)
    Pros:
      • Allows comparison from year to year or with other groups.
      • The national standard can be used to set the program's performance criteria.
      • Convenient.
      • Tests are well developed.
      • Nationally normed or commercial instruments come with reliability and validity information.
    Cons:
      • May not reflect the program's or institution's curriculum or outcomes.
      • Limited faculty ownership.
      • Costly to the institution or the student.

  • Information about students' satisfaction and attitudes (indirect assessment methods)
    Pros:
      • Important to hear the students' viewpoint.
      • Allows comparison of different groups of students on the same outcomes/questions.
    Cons:
      • In general, students' perceptions of their ability may not relate to their actual ability.
      • In general, alumni are more satisfied than graduating seniors, who tend to be more satisfied than sophomores, and so on.

  • Surveys about satisfaction with the learning environment, faculty, courses, curriculum, learning, and equipment/tools, administered to prospective, current, graduating, and withdrawn students and to alumni
    Pros:
      • Easy to administer.
      • Low cost.
      • Nationally normed or commercial surveys come with reliability and validity information.
    Cons:
      • Usefulness depends on good design of the survey questions.

  • Interviews or focus groups about satisfaction with the learning environment, faculty, courses, curriculum, learning, and equipment/tools, conducted with prospective, current, graduating, and withdrawn students and with alumni
    Pros:
      • Can provide rich data and personal perspectives, and can go into depth about a particular aspect or factor.
      • Other academic factors, such as pedagogy or class size, may arise that were not expected or asked about.
    Cons:
      • Those who participate tend to hold either very positive or very negative opinions, a selection bias.
      • Fear of retribution may bias respondents' answers.

  • Inventories of students' attitudes, used to monitor attitude changes over time
    Pros:
      • Commercially available instruments provide reliability and validity information.
    Cons:
      • Usefulness depends on how closely the inventory relates to program outcomes.

  • Information about faculty satisfaction (indirect assessment method), gathered through surveys, interviews, or focus groups
    Pros:
      • Important to hear the faculty's view.
      • Academic factors, such as pedagogy or class size, may arise.
    Cons:
      • Usefulness depends on good design of the questions.


Copyright material; from: Spurlin, J. E., Rajala, S. A., & Lavelle, J. P. (2007). Assessing student learning: Ensuring undergraduate students are learning what we want them to learn. In J. E. Spurlin, S. A. Rajala, & J. P. Lavelle (Eds.), Designing better engineering education through assessment: A practical resource for faculty and department chairs on using assessment and ABET criteria to improve student learning. Sterling, VA: Stylus Publishing.