March 2022

Overview & Why We’re Doing This

In addition to the assessment every unit already does annually, every unit/department will need to perform a more in-depth assessment every four years. This keeps us in step with changing trends in Student Affairs: more thorough, higher-quality assessment is now required and is being done across the country. Student Affairs leadership is setting this assessment at every four years, rather than annually, to reduce the burden on our units. The schedule for the next four years is here. Note that the smaller, regular assessment that all units conduct and report in SALO will continue with its regular annual deadlines.

For this in-depth assessment, you don’t need to assess all your unit’s programs.

  • If your unit has just one program, you must assess that one in depth every four years
  • If your unit has five or fewer programs, choose one program to assess in depth every four years (ideally rotating among your programs rather than always selecting the same one)
  • If your unit has six or more programs, choose two to assess in depth every four years (again, ideally rotating among your programs)
  • Of course, you are welcome to assess more programs than required

In-depth assessment generally includes:

  • Pre and post data
  • At least two and ideally three different methods of assessment (observations with rubrics, tests of knowledge, focus groups, surveys)
  • At least some “direct evidence”

You will also need to write a report of your in-depth program assessment. We hope you will post your report on your website. Your report can be turned in gradually to the Student Affairs Director of Assessment, Research, and Evaluation for feedback so that your final report is both useful and something you can show with pride. (Turning in report sections gradually is strongly encouraged, though not required.)

When Do We Have to Do This?

Look at the schedule; each unit will need to do in-depth assessment every four years, with approximately one-third of the units in each cluster scheduled each year. We plan to repeat this four-year cycle indefinitely, even though only eight years are listed in the chart.

You should start planning your assessment in approximately March or April of the previous academic year.

It’s Our Assigned Year, How Do We Start?

Step 1. Choose your program(s) to assess while thinking about the following:

  • Choose a program that’s central to your unit
  • Choose a program for which an in-depth assessment will help your unit’s leadership know how to revise programming, staff resources, budget, etc.
  • Note that this assessment should be useful for your next external program review, so keep that in mind in your selection process too
  • As a reminder of the rule above: if your unit has just one program, you must assess that one in depth; with five or fewer programs, choose one to assess in depth; with six or more, choose two. (You are welcome to assess more programs than required.)

Step 2. Do you plan to make changes during the coming year to the program you have selected? If so, collect data during the previous spring (to capture the old program’s “post” outcomes) so you can compare it with your post data from the following spring (assuming a one-year program). If the program runs for one quarter and is offered quarterly, compare the prior spring’s post data with your post data at the end of fall, winter, and spring. If you are not planning changes, you can skip the previous-spring collection and just collect data during the academic year you are assessing. Either way, whether or not you are planning a program change, collect pre and post data during the assessment year. Thus, you will compare pre with post and, if you made program changes, also post with post.

Step 3. What do you believe/hope/expect the outcomes of your selected program are? (Note the plural: units should assess all the main outcomes of the selected program. One way to identify the primary outcomes is to think about how the program ties to your mission statement.) What kinds of assessments make sense for measuring these outcomes? Remember from above that this should probably include:

  • Pre and post data
  • At least two and ideally three different methods of assessment (observations with rubrics, tests of knowledge, focus groups, surveys)
  • At least some “direct evidence”

Step 4. Write the rubrics, focus group questions, tests of knowledge, and/or surveys that you will need for the year. What will your benchmarks be? Decide when you will administer each rubric, test, focus group, and/or survey.

Step 5. Start writing the background section of your report.

All the above can (and ideally should) be done during the spring and summer prior to your assessment year.

Note: You can be flexible: if conditions (COVID, staffing, etc.) change, you can change your focus or even your selected program if you need to.


Your Report

Look at the reports done by previous units. The pilot units’ reports should be available this summer and posted on the Student Affairs Assessment website. The pilot units are CARE, DREAM Center, Undergraduate Housing, and Student Life & Leadership; for the pilot, each performed in-depth assessment on just one program regardless of unit size.

Please be sure your report is well-written and well-organized. Be careful with grammar and sentence construction. Use headings and subheadings to add clarity and readability throughout. We want Student Affairs Assessment reports to impress the high-level administrators, WASC reviewers, faculty, and your Student Affairs colleagues who read them.

You are strongly encouraged to write your report gradually. Not only will focusing on writing and organizing throughout the year help guide your assessment and keep it on track, but it also means you won’t have to spend so much time writing at the end of June and during July. Reports are due August 1.

Below is the suggested content for each section of the report. Feel free to deviate from the descriptions in each section depending on your program, your assessment, and your purposes/needs, though it would be good if you talked with the Director of SA Assessment, Research, and Evaluation about that first.

The below includes recommended interim report deadlines. These should help you stay on track and also allow the Director of SA Assessment, Research, and Evaluation to better help you along the way by providing feedback and perhaps suggestions at each step, so you can incorporate that feedback before you turn in the next round. Here are suggested section deadlines and brief descriptions of what to include in each:

Background (suggested completion date, December 1)

  • Brief description of your unit (include your mission statement)
  • Description of the program you’re assessing and how that program fits into your unit’s mission/goals and why this program is important to students
  • Also include: How old is this program? Has it had any big changes in the past couple of years? Is this program normally assessed annually or not? If so, how is it normally assessed (what methods)?
  • Why you chose this program for your in-depth assessment (unless your unit only has one major program)
  • Give an overview of this year’s planned assessment. Include outcomes and benchmarks if you have them.
  • This section might be only a couple of pages, depending on the program you select

Table of Contents (suggested completion date, December 1)

  • Think about the programming and data points you will need to include and break them up into logical sections
  • The background section itself might be one, two, or three sections, depending on how you choose to organize and how much you have to say

“Pre” Results (suggested completion date if all pre data are collected fall quarter, March 1)

  • Describe what data you collected (how, when, response rate)
  • Include your surveys, rubrics, focus group questions, and whatever else you used in a “pre” instrument appendix
  • Include results – probably in tables, though possibly not. Write in sentences the highlights of what’s in the tables. Include statistical analyses if you have any.
  • Likewise, put your statistical results in tables (unless there are just a couple of items) and then write in sentences what they mean.
  • Also, what do your pre results mean, i.e., do they look generally as you expected? If you are making any programming changes as a result of your pre data (probably not, but possibly), explain what changes and in response to what data.

Programming (suggested completion date, May 15)

  • What programming have you done? Some units have a lot of programming for the single program selected; others have a narrower focus
  • Include a syllabus or schedule if you have one
  • If you have weekly meetings, discuss what is done in them and what sorts of topics are covered; is each topic covered for several weeks, one week, a whole quarter?
  • What are the goals/purposes/learning outcomes for each section?

Interim Results (suggested completion date, May 15)

  • Include any interim results you’ve collected since your “pre” (if you’re assessing a one-quarter program, you might not have any)
  • Include your data collection instruments in an appendix
  • Probably include tables and sentences to describe the high points of the tables
  • What do your results mean, and how do you interpret them? Maybe add to what extent the interim results are looking as you expected and/or hoped.
  • Are you making any programming changes during the year as a result of your interim results (maybe not, which is fine)? If you are, describe what changes and why (what data they are in response to). If you are making changes for any other reason (COVID, staffing, budget), say what changes you are making and why.

Post Results (due August 1)

  • Probably tables with sentences to point out the highlights
  • Include any statistical analyses, both in tables (if applicable) and sentences to describe their high points (a sketch of one possible approach follows this list)
  • What does it all mean? What do all these data tell you about your program? And with how much confidence do they tell it? (Here you can consider direct vs. indirect data, any statistical tests you have, and anything else you think is relevant.)
  • What conclusions do you draw about student learning in your program?
  • Add the rest of your rubrics/surveys/focus group questions as a “post” appendix. (If your instruments are identical to earlier ones, you need only say that; don’t include them again.)
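
If you have matched pre and post scores from the same students, a paired comparison is one common way to run the statistical analysis mentioned above. Below is a minimal sketch of what that might look like in Python (using the pandas and scipy libraries); the file name and column names are hypothetical placeholders, so adapt them, or skip this entirely, if it doesn’t fit your data or tools.

    # Minimal sketch of a paired pre/post comparison. The CSV file and its
    # column names (student_id, pre, post) are hypothetical placeholders.
    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("program_scores.csv")  # one row per student

    # Keep only students who have both a pre and a post score (paired design)
    paired = scores.dropna(subset=["pre", "post"])

    print(f"Matched pairs: {len(paired)}")
    print(f"Mean pre score:  {paired['pre'].mean():.2f}")
    print(f"Mean post score: {paired['post'].mean():.2f}")

    # Paired t-test: did the same students' scores change from pre to post?
    t_stat, p_value = stats.ttest_rel(paired["post"], paired["pre"])
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A small p-value suggests the pre-to-post change is unlikely to be chance alone, though keep in mind the caveats about response rates and sampling discussed in the next section.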

Discussion (final report due August 1)

  • Note that this last section will take a lot more thought than the others, so we encourage you to write the post results right away, then give them and your entire project a lot of thought before finalizing the discussion.
  • What are the flaws/weaknesses of your assessment (research) project, which the reader should keep in mind when they read your conclusions about student learning?
    • Maybe there was a way you could have collected more direct evidence instead of indirect evidence?
    • Maybe you could have gotten a better response rate on surveys with more gift cards? (With a lower response rate you have no way of knowing whether your respondents really reflect your population, i.e., whether they are effectively a random sample of it, and they almost certainly are not.)
    • Maybe with more staff you could have included something additional that you’d wanted to include.
    • Maybe a few survey questions seem to have been misunderstood.
    • Maybe you wish you’d included an assessment or a survey question on such-and-such.
    • Maybe some pre surveys were returned so late that they weren’t really fully pre?
    • Maybe you didn’t have a good demographic mix of students this year and you can’t be sure your results will generalize to future students in your program.
    • There are a lot more possibilities, but this should give you a starting point for brainstorming with your team.
  • If you were repeating this assessment, what would you change to make it stronger and to collect more useful data? Or data that would be useful to you now, given what you learned over the past year?
  • How are you using your results?
    • How are you “closing the loop”, i.e., what does all this mean for your program?
    • What changes, if any, will you make in programming, budgeting, staffing, or budget requests?
    • Is this program a good use of time and money as is, or do you think you can get a bigger bang for your buck with a modified or different program?

  • What do your student participants think about the program (surveys/evaluations/focus groups/interviews)? What suggestions did your participant students have? (These don’t outweigh the assessment data you collected, but they can add a little more insight and things to think about.)