I think that I’m just sick enough to do this post. What do I mean by that? Well, you know how when you’re really sick you can sometimes achieve a state of exhaustion that leads to a certain tranquility of mind? Actually, I think strep throat in particular leads to this for me. The last time I had strep throat was in college, and unlike this time, I suffered for about a week before finally getting myself some antibiotics. But what I remember most vividly about that bout was that I had this huge midterm in my literary criticism class, and I remember having this bizarre clarity of focus while at the same time I felt like I was going to die. That’s how I feel right now – not all out of it like you feel with a head cold, and not totally down for the count, like with the flu.
Anyway. I feel pretty awful, but I also feel like I have some fuzzy thoughts that I want to get out, so here we are. (By the way, for those of you who are interested, the reason that I’m thinking about this stuff is because of this post and then this one, though I’m not really directly responding to either.)
What interests me first is the way that “assessment” and “accountability” are often framed in opposition to student learning in actual classrooms, no matter whether one comes down in favor of assessment and accountability measures or against. There is a sense, on both sides of this debate, that “measuring” is some separate activity from what currently happens in classrooms.
The argument from the pro-assessment, pro-accountability camp goes something like this: These kids today aren’t learning what we need them to learn. Students can’t write, students can’t think.* The solution is clearly to put a mechanism in place that makes “education” accountable for the fact that students aren’t “succeeding” the way that we think they should be succeeding. By “education,” we mean teachers, curriculum, etc. Perhaps there is a test out there in the world that can do this?
The argument from the anti-assessment, anti-accountability camp goes something like this: These kids today aren’t learning what we need them to learn. Students can’t write, students can’t think. The problem is that they’ve been taught to the test so much in K-12 that they come to college unable or unwilling to do the work. All this assessment makes it impossible for students to learn! The solution is clearly to do away with interventions that limit teachers’ ability to enforce rigor, to design creative assignments, to give students individual attention. By “interventions,” we mean the government, testing agencies, buttinsky administrators with no disciplinary expertise. Perhaps if we complain and resist enough somebody will listen to us?
I don’t personally like either of these models for thinking about or talking about assessment and accountability. I’ve been trying to think about why they both bother me, and I think it comes down to a combination of things. 1) Both take as a given that students aren’t performing to their capacity and that students have no ownership over their performance; 2) Both take as a given that the work that we do in classrooms as teachers in the present moment is an exercise in futility (see #1); 3) Both fail to articulate the potential role of disciplinary practices and expertise in determining and evaluating what happens in the classroom; 4) Both see assessment as an end rather than as a beginning. Add to these the fact that we automatically assume that “taxpayers” want more testing, as if “taxpayers” are a monolithic group who’ve been hypnotized by a conservative agenda for education, and, well, it becomes difficult to have any sort of meaningful conversation about assessment and accountability, whether you’re pro-assessment or anti-assessment.
But so, let’s take as a given that my objections to the way this conversation is framed are legitimate. If they are legitimate, then why do they persist? When I try to get to a root cause or a foundational reason, what I come up with is a basic lack of trust. Pro-assessment folks do not trust teachers. Anti-assessment folks do not trust administrators and legislators. (There are “taxpayers” and “members of the general public” on both sides.) On both sides there is a foundational paranoia about the other side’s motives, a paranoia that is exacerbated by the fact that there are very few settings in which people actually get together in good faith to talk about a) what we want assessment to achieve, b) how to come up with assessment instruments flexible enough to allow for pedagogical innovation, and c) what we aim to do with information gathered by assessment. These questions do not have self-evident answers, and until we talk about those questions and the potential answers for them, we remain in a turf war. We fight. We assume the other side has bad intentions or refuses to come to the table.
Oh, and neither side trusts students, and neither is really interested in what they have to say about their experiences. Paranoia about each other extends to students, with the anti-assessment folks assuming that students want to take advantage of an educational model that spoon-feeds them factoids, and with the pro-assessment folks assuming that students want to take advantage of an educational model that has no standards. Students really are a bunch of lazy, manipulative fucks, whatever side one’s on.**
What I’m suggesting here is that we can’t all come to the table if we don’t actually have a table that has enough seats for members of both sides, and even for students, too. And we don’t have that table if we don’t start first with honest questions, rather than with agendas.
Seriously, the only agenda any of us should have is that we educate students to the best of our abilities. I don’t think I’m being all that controversial in making that statement.
With all of that being said, I actually don’t have a problem with a movement toward assessment and accountability, as long as the mechanisms for that actually benefit students. And some of the mechanisms that have been put in place in the time between when I was in college and today actually do benefit students.
Let’s take this as one example. When I was an undergraduate, the norm for a syllabus in my discipline was that it amounted to little more than a list of readings and, if I was lucky, a list of tests/paper deadlines. As a first generation college student, the syllabus gave me little to no information about what I was supposed to get out of a course. There was little transparency about what I was supposed to do or why I was supposed to do it. You have a paper due? Yeah, you write a paper. No assignment sheet. No sense of what the point of the assignment was. If you were an enterprising student, you might go talk to the professor about his expectations. I didn’t figure that out until about 3 semesters in.
What I’d say is that I don’t think that’s terribly effective pedagogy. What I’d say is that the movement toward including learning outcomes on syllabi, if we take those learning outcomes seriously, encourages greater transparency and does, in effect, facilitate student learning. Similarly, giving students an actual assignment for a paper with expectations and projected outcomes can facilitate student learning.
For me, thoughtful assessment would allow me to learn whether my assumptions in the above paragraph are true. Is student learning being facilitated by including outcomes on syllabi and expectations on assignment sheets? If the answer is yes, we can then move to more detailed questions: are some outcomes being achieved with better performance than others? What assignments or activities are linked to those outcomes on which we’re doing the best? Can we learn from how that works, and then perhaps use some of the same techniques for outcomes with lesser performance? Or, perhaps, do we need to revise the outcomes that aren’t working – is the problem not the work that students are submitting but rather the way that we have articulated the outcome? By analyzing how students are performing across sections, can we then initiate a conversation among different faculty teaching the same course about pedagogical best practices? Can we think about our work as teachers as part of a collaborative community, and can we share our successes and challenges in order to improve learning across the curriculum?
As a teacher, I like the idea that we could ask those questions and get those answers and then maybe do something cool as a result. But I’ll tell you one thing: if we design learning outcomes to be as opaque as possible so as to jump through an assessment and accountability hoop rather than to learn something about our programs, then all we’re doing is engaging in a pointless exercise. If we implement tests that don’t facilitate conversation and collaboration, then all we’re doing is engaging in a pointless exercise. If we see “assessment” and “accountability” as the end of the line, rather than as the start of a process that helps us to examine teaching and learning with care, then what we get are assessment and accountability practices that don’t actually produce anything. We’re doing the higher ed equivalent of jerking off.
But to be fair, hand-wringing about how assessment and accountability are the end of the world and proposing a plan that amounts to “I believe the children are the future” and “teachers know how to do their jobs and have nothing to learn about how to do them better” is the higher ed equivalent of jerking off, too.
If we’re going to appeal to the desires of the general public surrounding education, it might make sense to find some middle ground between these two positions. For me, a middle ground begins with taking the idea of assessment seriously, and with seeing it not as a big stick with which to beat faculty or institutions or students but rather as an opportunity to shape the future of education. In order for that to happen, I think faculty need not only to participate in the conversation but to drive it. Also, we (as citizens) need to think about these issues in terms of budget priorities: if we really believe that assessment and accountability are crucial, then we need to find the money to do those things properly. If we don’t want to pay for these things, then perhaps we don’t really care about them so much after all. And if we don’t, isn’t it better just to admit that rather than to implement half-assed approximations that come with little to no cost?
Somehow, I don’t think that the “general public” wants to support a giant circle-jerk at its public universities. And I certainly don’t want to be drafted to participate in one.***
*This reminds me a lot of the recurring phrase in Woolf’s To the Lighthouse that “Women can’t paint, women can’t write.” This phrase comes from the insufferable Charles Tansley, and it echoes in Lily Briscoe’s mind. Is this statement “true”? To what extent does the phrase create the reality, enforcing a prohibition, performatively?
** For those who don’t get tone, or who don’t read carefully, let me just state for the record that this is NOT my attitude to students.
*** I totally find the literal idea of a publicly funded circle-jerk at universities across the country hysterical, however. Now that’s the way we should be spending our tax dollars!
Edited to add: I just happened upon this story over at IHE this morning, which seems to fit well into this conversation.
I see what you’re saying, but I just don’t think the perfect world of assessment being a useful tool exists. Assessment in your platonic world would be fine, but assessment has to happen in the real world we have now, in which 1) the regular faculty who are tasked with this are already stretched thin, and 2) the regular and adjunct faculty already assess student learning throughout their quarters or semesters as well as at the end of the term.
My department already does assessment by asking a few regular faculty to look over some senior seminar papers. (This kind of assessment I think fulfils your interest in departments and disciplines controlling assessment and assessing what we think is important.) We dutifully report the data to the Black Hole of Information, and we never hear back one way or the other. So, although this assessment exercise is the least intrusive I can imagine, I’m cynical because it feels like a faculty makework program meant to feed data to the beast.
As I said in the thread of the post that you linked to, if our assessment data found that we need 10 more tenure-track faculty lines and survey classes capped at 40, I know those resources aren’t materializing. So, we all have to ask: who wants this information and to what end?
And, as I also said in that thread: 1) there is no evidence that one kind of assessment, high-stakes testing, has worked to improve education at the K-12 level, and 2) there is no evidence that our system of higher education is not held in high esteem by our neighbors and fellow citizens. Universities should think twice before they take the bait dangled by the Manhattan or Cato foundation.
One last note, as I recognize that this comment is getting long: Our first-year apps are up 16% AGAIN, and number more than 8,000. So it seems to me like my fellow citizens are perfectly confident that a Baa Ram U. education has value. Our bigger problem is, who’s going to teach the nearly 6,000 of them who get in and enroll in our classes?
That’s the real problem from where I stand. Not fake problems like “accountability” and “assessment.”
H’Ann, you’re right: I’m definitely talking about assessment in an ideal world, here, but I suppose I’m more idealistic about the possibility of coming up with something that approaches the ideal than you are – or at least about the value in trying to do so. In my experience thus far, *any* kind of assessment plan is going to make more work for me – and with that being the case, I’d rather do more work that I believe in, and have more ownership over the work that I’m going to do one way or the other, than do work that I think is bullshit.
That said, I don’t disagree that we have bigger problems than assessment and accountability (though I wouldn’t call those “fake problems” – just less pressing and urgent problems than the others).
I admire your noble optimism and your can-do spirit! And I also should have said: I hope you feel better soon. Although I know they’ve been around for 75 years or so, antibiotics are the world’s miracle drugs when an antibiotic is really what you need.
(And with strep, it really is what you need!)
Thanks, H. 🙂 I have only moved from the couch when in need of a refill on my tea – otherwise, I am zombified and camped out for the duration. The doc said that the antibiotics should work their magic and that I’ll feel better in a couple of days, so long as I rest and drink lots of fluids. Needless to say, I’m resting and drinking lots of fluids.
Five years ago, we did a giant self-assessment in our department. Multiple parties both internally and externally used a wide variety of measures — talking to students, talking to professors, looking at survey data, looking at graduation rates and test scores and student outcomes, numbers, all of it. The suggestions they came up with as to what would be most beneficial for both faculty and students? Yeah, no. Every single one of them required money. We don’t have any. Now we are re-assessing. So far the draft of our report says that we haven’t been able to implement any of the previous suggestions because we don’t have enough time, money, or faculty, and that our faculty are spread too thin and are too overwhelmed to add one more task (even a task that would help student retention & progress).
What on earth was the point of the previous assessment? What’s the point of this one? Why did we bother to check what our students are learning if we are unable to change anything? I’m sure you’ll excuse the cynicism I’ve developed…
Nicoleandmaggie: well, there you go. It’s my nightmare come true!
Jeezus fucke, dude! If you can lay down a motherfucken diatribe like this, you can throw down at the Pseudonym Exchange. C’mon, man! PEOPLE ARE COUNTING ON YOU!
Few people who know me would dare call me an optimist, but I do share Dr. Crazy’s optimism that faculty design of assessment can and must be accomplished. If we don’t help drive the direction of the current assessment/accountability craze, it will go in directions we don’t want it to go.
However, it is difficult to translate this into action with already-overworked, low-morale faculty. I’m on the assessment committee, and though we have a very non-invasive system that is quite fair and flexible, one in which everyone ostensibly participates, I still can’t get any information from them. There’s a lot of distrust around assessment. Part of that is faculty fear and defensiveness. Instead of reading the question “So how do you know that the students in your class have learned what they’ve learned or performed at whatever level?” as an attack, as if someone’s demanding that we justify our grades or our teaching choices, we need to engage with it as a sincere inquiry. “Well, let me show you. First we do this, then we talk about this, and then we put it all together in this project. See the project? Let me show you where the specific outcomes are met.”
And I think a self-assessment plan wherein none of the recommendations can be implemented because of lack of funds is not a pointless exercise, though it’s certainly painful. Administration gets what it pays for. If you’re not willing to give us the money to make our outcomes better, then you’re saying that our present widget is good enough. Definitely low morale for the department. But a clear signal to and from administration.
But one thing that I think IS a big problem is the misnomer of “learning outcomes.” Learning, in a constructivist sense, is about synthesizing new information with old, putting things together, being able to see things from a new direction or as a whole, being able to transfer information or skills into a new situation. Yes, you can have an outcome of learning, but it is not usually the same for each person. People do different things with their learning because things don’t build up the same way and they have different viewpoints and lives to start with. Really, I don’t think we think enough about encouraging students to use our knowledge and skills across situations, but anyway.
Learning outcomes are really behavioral/performance objectives from instructional design, people. They assume that you put the quarter in, the monkey always dances the same way. That’s fine for learning certain kinds of repeatable skills (like a particular computer program), but not most of the stuff we’re focusing on in college. There’s a difference between a performance objective (student will be able to recount certain pieces of information on a test one day) and a learning objective. Remember that we’re talking about learning, which is a more complex process that needs more than simple assessment tools. Do case studies on particular students, longitudinal studies over four years, or I’ll do it. I’d love to do that work and get a picture of what students learn and do with their learning over the course of their four years. That’s cool research. It’s also assessment of learning outcomes.
Sorry this got so long. I guess I do have stuff to say about assessment. Next time I’ll post it in the right spot.
Well, for someone camped out on the sofa with antibiotics and tea, this is remarkably coherent. And I’m mostly with you. The benefit of assessment — at least as it is designed on our campus — is that it makes us think of our courses as part of a program, not just free floating courses. So if I’m teaching Lower Division course X, I need to be sure to cover Y skill so that when you get my students in an upper division class they know how to do Y. We may not, in history, be as linear as the sciences, but there are cumulative critical and analytical skills. So yes, while we assess our students all the time, we do NOT always assess them in terms of what we think our program is teaching. (And at least for my program, those are very meta-conceptual things, not specific detail ones.)
We also have it structured so that at least in theory, the annual assessments are formative, and then every 7 years you do a program review which is summative and evaluative. So the big use of assessment is for changing the major, altering what we cover, or whatever. It also satisfies the (evil) accreditors, who say they accept qualitative assessment, but in fact don’t.
In a conversation today with a colleague in bio, he commented that one of the things that his colleagues were learning from assessment was that skills that people assumed students had coming in needed to be taught. We’ve figured that out too. So it will make us better teachers. On the other hand, we had mostly figured out (impressionistically) what we demonstrated with a semi-rigorous study with a rubric and all that. We spend a lot of time quantifying our knowledge. And that’s when I want to scream.
I trust the antibiotics kick in soon!
Like Susan, I hope your health is soon restored. Also, in agreement with her, I think that assessment can be sometimes useful if it’s used mindfully within the context of the discipline and the program. What works for us at our regional comprehensive wouldn’t work at your institution or hers.
I’m pushing for my department to lay out some basic ideas of what skills need to be mastered in the 1st, 2nd and 3rd year of the program, for instance. (We’re pretty much coherent on the 4th year, but before that, we’ve never really discussed!) I’m pretty sure that my worn-out colleagues are wary of the muddle this portends. Who has time for one more thing on their plates, especially given all the faculty positions we’ve lost over the past few years without any replacement?
Thanks for continuing to take up this subject, but could you come up with a tag for everything about assessment?
My latest insight is in a comment toward the end of Dean Dad’s latest screed, but I really like some of the things you are offering here. They match well with what we have learned at my institution. In a few words: we need to take leadership in defining appropriate assessment precisely because we are the experts on what students are supposed to get out of taking our classes.
Historiann: We dutifully report the data to the Black Hole of Information, and we never hear back one way or the other. So, although this assessment exercise is the least intrusive I can imagine, I’m cynical because it feels like a faculty makework program meant to feed data to the beast.
The system described here has zero overlap with what we are doing as we pilot what we will likely be doing for the next decade. Yes, data gets fed to the IR beast (preferably in a format that makes the beast’s job manageable), but it isn’t IR’s job to give us feedback. It is their job to collate it for external consumption. It is our job to give ourselves feedback as we learn from the results of assessing what we were trying to do.
I was actually for assessment before it was in vogue, and I think I got that way in college, where they seemed to also have some form of assessment before it was in vogue.
I am not talking about some weird extra exercise, though, or complex syllabi; I am talking about course goals. I seem to remember these being clear from the catalogue — which was much clearer than the one at my current institution — and more detailed in the booklets departments would put out each term, with little paragraphs on each course. There were introductions to each level of course, so students could place themselves at skill and knowledge levels where they were comfortable, and so on.
After graduate school I have worked at various places where courses were – whatever the person teaching them came up with, and whatever the students in them could do. It has always seemed odd to me, and not because I believe in uniformity or a canned curriculum – I don’t. It’s just that it seems to me we should all have an idea of what sorts of things we want to be teaching students to do, and we should be able to tell them what those things are. It also seems to me that figuring out what they typically know at the end of course X, and being able to say what it is, is helpful for the person teaching course Y. But I am a minority voice on this matter.