In the spring of 2011, Richard Arum and Josipa Roksa released what some would consider a controversial book called Academically Adrift. As many in the ACCU community know, various media outlets referenced this study to report that college and university students were not learning at an acceptable level, did not study enough, and did not meet the expectations of employers. Soon afterward, I began receiving phone calls and emails from students and colleagues at my institution asking whether the results applied to University of St. Thomas (UST) students. Fortunately, we began administering the CLA in 2007, two years after the authors' study cohort. This fortuitous timing placed us in a position to respond to our university community. In fact, the institution was in the midst of administering the final section of the longitudinal study at the same time the book was released.
As I read the book, I was surprised to learn that the actual assessment of learning occurred once the students had completed only three semesters of an eight-semester program. In addition, the study methodology used a convenience sample of 24 institutions with approximately 3,200 students. In all fairness, the authors did identify this as a limitation of the study, but they continued to argue that the sample was representative of higher education in general. Still, many media outlets treated the results as an accurate representation of colleges in the US. As a result, college officials throughout the country found themselves trying to explain why their institution did or did not mirror the results of Arum and Roksa's study.
Admittedly, I found the book to be a fascinating read and appreciated the authors' attention to the higher education literature. So much so that I would recommend this text as a resource for better understanding student engagement and behaviors in higher education. In addition, I applaud the authors for taking on one of the most controversial and debilitating topics in higher education. In fact, after 20 years in higher education, I would argue that learning assessment at the institutional level has been one of the most politically charged and difficult tasks faced by administrators at all levels of our industry. Still, I do have reservations concerning the authors' conclusion that students in American higher education are not learning at an acceptable level. The Collegiate Learning Assessment (CLA) is the primary instrument the authors used to make this determination. The CLA is a product of the Council for Aid to Education in conjunction with several philanthropic foundations. As I indicated earlier, UST voluntarily participated in this assessment. In fact, we are quite happy with the instrument and are in the process of using the results to identify strengths and weaknesses in our undergraduate program. Further, I believe the CLA has given the higher education community a tool to use alongside other learning assessment instruments so that we may finally provide quantifiable learning outcomes to internal and external constituents.
However, basing a conclusion of limited learning on only three full semesters may not be the best approach. After all, the first three or four semesters at our institution focus on what we refer to as the Core Curriculum. Like many Catholic colleges, UST uses the Core Curriculum to provide a knowledge base for students as they work toward determining their vocation. In fact, UST does not allow students to declare a major until their fourth semester. As such, we would not expect a student at our institution to have been exposed to the requisite skills necessary to thrive on the performance task portion of the CLA. True, our students have developed a strong intellectual foundation in literature, science, Catholic tradition, reason, and so on. However, the skills they develop in their first three semesters at UST do not map neatly onto the sample CLA questions the authors provide: one asks the test taker to evaluate the purchase of an airplane, the other to take a position on a crime-reduction program. Of course, these particular skills are very important, and we expect our students to develop them more fully during the final two years of their baccalaureate program, the period in which they are preparing for their vocation. In short, I agree with both the authors and the CLA regarding the validity of the questions. My disagreement lies with the timing of the assessment. Having access to our own CLA data enabled us to test this hypothesis.
So what did we learn? Using the same section of the CLA cited by the authors, UST researchers conducted the same freshman-to-sophomore study. In addition, we took the analysis a step further and explored the data beginning with the freshman year and concluding with the senior year. In our view, this would provide a fuller understanding of the learning that had occurred on this one measure of the CLA. The authors and the CLA researchers report this learning using a statistic called effect size: subtract the students' average (mean) score in their freshman year from the same students' average (mean) score in the second semester of their sophomore year, then divide the result by the standard deviation of the freshman-year distribution. According to the authors, their analysis revealed an effect size of .18. They go on to argue that an effect size of .50 to 1.00 would have revealed significant learning on the part of the students participating in the study.
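For readers who want to see the arithmetic, here is a minimal sketch in Python of the effect-size computation just described. The scores below are hypothetical, invented purely to illustrate the formula; actual CLA scale scores are not reproduced in this post.

    from statistics import mean, stdev

    def effect_size(freshman_scores, later_scores):
        # Effect size = (mean of later scores - mean of freshman scores)
        # divided by the standard deviation of the freshman distribution.
        return (mean(later_scores) - mean(freshman_scores)) / stdev(freshman_scores)

    # Hypothetical scale scores for the same five students at two points
    # in time, chosen only to illustrate the arithmetic.
    freshman = [1050, 1120, 980, 1100, 1010]
    sophomore = [1065, 1130, 1000, 1110, 1020]

    print(round(effect_size(freshman, sophomore), 2))  # prints 0.22

Note that this sketch uses the sample standard deviation; whether the sample or population form is used changes the figure only slightly, and the CLA's own published methodology would govern in practice.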
At UST, this same analysis reveals an effect size of .01, lower than the figure the authors present. As I indicated earlier, though, this result is not entirely surprising given the combination of the three semesters of curriculum completed and the skills necessary to excel on the performance task section of the CLA. In response, we then measured the same effect size from the beginning of the first year to the middle of the eighth semester (senior year). The results were markedly different, with the effect size reaching .84, near the high end of the authors' definition of acceptable learning. Again, the dramatic difference is not surprising given the nature of the UST core curriculum in the first two years and the intentional relationship of that core to the vocational preparation occurring in the junior and senior years.
These results are probably not unique to UST. One could argue that the baccalaureate experience is the sum of its parts rather than a snapshot taken approximately halfway through a student's program of study. Indeed, the CLA already assesses students through the senior year and provides those results to participating institutions. In closing, I want to emphasize my appreciation for the effort put forth by Arum and Roksa, as they have lent credibility to a very important conversation about learning assessment in an age of ever-increasing accountability. My hope is that this work will help higher education faculty and staff realize the importance of assessing the teaching and learning processes at their institutions. After all, if we do not, someone else may.