Harness Continuous Improvement through Planning Intelligence & Quality Assessment

Continuous improvement is at the heart of campus efforts to strengthen performance. Yet higher education often fails to recognize that the quality of our assessment planning is the true driver of continuous improvement. We tend to devote significant resources to documenting the assessment activities of campus constituents, but far less to the plan and framework we put in place for assessing, even though the former can only be as strong as the latter. So how do we move toward a culture of authentic continuous improvement rather than live in a mindset dominated by compliance and accountability?

Defining Continuous Improvement on Campus

To invigorate our practice of continuous improvement, we need to first establish that it's not externally driven. No matter what lawmakers, parents, or taxpayers are asking about the value of higher education today, they cannot be the impetus for why we assess and look to do better. Even though all regional accreditors, and most disciplinary ones, speak directly or tangentially to the idea of continuous improvement, we cannot disguise our efforts as being driven merely by a desire to demonstrate compliance. Compliance may make sense as a goal, but our need to fulfill our mission, to contend with resources constrained by budget cuts, tuition increases, and low enrollments, and to answer an increasingly skeptical public means we must strive to do more.

Moving from a lens of compliance to one of intelligence takes intentional effort. Changing the culture from one focused on accountability and completion to one centered on using data meaningfully and advancing our mission will involve deep conversations with multiple constituencies on campus, perhaps none as important as faculty given their role in student learning. To earn their buy-in, we must find a proper balance between areas of frustration and areas of cooperation. For example, faculty may happily assist in efforts aimed at improving student success, developing their own pedagogy, garnering classroom feedback, or designing their curriculum more effectively. But they become reluctant when faced with unclear expectations, a lack of transparency in how data will be used, or what can be perceived as forced, duplicative effort. Even worse are threats that actions must be taken, or are only requested, because regional accreditors demand them. This demoralizes faculty and creates a false dichotomy between internal efforts and the peer accreditation process.

To begin a worthwhile evolution, we need to start doing the best possible job of overseeing compliance. Campus perceptions tend to suggest that faculty believe assessment is done (or should be done) to inform learning, while administrators think of it through a lens of accreditation, leading to unnecessary tension. Thankfully, monitoring assessment is a shared responsibility on most campuses. Senior-level stakeholders, embedded assessment administrators, and even administrative assistants can have a meaningful impact on assessment efforts and processes. But all parties should be asking whether they are spending their time and resources merely making sure boxes are checked, versus identifying and creating efficiencies that will both make checking boxes easier and further advance their efforts.

Initiating Change to Better the Campus

No matter what nomenclature we use for tracking assessment, nearly all efforts involve documenting a foundational framework and making a plan to assess, gathering and detailing assessment results and evidence, analyzing and interpreting results, and closing the loop on the impact of actions taken to drive continuous improvement. Through this process, we can work to move beyond merely checking whether a field is filled out and instead focus on ensuring an active process that produces meaningful work. Administrators of the assessment process can monitor for quality as much as completion, allowing for healthier conversations that better impact student success, program effectiveness, and institutional progress. In short, rather than issuing mandates to meet deadlines, calling to check in, and sending surveys seeking feedback once reports are done, we can thoughtfully engage all campus stakeholders throughout the process, aiding their efforts to fully live the idea of continuous improvement. And we can make it easier on ourselves at the same time. Completion, while necessary, simply isn't sufficient today. We have to use the data we collect if we want to truly experience continuous improvement.

But how can we best move from making sure we have a process to ensuring we follow that process and actually improve as an institution by using the data we collect? Strategically, we need to focus on three areas: tightly defining and thoroughly documenting our use of results, meaningfully surfacing assessment data, and creating structures of support at multiple levels of the institution. If these seem daunting, you are by no means alone. Few institutions have truly figured this out, and even those that have seem to wish they could do more. Navigating the delta between knowing something needs to be done and implementing that something, while demonstrating impact, takes time and concentrated effort. Research shows it takes over a year and a half on average to implement a closing-the-loop action, meaning it's essential to both surface information and understand impacts if we want to enact change for our current and future students.

A New Campus Process

If we focus on assessing student learning as an example, we must remember that the direction we provide academic programs will ultimately drive how they approach using their results. We can, after all, only expect from them what we guide them toward. Basic approaches to using student learning results might merely make direct reference to student development. The next step may go further and explicitly mention "use of results" or "improvement." Adding sophistication could lead us to hearing about intentional changes made to a program's curriculum. Even more advanced would be discussions of pedagogical changes led and initiated by faculty based on their own learning data. But the pinnacle would be calls for re-assessment to determine whether changes are leading to improvements.

What structures lead to advancing our sophistication? It begins by devoting the same level of resources to acting on assessment data as we currently devote to collecting and gathering it. We need to raise the bar for what counts as success and demonstrate our commitment through actions rather than merely through words and processes. Further, we need committees or roles on campus where assessment is embedded in responsibilities and rewarded when it is innovative and successful. Our culture must celebrate good assessment work rather than breathing sighs of relief at simply finishing. Leadership must be visible and speak to the importance of continuous improvement as internally benefitting our students, programs, and institution. Given that few people working in higher education were taught how to assess as part of a degree program, it's equally important to provide opportunities for staff and faculty to grow through internal and external professional development. And to enhance efficiencies, we should actively seek opportunities for authentic assessment data collection embedded within programs. If we want faculty and staff to see assessment as more than a requirement or chore, we should make it as easy for them as possible.

A New Vision for Assessment on Campus

Ultimately, the aim is to create a clear, coherent institutional assessment system. That system should lead to a shared understanding of what we need to assess, how often we need to assess it, and to what end we are doing so. It should allow faculty to analyze their own data and take action to help students learn, enhance their pedagogy, and drive programs to be as effective as possible. And it should foster an environment in which student learning data drives programmatic improvement. We urge campuses to move beyond compliance in the interest of true continuous improvement by asking how processes and plans can be improved today. Campuses will benefit when they improve their process quality by identifying gaps in their current approach and better understanding how time and resources are distributed across all phases of the assessment cycle. Benchmarking against best practices, peer institutions, and previous internal performance helps ensure optimal conditions for continuous improvement to take root. This helps create a culture in which promoting actionable data leads to easier demonstrations of closing the loop, better use of actions to drive demonstrated improvement, and clearer identification of exemplars.
