To Belong or Not to Belong to POD in Higher Education?

We may be one of the few centers for teaching and learning (CTLs) that do not belong to POD (the Professional and Organizational Development Network in Higher Education), a professional organization that Sorcinelli reported in 2006 contained 1,400 members. We used to belong, but the institutional fee was too large an item to maintain in our budget, a budget we suspect is among the smallest in the nation. We also confess that we have never been to POD’s annual conference.

Should we belong? Probably. The benefits of being able to reach out to other CTLs for solutions to common problems are great. But despite not subscribing to a national service, we are not totally alone.

Related Reading: How to Improve Faculty Attendance at Higher Ed Professional Development Events

State Organization

Luckily, our state has its own mini-POD. All higher education in the commonwealth is governed by the Council on Postsecondary Education (CPE), which years ago created the CPE Faculty Development Workgroup (FDW). Consisting of the six regional comprehensive universities, the University of Kentucky, the University of Louisville, 19 independent schools, and the state’s community college system, this group has its own bylaws, a rotating annual facilitator, four meetings per year (half of them held online), and a yearly conference.

As members of the FDW, we have contributed heavily to the group’s well-being throughout the past decade. In fact, in six of the last ten years, we have been in charge of the group’s conference. Three years ago we moved it to our university and renamed it the Kentucky Pedagogicon, then simply The Pedagogicon (usually followed by its year).

What probably prodded us into this post was that last Friday we held Pedagogicon 2016, and the entire conference experience has weighed heavily on our minds. After a few years of increased attendance, this year the number of conferees fell off. Ironically, even as we added attendees from Massachusetts, Texas, and even Toronto, three of our major regionals did not send a single person to the conference, nor did they name a representative to our group.

And that decline we attribute directly to the gradual disintegration of the FDW. In point of fact, one of our counterparts at another state institution told us at the Pedagogicon that the CPE FDW has fallen to the bottom of her list of priorities. Indeed, this year the group met only once online, and we had to hold a working BBQ the night before the conference to bring everyone up to date. Tellingly, only three people attended, one of whom represented the CPE.

The principle is simple—a chain is only as strong as its weakest link.

So we sit here at the end of another academic year trying to figure out what direction to go. As we see it, we have three choices (or some combination thereof):

1) Drop out of the FDW (if we can).

2) Rejoin POD.

3) Rejuvenate the FDW.

We have chosen option #3, and we may even find the money to rejoin POD in the fall, especially since this year’s national conference is being held less than a hundred miles away.

Related Reading: An Innovative Plan for Assessing Faculty Development

Suggestions for Rebuilding

As we try to generate more interest in the state workgroup, we have come up with several suggestions for doing so.

  1. Survey the members. At last week’s Pedagogicon and the pre-conference BBQ, we talked with various college representatives as well as some from the CPE. Everyone we contacted offered suggestions, but most were about the conference rather than the FDW.
  2. Contact the provosts at the member institutions that do not have a representative and ask if these administrators will select someone.
  3. Remind all that the higher ed accrediting agency to which we belong, SACS, has SACSCOC Principles of Accreditation (2012) that include faculty development as a Comprehensive Standard: “3.7.3 The institution provides ongoing professional development of faculty as teachers, scholars, and practitioners.” Obviously, the FDW and its Pedagogicon offer a partial way of fulfilling that requirement.
  4. Contact the state’s CPE to have them call together a meeting of the workgroup. Perhaps if we can get this authority to exert some influence, the group can be revitalized.

Final Thoughts

Sometimes we blame ourselves for the FDW’s decline. The group’s basic function for as long as we have belonged has been to provide a state-wide conference for the exchange of the best pedagogical strategies. Maybe by handling those duties ourselves for 60% of the decade, we have made the other state schools feel less essential and less inclined to participate.

In any case, belonging to a larger group with similar concerns, be it the FDW or POD, seems absolutely necessary.


Author

Hal Blythe, Ph.D., writes everything from literary criticism to mystery stories. In addition to the eleven books he’s published with New Forums, Hal has collaborated on four books on a variety of subjects, over 1000 pieces of fiction/nonfiction, and a host of television scripts and interactive mysteries performed by their repertory company. He is currently co-director of the Teaching and Learning Center at Eastern Kentucky University. Meet Hal Blythe.

 


The Importance of Physical Space for Faculty Performance

While space was the final frontier for countless episodes of Star Trek, creating the optimal teaching and learning spaces may be the final academic frontier for centers for teaching and learning (CTLs). As Tom Kelley, CEO of IDEO, put it in his foreword to Make Space (2012), “Space matters. We read our physical environment like we read a human face” (p. 4).

The importance of physical space is a concept we delved into in both Teaching Applied Creative Thinking (2013)—see Chapter V on “The Learning Environment for Optimal Creative Thinking”—and Transforming Your Students into Deep Learners (Stillwater: New Forums, 2016)—see “Strategy VII: Creating Spaces for Deep Learning.” In these books, we enunciated several key principles for achieving Kelley’s goal: “Space is a valuable tool that can help you create deep and meaningful collaborations in your work and life” (p. 5).

At the moment, our CTL includes three major spaces: the Noel Studio for Academic Creativity (a space housed in the campus library with a large open area, a high-tech classroom, breakout spaces for small-group collaboration, presentation practice rooms, and a media wall with large monitors), the Teaching & Learning Center (consisting of two offices and a 33×37-foot Faculty Lounge), and a new space in the old campus bowling alley where we are building an experimental classroom. The Noel Studio was completed in 2010, the Faculty Lounge in 1939.

Related Reading: Transforming Your Students into Deep Learners

Rebuilding a Space by Principles

This past year, we have been refurbishing the 75-plus-year-old Faculty Lounge, an Art Deco relic of opulence. Forty years ago the Faculty Lounge was the campus version of the 18th-century coffee house: faculty came from across campus and often had to wait for a seat. In an era before social media and campus email, the lounge was where one learned what was happening on campus, officially and unofficially (the rumor mill ran as often as the coffee grinder). Our central focus in the refurbishment was to transform the space from a comfortable lounge of yesteryear into a trendy, contemporary workshop space that invites faculty in to participate.

Our first upgrade was obviously technological: we had a new wireless access point installed that could handle heavy traffic. From that point on, we met quite often and followed several design principles we laid out in Teaching Applied Creative Thinking:

  • Natural light
  • Bright colors
  • Flexible and comfortable furniture
  • Writable spaces (pp. 23-24).

The Faculty Lounge has two large east-facing windows as well as a French door, so we have plenty of natural light (in fact, so much light beams through in the early morning that shades are a must). Overhead lighting consists of four decorative but dim lights as well as a light circle we refer to as the Cone of Silence (thank you, Get Smart). Given the building’s basic knob-and-tube wiring system (our budget prohibits such extensive rewiring), the most effective upgrade was switching to LED lights.

Bright colors (which adorn the Noel Studio) presented a problem for the Faculty Lounge. The space is fairly traditional. As a result, we have lighter colors on the walls and even some colorful paintings and posters. The rug is basically a multi-colored brown, the tables are a dark wood, and the chairs contain a mixture of blues and grays. Obviously, combining the past and present necessitated a compromise.

Aside from the colors, the furniture’s main element is movability. We have seven tables, each with six chairs, and all seven tables can be reconfigured to achieve Kelley’s “deep and meaningful collaborations.” As the lounge contains a fireplace, we built a foundation of a sofa and two chairs in front of it. In its previous iteration, the lounge contained eight wheel-less square tables, each able to seat only four people. As a result of the upgraded space, we will now be able to accommodate 50 people for interactive faculty development sessions.

Writable spaces also followed the principle of movability. Rather than trying to attach screens to old plaster-and-lath walls, we went with two smartboards and two monitors on wheels. The movable monitors mean that even for presentations we no longer need the traditional projector and screen, because anyone can access them.

Related Reading: How to Improve Faculty Attendance at Higher Ed Professional Development Events

The Key Principle: Mentoring from the Middle

Our major principle was not spatial but pedagogical. In Teaching Applied Creative Thinking, we stressed the importance of spatial and technological decisions emanating from pedagogical preferences, and in the same book we posited that the teaching and learning paradigm best suited to students is the mentor from the middle. Such a pedagogical concept requires the instructor/mentor to be not a sage at the front of the room or even a guide on its side, but a teacher-learner immersed in the middle of the group. Translating this concept into reality necessitated running a thin, flat wire under the rug to a podium/pocket cart at its center.

Conclusion

The new workshop space debuts this fall. We plan to run all our Teaching & Learning Innovation (TLI) Series workshops in it, as well as our three to four professional learning communities. More importantly, we plan to assess how faculty interact in this new space: how it is used, how it functions, and whether their collaborations actually result in deep learning.

To paraphrase Wittgenstein, the limits of my space mean the limits of my world. The more optimal the space, the more optimal the learning.


Author

Dr. Russell Carpenter is director of the Noel Studio for Academic Creativity and Program Director of the Minor in Applied Creative Thinking at Eastern Kentucky University. He is also Assistant Professor of English. Dr. Carpenter has published on the topic of creative thinking, among other areas, including two texts by New Forums Press. In addition, he has taught courses in creative thinking in EKU’s Minor in Applied Creative Thinking, which was featured in the New York Times in February 2014. Meet Russell.

 


Faculty Assessment Efforts: To Stipend or Not To Stipend

Assessment is a people process – not a person process – a people process. In higher education, the people are the faculty. It’s always a bit of a surprise that in the world of higher education, where shared governance and decision-making are sacrosanct, faculty members often take little interest in assessment. That irony aside, assessment in higher education MUST include faculty.

The challenge, of course, is how to change policies and practices in a way that will increase faculty involvement in assessment. There it is. It’s finally been said. Enhancing faculty involvement in assessment means change.

Related Reading: The Ins and Outs of Higher Education’s Culture of Assessment

Initiating Change

Change can be mandated, but that type of change is hardly engaging, almost impossible to sustain, and does little to create the coveted culture of assessment so often referred to in higher education. In reality, assessment by command creates the antithesis of a culture of assessment – an environment where assessment is relegated to a few administrators who chase down faculty for contrived and often meaningless information in an effort to meet burdensome accountability demands.

As an alternative to mandates, administrators on many campuses attempt to use extrinsic rewards to increase faculty involvement. These incentives come in varied forms including stipends.

Will incentivizing assessment efforts actually change the level of faculty involvement in the process? Well, it depends. Because change is complex, it might be helpful to reference a bit of change theory. Kurt Lewin’s (1951) seminal portrayal of change still offers a suitable framework for a contemporary discussion of change. According to Lewin, change involves three steps.

  1. Unfreezing or disrupting status quo behaviors
  2. Moving to and reinforcing new behaviors
  3. Refreezing or stabilizing new behaviors

Stipends work well to unfreeze current behaviors. Individual faculty stipends can serve as a counterbalance to the risks of trying something new in a course assessment. Stipends awarded to programs can broaden faculty involvement in program assessment efforts and offer compensation for the faculty’s investment of time. When the criteria for stipends require recipients to share their efforts with colleagues, the thaw initiated by the original investment increases.

If stipends unfreeze existing behaviors, recognition reinforces new behaviors. Recognition of the exemplary assessment efforts of a faculty member or program conveys the value of assessment to others on campus. Publishing assessment accolades on websites or highlighting them during faculty events takes few resources but reaches a large audience. This type of recognition underscores the value of the new behaviors and at the same time provides exemplars which can guide future assessment efforts of others.

Related Reading: Assessment is a Team Sport: A Collaborative Faculty Process

Sustaining Change

True change implies long-term, sustainable returns. In Lewin’s terms, refreezing embeds new behaviors into an organization’s ethos and improves the odds they will stick. Refreezing assessment efforts can only be accomplished by rewarding faculty for those efforts. While stipends can jump-start initial assessment efforts and recognition can reinforce new behaviors, rewards stabilize and, hopefully, institutionalize them.

There are a couple of caveats. Rewarding assessment only as a faculty service activity will thwart attempts to establish a culture of assessment. To galvanize an institutional culture of assessment, campus structures must reward assessment as a legitimate teaching activity. This does not have to be difficult. Teaching awards are commonplace on most campuses. Adding an award (or two) highlighting assessment’s impact on student learning requires minimal resources but publicly validates assessment as a critical component of teaching.

However, until assessment intersects with research, widespread faculty involvement will be difficult to achieve. Rewarding research related to assessment may not be as challenging as it sounds. Campuses can do two things. First, internal mini-grants can be earmarked for assessment research. This type of funding offers junior faculty a great opportunity to secure a seed grant and to be engaged in assessment from the outset of their careers. Second, campuses can elevate action research within the promotion and tenure process. Action research and assessment make great partners: both are participative, and both emerge from and respond to situational context.

Related Reading: 3D Assessment: Using Data, Discussions and Decisions to Engage Faculty

Final Thoughts

Yes, even small assessment stipends will require resources, and resources are limited. As in most organizations, anticipated return on investment and institutional values drive campus resource allocation. Higher education has been clamoring for increased faculty engagement in assessment for years. In terms of building a campus culture of assessment, stipends offer a way to put the proverbial money where the mouth is.

References:

Lewin, K. (1951). Field theory in social science. New York: Harper & Row.


Author

Dr. Connie Schaffer is an Assistant Professor in the Teacher Education Department at the University of Nebraska Omaha (UNO). She serves as the College of Education Assessment Coordinator and is involved in campus-wide assessment efforts at UNO. Her research interests include urban education and field experiences of pre-service teachers. She co-authored Questioning Assumptions and Challenging Perceptions: Becoming an Effective Teacher in Urban Environments (with Meg White and Corine Meredith Brown, 2016).


Under Construction: Developing a Style Sheet for the Journal of Faculty Development

For the past few weeks we have been reading and rereading manuscripts for a special issue of the Journal of Faculty Development (JFD) on the future of faculty development. Rusty has taken over as the new editor of the Journal, and one of the things he would like to institute is a guide on writing, writing styles, and, specifically, some major grammatical suggestions. If you were to visit New Forums’ section of this website devoted to the JFD’s “Author Guidelines,” you would find instructions about following the APA Publication Manual (6th edition), dealing with copyrighted materials, and the review process, but as yet lower-level concerns about grammar have not been addressed.

As a result, one of our projects has been developing these guidelines, and as our title indicates, they are “Under Construction.” What follows are a rationale and some guidelines. You might remember that we began addressing this issue last year in our JFD article “The Pancake Professor and the Decline of Scholarly Writing” [29.3 (2015): 69-70].

When you have finished this post, if you have any suggestions, please email them to us at [email protected].

Related Reading: Going to WAR: Using a Weekly Activities Report for Assessment

Rationale

Last April, Guardian data editor Mona Chalabi (HT Daily Wire) opined that grammatical correctness is actually a type of white privilege, claiming that “Grammar snobs are patronizing, pretentious, and just plain wrong.” She continued, “All too often, it’s a way to silence people and that’s particularly offensive when it’s someone who might already be struggling to speak up” (we won’t point out her lack of a comma between two main clauses joined by a coordinating conjunction). In rebuttal, NewsBusters’ Melissa Mullins argued that grammatical correctness is more “a sign of an educated person.”

As writers of over 25 published books and 1200 articles as well as editors, we offer a third position. Grammar exists for one purpose, clarity, and the basic commandment of all grammar rules is “Thou shalt not confuse thy reader.” Note that by our employing pronouns not in current usage, we might have violated the key rule (especially for younger readers)—and confused you. Clarity is what allows both someone “struggling to speak up” and “educated persons” to communicate on the highest level. Grammar is not a privilege, but the appropriate tool for exchanging ideas. As Barry Wylant argues, “Indeed, even language forms a type of conceptual space where the rules of spelling and grammar allow one to make sense of individual letters and words” [“Design Thinking and the Experience of Innovation,” Massachusetts Institute of Technology Design Issues 24.2 (2008): 9].

To help our readers “make sense” of what our writers are trying to express, we ask them to observe the following guidelines.

Related Reading: How Design Thinking Helps Innovate Faculty Development, Part I

12 Grammar Guidelines:

1) Use consistent formatting of author bios, including degree. For example:

Charlie Sweet, Ph.D., is the Co-Director of the Teaching & Learning Center at Eastern Kentucky University. With Hal, he has collaborated on over 1100 published works, including 23 books, literary criticism, educational research, and novels (as Quinn MacHollister).

Hal Blythe, Ph.D., is the Co-Director of the Teaching & Learning Center. With Charlie, he has collaborated on over 1100 published works, including 22 books (eight in New Forums’ popular It Works For Me series), literary criticism, educational research, and a stint as ghostwriter of the lead novella for the Mike Shayne Mystery Magazine.

2) Try to avoid “There is . . .,” “There are . . . ,” and “It is . . . ” constructions.

3) “This” and “That” are always followed by a noun.

4) Authors should use the serial comma (i.e., one comma less than the total number of items in the list) for words, phrases, or clauses in a series. For example: The journal publishes research, scholarship, and creative works. Semicolons appear in a list only if the individual items contain commas.

5) A comma is used to separate two main clauses joined by one of the seven coordinating conjunctions: for, and, but, or, nor, so (that), yet. Place the comma before the conjunction.

6) Avoid “when” and “where” after “is” in definitions.

7) Don’t use redundancies such as “the reason why” or “is because.”

8) Do not substitute “would be” for the present or past verb tense.

9) Use pronouns properly (e.g., “Terry was the kind of student that took tests poorly” should be “Terry was the kind of student who took tests poorly”).

10) Hyphenate two consecutive modifiers being used as a singular adjective (e.g., I developed a six-minute video on degree-completion students).

11) Use introductory commas (e.g., Therefore, I conclude that flipping goes well).

12) If APA does not provide a format for a reference, make one up as best you can following close examples. Be consistent if you have to make up similar references.

Any suggestions?


Author

Charlie Sweet is currently Co-Director of the Teaching & Learning Center (2007+) at Eastern Kentucky University. Before going over to the dark side of administration, he taught American Lit and Creative Writing for 37 years in EKU’s Department of English & Theatre, where he also served as chair (2003-2006). Collabo-writing with Hal Blythe, he has published well over 1000 items, including 15 books, 11 of them with New Forums. Meet Charlie.


3D Assessment: Using Data, Discussions and Decisions to Engage Faculty

My two previous blog posts clarified the meaning of the common, if slightly overused, phrase culture of assessment and presented assessment as a collective and collaborative process. Of course, no assessment conversation is complete without talking about data. Certainly, data are central to the process of program assessment, but data alone result in one-dimensional assessment. Robust assessment includes two other important dimensions: discussions and decisions. Using this 3D model – data, discussions, and decisions – enhances assessment and increases faculty engagement in the process.

Related Reading: The Ins and Outs of Higher Education’s Culture of Assessment

Data, Discussions & Decisions

Let’s start with the obvious – data. Whether a program is drowning in data or struggling to find data, it is important to realize not all data are equal. Effective assessment efforts focus on the most relevant data. By limiting the amount of data to that which is most critical, programs can routinely collect, organize, disperse, and analyze data without overwhelming faculty with these tasks.

But assessment does not live on data alone. Discussions contextualize data, and context matters. As they discuss static data, faculty can recognize program strengths and weaknesses. More importantly, faculty should be encouraged to identify departmental, campus, and/or external dynamics which impact data. Discussions about what may hinder or contribute to student learning make data dynamic and more meaningful to faculty.

Discussions should lead to decisions. Here we need to be wary of another assessment cliché – data-driven decisions. Data-driven decision models discount the very contextual variables faculty should be encouraged to discuss. Data are always situated within programs, which must respond to internal and external influences as well as to the data themselves. When faculty consider both program data and context, data guide rather than dictate decision making. Decisions become data-informed rather than data-driven.

Related Reading: Assessment is a Team Sport: A Collaborative Faculty Process

Making it Happen

Engaging faculty in data-informed decision making takes deliberate effort. Not only does the process need intentional facilitation, but faculty involvement must also be documented. If you think (or hope) this is just another example of utopian expectations in higher education, you may want to think again. Even a cursory review of the Higher Learning Commission’s accreditation criteria tells us institutions must provide evidence of substantial faculty involvement in this process.

Administrators need not lock faculty in a room full of data charts or spreadsheets to make this happen. A less dramatic approach can draw faculty into assessment and decision-making processes. Initiating 3D assessment can be as straightforward as the following three-step practice.

Step 1. Routinely and systematically collect the most relevant data throughout the course of a semester or term.  Distribute the data to those who have the most vested interest in it.  Keep in mind, not all faculty members need to review all data. Stick with what is relevant.

Step 2. Create a simple form, similar to the one in Figure 1, to record the date, topic, and participants in the discussion. Pose open-ended questions which prompt faculty to consider the many factors influencing the data. For example, ask faculty to identify possible barriers to student learning or evaluate the extent to which an assignment, project, or examination is aligned (or not) with program outcomes.

[Figure 1. A form for recording the date, topic, and participants of an assessment discussion, with space at the bottom for the resulting decisions.]

Step 3. On the bottom of the form, document what decisions were made. A full spectrum of choices, ranging from a decision to change nothing to a decision to change an entire program, will validate the consideration of data in conjunction with the program’s context and the professional judgment of faculty. Collect the forms as documentation of faculty involvement in assessment and decision making. (A minimal electronic version of this log is sketched just below.)
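
For programs that would rather keep these records electronically, here is a minimal sketch in Python. The file name and field names are hypothetical, not part of any standard system; each discussion becomes one row of a running CSV file that mirrors the date, topic, participants, and decisions sections of the form.

    # Minimal sketch of an electronic discussion log; file and field names are hypothetical.
    import csv
    from datetime import date
    from pathlib import Path

    LOG = Path("assessment_discussions.csv")
    FIELDS = ["date", "topic", "participants", "decisions"]

    def log_discussion(topic, participants, decisions):
        """Append one discussion record to the running CSV log."""
        new_file = not LOG.exists()
        with LOG.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()  # header row written once, on first use
            writer.writerow({
                "date": date.today().isoformat(),
                "topic": topic,
                "participants": "; ".join(participants),
                "decisions": decisions,
            })

    # Example (illustrative names only):
    # log_discussion("Barriers to mastery of Outcome 3",
    #                ["Chen", "Ortiz", "Walker"],
    #                "Revise the capstone rubric; no curriculum change")

Accumulated over a few semesters, such a file doubles as the documentation of faculty involvement that accreditors look for.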

3D assessment is really that easy. Most faculty willingly engage when they see the connections between data, discussions, and decisions. These dimensions shift assessment from a bureaucratic distraction unworthy of faculty time to a professional dialogue in which faculty influence important decisions.


Author

Dr. Connie Schaffer is an Assistant Professor in the Teacher Education Department at the University of Nebraska Omaha (UNO). She serves as the College of Education Assessment Coordinator and is involved in campus-wide assessment efforts at UNO. Her research interests include urban education and field experiences of pre-service teachers. She co-authored Questioning Assumptions and Challenging Perceptions: Becoming an Effective Teacher in Urban Environments (with Meg White and Corine Meredith Brown, 2016).


How to Improve Faculty Attendance at Higher Ed Professional Development Events

With one week remaining in the semester and one event to go, we are already assured of doubling last year’s attendance in our Teaching & Learning Center’s professional development program. What did we do to improve campus participation? In truth, it wasn’t just one thing, but a combination of them. Of primary importance is the fact that our two-person, faculty-facing CTL (we lost an administrative assistant in a reorganization) achieved greater funding and more personnel (a director, a tech, a media producer, and a part-time instructional designer) by merging with another academic unit, the student-facing Noel Studio for Academic Creativity. In short, we reached critical mass.

Of course, increased attendance is not our only goal, but even assessment that evidences value (faculty learning outcomes) can occur only after we’ve put those faculty bodies in our seats.

Six Suggestions for Improving Faculty Attendance

1. Create a name for your sessions. Make sure that the name emphasizes both a continuum of similar events and some aspect of your mission statement. As one of our favorite proverbs states, “The beginning of wisdom is learning to call things by their right names.” What had simply been labelled “Roundtables” became the Teaching & Learning Innovation Series (the TLI). “Teaching” and “Learning” obviously reflect our new unit’s facing both faculty and students, while “Series” suggests that professional development is more than a potpourri of one-shots, and we appeal to the “Collect the entire set” mentality. Importantly, the new name of the series captures the essence of our unit’s mission statement—“Helping Teachers Help Students Learn Deeply”—and allows us to brand or create an identity.

2. Try to theme your events with something valued by your audience. While the three of us are now primarily administrators, Hal and Charlie began as faculty, and Rusty still teaches, so we have a good sense of what faculty want as well as need. Since faculty try, at a minimum, to stay current with higher education trends, the term “Innovation” implies they are getting things that happened after they left graduate school. In our specific case, the very name of the Noel Studio for Academic Creativity is the basis of innovation. Of course, a theme also unifies professional development.

3. Establish an electronic registration system. While we never turn away drop-ins, we prefer that faculty pre-register for every event. Pre-registration represents a commitment on their part, and a list of participants gives us an advance idea of how much material we need (e.g., food, as well as how many books to order or articles to reprint). Google has made pre-registration easy, and it provides valuable contact records because we always ask for an email address. (A minimal sketch of a reminder workflow built on those records appears after this list.)

4. Publicize the events in multiple ways. Prepare a base list of events in the semester before the series is held. We always provide physical and virtual copies of the entire schedule during New Faculty Orientation. We advertise the maximum number of times that our university’s daily electronic announcement system allows (in our case, EKU Today). Our pre-registrants receive personal emails about not only the event for which they registered, but every event in the series. Flyers for the next event are handed out to faculty whenever they attend a workshop. At the conclusion of every event, we email our participants a toolkit—a list of resources, exercises, and strategies they can use. Our website lists the events. And Rusty tweets more than the birds flying inside our favorite big box stores, Lowe’s and Home Depot.

5. Utilize a workshop format. If faculty are going to surrender their time, they must feel they are receiving something worthwhile, and they must feel they are taking charge of their learning. Hence, the required active-learning format of workshops satisfies those needs. Most importantly, if we as professional developers are stressing an active-learning, mentor-from-the-middle model of instruction, we had best reinforce that concept at our own workshops. We insist all workshop facilitators employ some activities and interaction. In fact, we have a set of guidelines for facilitators on our website.

6. Try to use a variety of workshop facilitators. Our first choices are always our Faculty Innovators (about whom we’ve written much in the past) because they have been selected for their skills and been trained in effective strategies. Our second choice is the three of us, but while we know we can do the job, we’d prefer to solicit others (in fact, the three of us have facilitated our three highest-attended workshops in the TLI Series). For that reason we solicit workshops from faculty and administrators who have contacted us about some pedagogical strategy with which they have had success, and sometimes when the University’s every-other-week email comes out with a list of faculty publications and presentations, we solicit those people. Sometimes faculty bring their classes to the Noel Studio for a workshop for their students, get to talking with Rusty about what they are doing, and, voila, a TLI Series event is born.
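
Below, as promised in suggestion 3, is a minimal sketch of such a reminder workflow in Python. Everything in it is hypothetical rather than a description of our actual system: it assumes the pre-registrations have been exported to a CSV file with Name and Email columns and that a campus SMTP server is available.

    # Minimal sketch of a pre-registration reminder; every name here is hypothetical.
    import csv
    import smtplib
    from email.message import EmailMessage

    SMTP_HOST = "smtp.example.edu"  # hypothetical campus mail server
    FROM_ADDR = "tlc@example.edu"   # hypothetical center address

    def send_reminders(csv_path, event, when):
        """Email each pre-registrant in csv_path a reminder about one event."""
        with open(csv_path, newline="", encoding="utf-8") as f, \
                smtplib.SMTP(SMTP_HOST) as smtp:
            for row in csv.DictReader(f):
                msg = EmailMessage()
                msg["Subject"] = f"Reminder: {event}"
                msg["From"] = FROM_ADDR
                msg["To"] = row["Email"]
                msg.set_content(
                    f"Dear {row['Name']},\n\n"
                    f"A reminder that you are registered for {event} on {when}. "
                    "We look forward to seeing you there.\n\n"
                    "The Teaching & Learning Center"
                )
                smtp.send_message(msg)

    # Example: send_reminders("tli_registrations.csv", "Flipping the Classroom", "Sept. 14")

Because every event feeds the same contact list, the same loop can just as easily announce the next event in the series to past attendees.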

In short, this year we have focused on reaching more faculty. The TLI Series has done that, but we have found other ways, something we’ll discuss in future posts.


Author

Hal Blythe, Ph.D., writes everything from literary criticism to mystery stories. In addition to the eleven books he’s published with New Forums, Hal has collaborated on four books on a variety of subjects, over 1000 pieces of fiction/nonfiction, and a host of television scripts and interactive mysteries performed by their repertory company. He is currently co-director of the Teaching and Learning Center at Eastern Kentucky University. Meet Hal Blythe.


Assessment is a Team Sport: A Collaborative Faculty Process

If you look up “assessment” in the dictionary, you will not find any reference to team sports. What you will find are definitions that include concepts such as evaluation, judgment, appraisal, valuation, measurement, or estimation. Certainly, these are important elements of assessment, but they have the potential to constrict assessment to a product. Assessment is not a product; it is a process. Nor is assessment a solo endeavor. It is a team effort.

According to Linda Suskie (Assessment & Accreditation Consultant) and Jane Wolfson (Towson University), program assessment is the process of deciding what we want students to learn and making certain they learn it (Suskie, 2016). Note the emphasis on the plural pronouns.

Related Reading: The Ins and Outs of Higher Education’s Culture of Assessment

The We & They of Assessment

Let’s examine the “we” first. In a culture of assessment, this pronoun can refer to varied groups of people. We is often the program faculty who map curriculum and determine the most critical outcomes students must master. It can also refer to professional organizations that have established national standards within their disciplines. In both cases, it is a group that has collaboratively identified and vetted standard benchmarks within a field of study.

The second pronoun, “they”, obviously refers to students. The important distinction is the reference to multiple students as opposed to a single student. This implies assessment is not synonymous with grades given to individual students.

While grades indicate the extent to which one student met a learning outcome, assessment provides indication of the aggregate performance of students in relationship to a learning outcome. They could be a collective of students within a single course, across multiple sections of a course, or from different courses which address the same outcome. Regardless of the makeup of the group, assessment results of a set of students inform the teaching and learning within a program.

Assessment Is: What WE DO

The processes of identifying outcomes, assessing performance, and making program decisions complete the assessment cycle. You can find different variations of the assessment cycle, but I use the graphic below because it accomplishes so much with a simple image and few words. The graphic defines program assessment as a collaborative and collective process, depicts the assessment cycle, and represents a culture of assessment.

[Figure. The “Assessment Is: What WE DO” graphic, depicting the assessment cycle of what is taught (We), Weighed Evidence (WE), and Data Ownership (DO).]

 

First, notice that what is taught is not reflected by a single faculty member or by an outside administrative entity. What is taught is determined by a collective and collaborative we. This may reflect the shared expertise of program faculty or, in some instances, professionals within a discipline.

In both cases, a team of subject-specific authorities comes to a consensus regarding student learning outcomes. They identify what students should know and be able to do at the completion of a program. This means responsibility for student learning is not isolated at the individual course level. There is a collective program responsibility.

Second, assessment results are labelled by an additional WE – Weighed Evidence. This implies not all evidence is equal, and some results may hold greater value for a program. While grading results of individual students is important, the graphic conveys the assessment results which are most influential to program decisions are the collective evidence of a group of students.

Third, Data Ownership implies assessment results have an owner who will DO something with the data. The owner is not a single person such as a chair or central administrator. The owner is the entire program team. In order to have continuous program improvement, faculty must have access to assessment results and review them as a faculty group rather than as individual course instructors. Doing so will expose potential curriculum gaps across the program. Only then will assessment influence decisions and actions which impact teaching and learning throughout the entire program.

Finally, the graphic captures what is meant by a culture of assessment. When a culture of assessment exists, assessment becomes the natural order of business and faculty view assessment as inherent to their work. To faculty, assessment simply becomes, “What WE DO.”

What does this all mean? Assessment is a team sport and as such, should not be relegated to administrators. Involvement of the entire faculty is critical to all components of the assessment cycle, and program assessment is dependent on the collective expertise and collaborative efforts of faculty.

Administrators do have an important role in building and sustaining a culture of assessment. Administrators lead the faculty team by defining roles and providing routines, resources, and recognition related to assessment efforts. These topics will be the focus of future blog posts.

References:

Suskie, L. (2016). Taking a fresh look at assessment [Webinar PowerPoint slides, March 14, 2016, University of Nebraska Omaha].


Author

Dr. Connie Schaffer is an Assistant Professor in the Teacher Education Department at the University of Nebraska Omaha (UNO). She serves as the College of Education Assessment Coordinator and is involved in campus-wide assessment efforts at UNO. Her research interests include urban education and field experiences of pre-service teachers. She co-authored Questioning Assumptions and Challenging Perceptions: Becoming an Effective Teacher in Urban Environments (with Meg White and Corine Meredith Brown, 2016).


Applying CRISP to Change Higher Education Campus Culture

In chapter nine of our Achieving Excellence in Teaching (2014), we explain the importance of using CRISP* as an organizational principle for effective classroom instruction in order to help students learn deeply. As we say there, “CRISP is an acronym for classroom methodology based on unity of purpose as an organizational principle; the process involves five ordered and inter-related steps: Contextualize, Review, Iterate, Summarize, and Preview” (pp. 54-55).

Recently, we have also discovered that the same five-step process of organization that promotes deep learning in the classroom can be effectively applied to other campus initiatives to bring about deep learning in stakeholders.

In short, by applying CRISP to these initiatives, we feel we have contributed to changing the campus climate.

Related Reading: Achieving Excellence in Teaching

3 Examples of Using CRISP to Change Campus Culture

Example One. As Co-Directors of the Teaching & Learning Center (TLC), Hal and I were often asked, “What do you guys do over there anyway?” Relying on our CRISP principle, we knew we had to come up with a context: a succinct, fundamental, and powerful concept that both encapsulated our mission and was easily remembered. Our first response showed up on our website as our motto, “Helping Teachers Help Students Learn.”

Then, when we sat down to write Achieving Excellence in Teaching, we asked ourselves not only what the major characteristics of a terrific teacher were, but what the end purpose of all these strategies was. Lifelong proponents of Roethke’s “I learn by going where I have to go,” we centered the book’s chapter three on our answer: deep learning, the key reason for these strategies. That insight caused us to make a subtle change in our TLC motto, to “Helping Teachers Help Students Learn Deeply.”

Example Two. As all three of us facilitate our institution’s New Faculty Orientation (NFO), we realized that we were providing five days of orientation that went in a dozen directions: campus ID, campus tour, tour of the 22-county service region, getting a laptop, meeting with chairs and deans, talking with HR about benefits, etc. But what was our focus? When we tried to be all things to new faculty, our message was dispersed in many directions, and like a lecture that’s all over the place, we wondered what the new faculty were learning. Did they have a dominant take-away from NFO? Taking a cue from our motto, we knew we had to once again contextualize.

After brainstorming about what our key concept should be, we came up with another succinct statement: “Excellence in Teaching Is Job One.” With that statement as our guiding light, we reduced New Faculty Orientation to three days and, more importantly, achieved our context. We now begin on a Wednesday with a morning consisting of two workshops: one on Best Pedagogical Practices (which we not only talk about but demonstrate) and another on What We Teach.

As a result, our NFO evaluations have gone through the roof as the new faculty receive a concentrated message about the importance our institution places on good teaching. Moreover, we’ve taken a suggestion from our Operations Specialist, and at the bottom of every campus email we send, we broadcast the fundamental and powerful concept that “Excellence in Teaching Is Job One.”

Example Three. When we finally convinced the provost that our institution needed a Faculty Excellence in Teaching Program (FETP), one of the reasons we were able to accomplish this task was the constant iteration, through saturation in our emails, presentations, and workshops, of our previous two fundamental and powerful ideas. Our FETP provided an application of the two ideas and used review to tie into them (new knowledge is built upon old).

In summary, our point is simple.  Whether as a teacher or as part of a larger unit, if you want to get your message across for some future endeavor—preview—try to be CRISP about it.

*This chapter is based on: Blythe, H., & Sweet, C. (2008). “Keeping Your Class C.R.I.S.P.” NEA Higher Education Advocate, 26(2), 5-8. Print.

 


Author

Charlie Sweet is currently Co-Director of the Teaching & Learning Center (2007+) at Eastern Kentucky University. Before going over to the dark side of administration, he taught American Lit and Creative Writing for 37 years in EKU’s Department of English & Theatre, where he also served as chair (2003-2006). Collabo-writing with Hal Blythe, he has published well over 1000 items, including 15 books, 11 of them with New Forums. Meet Charlie.

 


The Ins and Outs of Higher Education’s Culture of Assessment

Mention a “culture of assessment” to your colleagues in higher education and their response will likely be an eye roll, a sigh, or perhaps even a groan. This is, of course, if they aren’t running away from you. If we’re being honest, we’ll acknowledge this phrase has become a cliché. Many consider it synonymous with accountability, compliance, or bureaucratic accreditation processes – thus the less-than-enthusiastic response from your peers.

Before we decide to discard “culture of assessment” from our vocabulary, it deserves a close examination. What are the ins and outs of this phrase? In other words, what does this phrase imply for faculty and students within a program as well as for outside stakeholders such as central administrators, accreditors, prospective students, and funders?

Assessment for Those Within A Higher Ed Program


For those on the inside, consider Mahatma Gandhi’s definition of culture as something residing in the heart and soul of a group. Although Gandhi was referring to the culture of a nation, the concept of assessment as an ingrained norm is applicable in higher education. From this perspective, assessment is a force within a program or department. It is the subconscious pulse of its existence. Assessment exists as a natural and ubiquitous phenomenon, seamlessly woven into teaching and learning.

A culture of assessment within a program implies faculty routinely establish program outcomes, and student learning is evaluated based on those outcomes. By evaluating outcomes, both students and faculty identify the concepts or skills that have been mastered. Faculty use this information to inform and improve their teaching, and this in turn improves student learning. A culture of assessment equates to a culture of improvement. Assessment, from this perspective, is a deeply rooted and continuous process. When a culture of assessment exists within a program, assessment is so embedded in teaching and learning it may go unnoticed by faculty and students.

Assessment for Stakeholders Outside A Higher Ed Program

But what about the reality of accountability? What does a culture of assessment imply for a program’s outside stakeholders? External stakeholders, whether policymakers, professional accreditors, campus administrators, parents, or funders have a vested interest in a program. From their perspectives, assessment is a process by which a program confirms it is meeting its teaching responsibilities and achieving its student learning goals.

For outside stakeholders, a culture of assessment implies a culture of assurance or evidence. To be assured a program is fulfilling its responsibilities, stakeholders look for evidence of student learning. By definition, evidence offers visible proof that something has occurred. In this case, evidence consists of the outwardly visible indicators of what students have learned. For those outside of a program, a culture of assessment exists only if there is visible and tangible evidence of student learning.

The Dichotomy Facing Higher Education

The dichotomy facing those in higher education is this – provide visible proof of what should be embedded in the heart and soul of a program. If we wish to foster a culture of assessment, we must find a way for assessment to permeate teaching and learning to the extent that it goes almost unnoticed by those within the program, while at the same time making certain that evidence of assessment is conspicuously available to those outside the program.

The outside demands are real and increasing. We must make assessment results at both the course and program level accessible to stakeholders. This provides evidence of student learning as well as evidence of assessment efforts. To external parties, this is clear and convincing evidence indicative of a culture of assessment.

However, if this is the only thing we do, internal parties will continue to view assessment as yet another administrative distraction. In order to build a culture of assessment within a program, discussions of assessment results must become routine and systematic. “Assessment” should appear as a standard item on meeting agendas, and all faculty members should be involved and recognized for their role in collecting and discussing data. Initially, these types of assessment efforts may seem feigned. However, consistency and perseverance will gradually make what was once contrived a natural order of business, and internal parties will come to view assessment as something beyond a response to accountability mandates.

Developing an internal culture of assessment is no small task. Upcoming blog posts will address three critical components related to building an internal culture of assessment: the collaborative and collective nature of assessment, the use of data to inform program improvement, and removing barriers to faculty involvement.


Author

Dr. Connie Schaffer is an Assistant Professor in the Teacher Education Department at the University of Nebraska Omaha (UNO). She serves as the College of Education Assessment Coordinator and is involved in campus-wide assessment efforts at UNO. Her research interests include urban education and field experiences of pre-service teachers. She co-authored Questioning Assumptions and Challenging Perceptions: Becoming an Effective Teacher in Urban Environments (with Meg White and Corine Meredith Brown, 2016).


An Innovative Plan for Assessing Faculty Development

Sunday night we had supper with assessment guru Peggy Maki, author of the forthcoming Real-Time Assessment, and while she was picking apart her eggplant parmigiana, we were picking her brain about how to assess faculty development. While we didn’t learn anything startling, we received enough help that next year we can try a new form of professional development assessment. One caveat: traditionally, assessment types focus on student learning, so we have had to translate Peggy’s thoughts into faculty learning.

Related reading: How Design Thinking Helps Innovate Faculty Development, Part 2

A Very Short History of Assessing Faculty Development

When we started in faculty development at the beginning of this century, the go-to form of assessment was quantitative analysis. Administrators asked for the number of seats filled in each event and how many faculty were reached during the year (typically 10%). The next phase was the Satisfaction Survey. Along with books and take-aways/hand-outs, faculty were given a short assessment tool with a Likert scale and asked to rate their satisfaction with the event from 1-5. From satisfaction we moved to learning surveys. Same idea, same scale, but with questions such as:

  • Did you learn anything valuable at today’s workshop?
  • Do you plan to implement anything you learned today?

The major problem was our lack of follow-up. Regrettably, we never asked the workshop’s original participants, a year or two later, whether anything they had learned helped their students learn.

Now that postsecondary education is being required by the public, accrediting agencies, and even state governments to demonstrate student learning, professional developers too are being asked to show that faculty participants learned something, that they did something with it, and that greater student learning resulted.

The problem has always been: how do we find evidence that links faculty learning from centers for teaching and learning (CTLs) directly to student learning? Thanks to Peggy we have some ideas.

Related Reading: Using Creative Thinking to Innovate Faculty Development

Some Guidelines

Sunday night Peggy introduced us to another belief of current assessment experts—quick feedback. While providing that feedback is most important for students, especially those in danger of flunking a course or flunking out of school, faculty likewise need some sort of systematic appraisal of their teaching immediately. In short, real-time assessment has become a necessity—the sooner the message is delivered, the faster the professor can aid students.

  1. Obviously, a professional development assessment needs to be immediate. Faculty must get some fast feedback. Early and timely interventions can head off problems later, reinforce good ideas, and point out problems with approaches being used.
  2. Create a manageable cohort. Try to find a group that can be tracked without a lot of hard work and that has some reason to be thought of together. Having early and constant access to the group helps.
  3. Focus on one thing or just a few things to assess. Make it/them fundamental and powerful concepts. Too many assessments try to accomplish too much. Start small.
  4. Faculty, like student learners, need frequent iteration of the fundamental and powerful concept(s). If at all possible, try to theme a semester or even a year.
  5. Emphasize the application of the idea over simply knowing the idea. Figure out a way to evaluate the concept applied/in action.
  6. In dealing with a cohort, try to develop a common language. Use the same words across various opportunities.

A Tentative Plan

Well, as Alice says in Wonderland, “It seems very pretty,” but how do we translate these guidelines into a workable plan? If there is such a thing as real-time assessment, then there’s also real-time planning, which is how you are receiving this material—in real time as we sort through the ideas. Admittedly, our plans aren’t fleshed out, but here’s what we’d like to do in the next academic year.

  1. Immediacy. We’d like to implement a faculty development program next fall that would give our cohort feedback during the same time frame.
  2. Manageable Cohort. Our hope is to use the new faculty who will be joining us in the fall. They are manageable in that the cohort is usually fewer than fifty, they have their newness in common, and, as purveyors of New Faculty Orientation, we usually see them first and collect their email addresses before they even arrive on campus.
  3. One-Thing Focus. As we have just written a book for New Forums called Transforming Your Students into Deep Learners (2016), which provides eight excellent strategies for doing so, a theme of deep learning would be in our wheelhouse. Besides, in her presentation to the faculty, Peggy discussed deep learning, especially in the sense of being able to transfer knowledge, as the goal of higher education.
  4. Frequent Iteration. Since we helm the Teaching & Learning Innovation Series of workshops and most series consist of ten events, each workshop could focus on one deep learning strategy. And we have eight strategies to form the basis for eight workshops.
  5. Emphasize Application. Whatever we present, we will have to persuade our cohort to apply during the fall. How do we assess the student learning resulting from that application? As we said, this plan is a work in progress, and here is the stickiest point, but luckily we are running a professional learning community (PLC) this semester on the latest in peer observation.
  6. Common Language. This guideline is easy to fulfill. We can hand out our book during New Faculty Orientation and even go over the basic concepts and definitions before the semester starts. Transforming Your Students into Deep Learners can function simultaneously as our dictionary and sourcebook. Maybe we should run the entire cohort of new faculty like a PLC.

Conclusion

We have a basic plan, four months, an able body of instructors—ten Faculty Innovators (FIs) to help us—and a retreat with the FIs next month. Don’t you just love it when a good plan starts to come together?


Author

Dr. Russell Carpenter is director of the Noel Studio for Academic Creativity and Program Director of the Minor in Applied Creative Thinking at Eastern Kentucky University. He is also Assistant Professor of English. Dr. Carpenter has published on the topic of creative thinking, among other areas, including two texts by New Forums Press. In addition, he has taught courses in creative thinking in EKU’s Minor in Applied Creative Thinking, which was featured in the New York Times in February 2014. Meet Russell.
