Journal of the NACAA
ISSN 2158-9429
Volume 7, Issue 1 - May, 2014

Evaluation Behaviors, Skills and Needs of Cooperative Extension Agricultural and Resource Management Field Faculty and Staff in New Jersey

Kluchinski, D., County Agent I (Professor) and Department Chair, Rutgers NJAES Cooperative Extension

ABSTRACT

Field-based Cooperative Extension agriculture and resource management personnel in New Jersey were surveyed to determine their program evaluation behaviors, attitudes, skills and needs.  Nearly 80% reported recently conducting some level of program evaluation, and respondents were motivated and willing to conduct such evaluations; lack of time to do so was the greatest barrier.  Self-reported skills and confidence were lower for the upper levels of Bennett’s Hierarchy, and evaluation behavior frequencies mirrored this pattern.  Respondents need more training on a variety of evaluation skills, including selecting evaluation methods, developing survey instruments, and preparing evaluation reports.  Preferred training methods and modes included face-to-face group sessions, web-based materials and webinars.  This needs assessment aided in planning educational training and in gathering and developing relevant resources on program evaluation.


INTRODUCTION

Cooperative Extension continually needs to demonstrate relevance, impact and return on investment to traditional and new funders, especially in an era of declining funding (Decker & Yerka, 1990; Kelsey, 2008; O’Neill, 1998; Rennekamp & Engle, 2008).  Beyond this institutional requirement, Extension professionals have opportunities to share results and impacts in their professional lives (Norman, 2001) and for the personal satisfaction of knowing what impact their work has on others.  These efforts require competency in program development, evaluation and assessment.

Extension program impact assessment has been studied by Bennett (1975; Rockwell & Bennett, 2004), who developed a hierarchy of seven incremental assessment levels, each providing evidence of accomplishment or outcomes. The levels, which are also summarized in the sketch following this list, include:

  • Level 1: Inputs such as staff and volunteer time, salaries and resources.
  • Level 2: Activities such as educational events and promotional activities.
  • Level 3: Participation or people reached and frequency of participation.
  • Level 4: Reactions including interest level, positive or negative feelings about the program, and rating of program activities.
  • Level 5: Learning or KASA (knowledge, attitudes, skills or aspirations) changes.
  • Level 6: Actions or behaviors, such as practice changes, decisions or actions taken, or technologies used as a result of what was learned.
  • Level 7: Impact on social, economic, civic, and environmental conditions intended as end result.
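
For readers who track program records electronically, the hierarchy can be condensed into a simple lookup structure. The sketch below is illustrative only; the wording is abbreviated from the list above, and the structure is not part of the study instrument.

```python
# Illustrative sketch: Bennett's Hierarchy (Bennett, 1975; Rockwell & Bennett, 2004)
# condensed into a lookup table, e.g., for tagging evaluation records by level.
BENNETT_HIERARCHY = {
    1: "Inputs: staff and volunteer time, salaries, resources",
    2: "Activities: educational events, promotional activities",
    3: "Participation: people reached, frequency of participation",
    4: "Reactions: interest level, feelings about the program, activity ratings",
    5: "Learning (KASA): changes in knowledge, attitudes, skills, aspirations",
    6: "Actions: practice changes, decisions taken, technologies used",
    7: "Impact: social, economic, civic and environmental end results",
}

def describe(level: int) -> str:
    """Return a one-line description of a Bennett's Hierarchy level."""
    return f"Level {level}. {BENNETT_HIERARCHY[level]}"

if __name__ == "__main__":
    for level in sorted(BENNETT_HIERARCHY):
        print(describe(level))
```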

 

While numerous educational materials, resources and trainings have been developed and utilized nationwide, the level of reporting of program impact still needs improvement.  A quantitative content analysis (Workman & Scheer, 2012) showed that only 27% of Journal of Extension (JOE) “Feature Articles” and “Research in Brief” articles published between 1965 and 2009 reported practice change (Level 6) as the highest level of evidence, and only 5.6% reported end results or long-term changes (Level 7).

Many factors may influence Extension educators’ ability to conduct such evaluations, including time and resource constraints and lack of confidence (Arnold, 2006).  Training may increase evaluation skills but does not necessarily enhance long-term evaluation practices; therefore, ongoing training using various methodologies is necessary to build required skills and increase their use (Kelsey, 2008).  It is imperative that needs assessment research be conducted when developing collaborative learning environments, to ensure learners are integrally involved in determining content (Lock, 2006; Sobrero, 2008).

Of interest to this author, in his role as Department Chair, was how to enhance the program evaluation skills and activities of agricultural and resource management field personnel in New Jersey.  What capacity did they possess?  What skills did they need?  How did they prefer to learn?  What materials or methods were needed?

To that end, a needs assessment survey of Rutgers NJAES Cooperative Extension field faculty and staff in the Department of Agricultural and Resource Management Agents was conducted to: 1) determine program evaluation behaviors, opinions and motivations; 2) determine personal interest, confidence and need for information in specific program evaluation and impact assessment skill sets; and 3) assess preferred methodologies and modes for personal knowledge gain and training.  These data would help in developing training on appropriate content, delivered using relevant methodologies, to enhance the skills and participation of personnel in program evaluation activities.

 

MATERIALS AND METHODS

The research study was conducted between March 15 and April 19, 2012.  Field-based faculty and professional staff employed with Rutgers NJAES Cooperative Extension’s Department of Agricultural and Resource Management Agents were invited to participate.  A 16-question survey was administered electronically.  Questions focused on a range of topics, including:

  • Frequency of conducting program evaluations
  • Attitude towards evaluation / motivations
  • Perceived organizational commitment to evaluation
  • Sources of program evaluation information
  • Level of confidence/skills in conducting evaluation
  • Desire / need to learn more
  • Barriers to program evaluation
  • Training preferences

 

A modified survey method (Dillman, 2007) was used to conduct the study.  Specifically, subjects received an e-mailed survey solicitation and informed consent message seeking their participation, which included a link to the survey URL.  Forty-two subjects (N = 42) were invited to participate.  After the initial distribution, reminder e-mails were sent to non-respondents each week for four weeks to solicit their participation.  Thirty-four subjects (n = 34) responded, for a response rate of 81%.  Basic data analyses were conducted and composite data are presented.  The survey questions form the basis of the discussion presented below.
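
As a quick check on the figures above, the response rate is simply the number of respondents divided by the number invited. The short sketch below is illustrative only; the variable names are not taken from the study.

```python
# Illustrative only: response-rate arithmetic for the survey described above.
invited = 42    # N: field faculty and staff invited to participate
responded = 34  # n: completed responses received

response_rate = responded / invited
print(f"Response rate: {response_rate:.0%}")  # prints "Response rate: 81%"
```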

 

RESULTS AND DISCUSSION

Frequency of Program Evaluation

Seventy-nine percent (79%) of respondents indicated that they had conducted some kind of program evaluation over the 12 months prior to the survey, while 21% had not.  Survey recipients were asked to indicate how frequently over the past 12 months they had conducted specific levels of program evaluation in Bennett's Hierarchy (Bennett, 1975; Rockwell & Bennett, 2004) (Table 1).  Of particular interest was how many had conducted some evaluation “about half the time” or with greater frequency (“often”, “most of the time” or “always”).  A majority (79%) of respondents indicated they had evaluated participants' reactions to a program (Level 4) at these levels of frequency.  In this study, Level 5 of Bennett's Hierarchy was divided into Level 5a (KASA change evaluated at the end of a program) and Level 5b (participants contacted after time had passed to assess KASA change).  Fifty-five percent (55%) had also evaluated program participants’ changes in knowledge, attitudes, skills or aspirations (KASA) at the end of an event (Level 5a).  However, only 29% re-contacted program participants to assess changes in KASA (Level 5b) sometime after the program was completed.  Forty-one percent (41%) conducted follow-up evaluations to measure changes in behaviors or practices of program participants that occurred as a direct result of their programs (Level 6).  Twenty-one percent (21%) measured changes in long-term conditions (Level 7) at these frequencies; although self-reported, this exceeds the frequency reported in JOE articles by Workman and Scheer (2012).

 

 

Bennett's Hierarchy level of program evaluation | n | Never | Rarely or not often | About half the time | Often or most of the time | Always
Level 4. Asked participants for their reaction to a program at the end of an educational event. | 34 | 8.8% | 11.7% | 20.6% | 47.0% | 11.8%
Level 5a. Measured changes in KASA of program participants at the end of an event. | 33 | 12.1% | 33.3% | 21.2% | 30.3% | 3.0%
Level 5b. Re-contacted program participants sometime after my program was completed to assess changes in KASA. | 34 | 29.4% | 41.1% | 14.7% | 14.7% | 0.0%
Level 6. Measured changes in behaviors or practices of program participants that have occurred as a direct result of my programs. | 34 | 14.7% | 44.1% | 17.6% | 23.5% | 0.0%
Level 7. Measured changes in long term conditions that have occurred as a direct result of my programs. | 33 | 27.3% | 51.5% | 12.1% | 9.1% | 0.0%

Table 1. Frequency of program evaluation by Bennett's Hierarchy level over the past 12 months (percent of total respondents selecting each option).
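
The composite figures quoted in the text (“about half the time” or more often) can be recovered from Table 1 by summing the last three response categories in each row. The sketch below is a minimal illustration using the Table 1 percentages as given; the dictionary layout itself is not part of the study.

```python
# Illustrative only: recovering the "about half the time or more often" composites
# quoted in the text from the Table 1 row percentages.
# Column order: never, rarely or not often, about half the time,
#               often or most of the time, always.
table1 = {
    "Level 4":  (8.8, 11.7, 20.6, 47.0, 11.8),
    "Level 5a": (12.1, 33.3, 21.2, 30.3, 3.0),
    "Level 5b": (29.4, 41.1, 14.7, 14.7, 0.0),
    "Level 6":  (14.7, 44.1, 17.6, 23.5, 0.0),
    "Level 7":  (27.3, 51.5, 12.1, 9.1, 0.0),
}

for level, row in table1.items():
    composite = sum(row[2:])  # "about half the time" plus more frequent options
    print(f"{level}: {composite:.1f}%")
# Prints 79.4, 54.5, 29.4, 41.1 and 21.2 percent, which round to the 79%, 55%,
# 29%, 41% and 21% quoted in the text.
```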

 

Clearly, a high proportion of respondents evaluate participant reactions to their programs, and a smaller but still substantial proportion evaluate KASA at the end of their educational programs.  This may reflect the convenience of “right here, right now” evaluation, since program participants are available in real time to complete the instruments.  However, as more time passes after a program session and a coordinated effort is required to administer a follow-up survey, the frequency of evaluation drops.

Motivation to Conduct Evaluations

When asked what factors motivated their evaluation of programs, 85% of respondents indicated external factors, such as promotion and tenure, performance appraisals and grant reporting requirements, and/or internal factors (73%), including personal interest in knowing about their program quality or in improving their skills.  When asked which of these two types of factors was most motivating, 44% said external factors, 21% said internal factors, and 35% said both were equally motivating.  Therefore, it appears that motivation to conduct program evaluations is not a limiting factor.

Organizational Commitment to Evaluation

It is important that Cooperative Extension professionals believe their organization is supportive of and committed to program evaluation.  Nearly 59% strongly agreed or agreed (Table 2) that program evaluation is deeply rooted in the values of Rutgers NJAES Cooperative Extension, while nearly 18% did not.  Fifty-three percent (53%) strongly agreed or agreed that program evaluation is a normal part of the organization's culture and practice.  Therefore, a simple majority of respondents agreed that the organization is committed to program evaluation and that this commitment is put into practice.  Nearly a quarter of respondents were ambivalent on these two statements, neither agreeing nor disagreeing.  While 24% strongly agreed or agreed that they are positively rewarded for conducting program evaluations, 29% did not (disagreed or strongly disagreed).  Nearly 15% felt they are not expected to conduct program evaluations as part of their programs.  These levels of dissatisfaction and lack of clarity regarding job expectations identify both a need and an opportunity for enhanced messaging by administration.

 

 

Statement | Strongly agree or agree | Neither agree nor disagree | Disagree or strongly disagree
Program evaluation is deeply rooted in the administrative values of my organization. | 58.8% | 23.5% | 17.9%
I am expected to conduct evaluations as part of my programs. | 76.5% | 8.8% | 14.7%
My organization positively rewards Extension professionals for conducting program evaluations. | 23.5% | 47.1% | 29.4%
My administrators are committed to building evaluation capacity within the organization. | 58.9% | 29.4% | 11.8%
Program evaluation is a normal part of my organization's culture and practice. | 53.0% | 29.4% | 17.6%
There are adequate resources within my organization to assist me with program evaluation when I need it. | 20.6% | 35.3% | 44.1%

Table 2.  Ratings of statements about Rutgers NJAES Cooperative Extension's organizational commitment to program evaluation (percent of total respondents selecting each option).

 

Opinions on Program Evaluation and Specific Program Evaluation Behaviors

Survey recipients were asked to share their opinions on several statements related to program evaluation (Table 3).  Ninety-one percent (91%) strongly agreed or agreed that program evaluation is an important part of Extension work, and 82% believed that using the results of program evaluation helps improve Extension programs.  Seventy-nine percent (79%) strongly agreed or agreed that conducting program evaluation contributes to the meaning of one's Extension work.  Only 9% agreed with the statement that they rely on others in the organization to worry about program evaluation.  A philosophical commitment to program evaluation is demonstrated and the motivation to conduct evaluations is clear.  Only 15% stated they would not conduct program evaluations if it were not expected, while 53% strongly agreed or agreed that they would; 32% were uncertain, neither agreeing nor disagreeing with the premise.

When asked about actual program evaluation behaviors, 53% indicated they usually drag their feet about doing program evaluations, and 30% agreed or strongly agreed that doing program evaluation takes time away from the “real” work of Extension.  A strong majority (71%) believed there are consequences of not doing program evaluation (the survey did not specify whether personal, professional or programmatic).  Only 18% indicated that performance assessments of Extension professionals should not include criteria related to program evaluation; therefore, a majority supported including program evaluation criteria in such appraisals.  However, 27% agreed or strongly agreed that Extension professionals are not recognized for conducting program evaluations; a focus group discussion could help identify the source of this discontent.

 

 

Statement | Strongly agree or agree | Neither agree nor disagree | Disagree or strongly disagree
I believe that program evaluation is an important part of Extension work. | 91.2% | 5.9% | 2.9%
I usually drag my feet about doing program evaluations. | 52.9% | 17.6% | 29.4%
I believe that using the results of program evaluation helps improve Extension programs. | 82.4% | 14.7% | 2.9%
Doing program evaluation takes time away from the "real" work of Extension. | 29.4% | 23.5% | 47.1%
Performance assessments for Extension professionals should not include criteria related to program evaluation. | 17.6% | 17.6% | 64.7%
I believe conducting program evaluation contributes to the meaning of one's Extension work. | 78.8% | 15.2% | 6.1%
I believe there are consequences of not doing program evaluation. | 70.6% | 20.6% | 8.8%
Extension professionals in my state are not recognized for conducting program evaluations. | 26.5% | 44.1% | 29.4%
I would conduct program evaluations even if it was not expected of me. | 52.9% | 32.4% | 14.7%
I rely on others in the organization to worry about program evaluation. | 8.8% | 23.5% | 67.6%

Table 3. Opinions on statements about program evaluation and specific program evaluation behaviors (percent of total respondents selecting each option).

 

Sources of Program Evaluation Information

A majority (88%) of survey respondents sought information on program evaluation over the past 12 months from a variety of sources (Table 4), both internal and external to the organization.  Internal assistance came from Rutgers personnel (32%) and colleagues in their own office (38%), while external assistance came from personnel outside of Rutgers (15%), including Extension evaluation specialists outside the organization (6%).  The greatest source of advice was others on their programmatic teams (47%).  Respondents used printed materials that were provided to them (50%), that they owned (29%) or that they borrowed (12%).  A substantial share (38%) used published reports to mirror their evaluation efforts, while about one-quarter (26%) sought information from evaluation websites or webinars.

 

Source of evaluation information | % of respondents
Reference materials provided to me. | 50%
Relied on expertise of others on my programmatic teams. | 47%
Asked colleagues in my office for help. | 38%
Used published reports to mirror evaluation efforts. | 38%
Contacted personnel from Rutgers for evaluation assistance. | 32%
Reference materials that I own. | 29%
Program evaluation websites or webinars. | 26%
Contacted personnel outside of Rutgers for evaluation assistance. | 15%
Other, please specify. | 15%
None. | 12%
Sought Extension evaluation specialists outside of RCE. | 6%
Reference materials that I borrowed. | 1%

Table 4.  Sources of program evaluation information sought in the past 12 months.

 

Confidence in Conducting Program Evaluation

Survey recipients were asked about their level of confidence in conducting Bennett’s Hierarchy Level 4 to 7 assessments (Table 5).  Eighty-seven percent (87%) were moderately or very confident in conducting Level 4 evaluations (program participant reaction), 69% for Level 5a (KASA assessment at the end of an educational event), 51% for Level 5b (time-delayed post-test KASA evaluation), 51% for Level 6 (measuring changes in behaviors or practices), and only 36% for Level 7 (measuring changes in long term conditions).  Confidence levels mirrored the trends in actual practice reported in Table 1.

 

 

Bennett's Hierarchy level of program evaluation | Very or moderately unconfident | Neither confident nor unconfident | Moderately or very confident
Level 4. Asked participants for their reaction to a program at the end of an educational event. | 6.0% | 6.1% | 87.9%
Level 5a. Measured changes in KASA of program participants at the end of an event. | 12.1% | 18.2% | 69.7%
Level 5b. Re-contacted program participants sometime after my program was completed to assess changes in KASA. | 30.3% | 18.2% | 51.5%
Level 6. Measured changes in behaviors or practices of program participants that have occurred as a direct result of my programs. | 30.3% | 18.2% | 51.5%
Level 7. Measured changes in long term conditions that have occurred as a direct result of my programs. | 39.4% | 24.2% | 36.3%

Table 5. Rating of confidence in conducting evaluation by Bennett's Hierarchy level (percent of total respondents selecting each option).

 

Barriers to Performing Program Evaluation

Time prioritization issues were cited most often (by 44% of respondents) as the greatest barrier to performing program evaluations, similar to data reported for Ohio 4-H Educators (Lekies & Bennett, 2011).  Secondary barriers cited included lack of funding, lack of evaluation specialists who could provide support, and other reasons such as “the audience gets annoyed filling out forms”.  The lack of an organized system for evaluation, skepticism about the value of formal evaluation, and the lack of skills, interest or incentives/rewards for doing evaluations were cited as lesser barriers (Table 6).

 

Greatest barrier | % of total respondents
Time prioritization issues. | 44%
Lack of funding. | 12%
Lack of evaluation specialists who can provide needed support. | 12%
Other, please specify. | 12%
Skepticism about the value of formal evaluation. | 6%
Lack of an organized system for evaluation. | 6%
Lack of skills. | 3%
Lack of interest. | 3%
Lack of incentives/rewards for doing evaluations. | 3%
Total | 100%

Table 6. Identification of the greatest barrier to performing program evaluations.

 

Interest in Future Skills Development

Survey respondents expressed a desire to learn more based on their current skill level; for most practices, the most common response was that they “need a bit more skill” (Table 7).  These practices included conducting needs assessments, writing measurable objectives, developing evaluation plans, selecting evaluation methods, developing a survey instrument, choosing sampling techniques, analyzing evaluation data, using evaluation results and preparing evaluation reports.  For only three skills was “need a lot more skill” the most common response: getting a survey protocol reviewed and approved by the Institutional Review Board (45%), testing a survey instrument (50%) and conducting focus group interviews (50%).

 

 

Skill | I know enough now | I need a bit more skill | I need a lot more skill
Conducting needs assessments. | 29% | 53% | 18%
Writing measurable objectives. | 26% | 41% | 32%
Developing evaluation plans. | 12% | 47% | 41%
Selecting evaluation methods. | 9% | 65% | 26%
Developing a survey instrument. | 12% | 59% | 29%
Choosing sampling techniques. | 12% | 47% | 41%
Testing a survey instrument. | 3% | 47% | 50%
Getting review and approval of your survey protocol through Rutgers' Institutional Review Board. | 30% | 24% | 45%
Conducting focus group interviews. | 9% | 41% | 50%
Analyzing evaluation data. | 15% | 50% | 35%
Using evaluation results. | 24% | 50% | 26%
Preparing evaluation reports. | 12% | 59% | 29%

Table 7.  Rating of desire to learn various program evaluation practices based on current skill level (percent of total respondents selecting each option).

 

Preferred Training Methods and Modes

Personal preferences regarding the modes and types of additional training, materials or tools for program evaluation were assessed (Table 8).  Face-to-face group training (65%) was favored over web-based reference materials (35%), web-based training such as webinars (32%), and one-on-one, face-to-face training (26%).  Respondents desired step-by-step guides on how to develop evaluation instruments and protocols (59%) and an effort to integrate evaluation training and planning into existing subject matter teams or working groups (38%), rather than the establishment of a Rutgers NJAES Cooperative Extension evaluation working group (24%).  Nearly 30% favored a workshop on getting review and approval of survey protocols through Rutgers’ Institutional Review Board.

 

Preferred method or mode | % of total respondents
Attend face-to-face group training. | 65%
Step-by-step guides on how to develop evaluation instruments and protocols. | 59%
Integrate evaluation training and planning in existing subject matter teams/working groups. | 38%
Web-based reference materials on program evaluation. | 35%
Web-based training sessions such as webinars. | 32%
Workshop on getting review and approval of survey protocol through Rutgers' Institutional Review Board (IRB). | 29%
Attend one-on-one, face-to-face training. | 26%
Establishment of an RCE evaluation working group. | 24%

Table 8. Preferences for additional program evaluation training materials and tools, and modes of delivery.

 

OUTCOMES

These data provided direction and focus in developing various educational efforts within Rutgers NJAES Cooperative Extension’s Department of Agricultural and Resource Management Agents to enhance personal and institutional program evaluation skills and techniques.  Several face-to-face and web-based in-service training programs were held, and more are planned for the future.  Subject matter teams/working groups were tasked with developing measurable objectives and assessment schema.  In addition, these survey findings were one basis for the conceptualization and development of a website (www.rce.rutgers.edu/evaluation/resources) that contains over 175 annotated resources related to program evaluation and impact assessment.  This repository of training and resource materials, grouped in 12 skill areas, provides a cache of information for use in self-learning, individual consultation sessions and group learning.  These resources may be of particular value to Cooperative Extension professionals nationwide who wish to learn more themselves or to mentor others in enhancing their program evaluation skills and behaviors.  These efforts provide an opportunity for future assessment of changes in evaluation behaviors and skills, and of additional educational needs.

 

REFERENCES

Arnold, M. E. (2006). Developing evaluation capacity in Extension 4-H field faculty: A framework for success. American Journal of Evaluation, 27, 257-269.

Bennett, C. (1975). Up the hierarchy. Journal of Extension [On-line], 13(2). Available at: http://www.joe.org/joe/1975march/1975-2-a1.pdf

Decker, D. J., & Yerka, B. L. (1990). Organizational philosophy for program evaluation. Journal of Extension [On-line], 28(2) Article 2FRM1. Available at: http://www.joe.org/joe/1990summer/f1.php

Dillman, D. A. (2007). Mail and Internet Surveys: The Tailored Design Method (2nd ed.). Hoboken, NJ: John Wiley & Sons.

Kelsey, K. D. (2008). Do workshops work for building evaluation capacity among Cooperative Extension Service faculty? Journal of Extension [On-line], 46(6) Article 6RIB4. Available at: http://www.joe.org/joe/2008december/rb4.shtml

Lekies, K. S., & Bennett, A. M. (2011).  The Evaluation Attitudes and Practices of 4-H Educators. Journal of Extension [On-line], 49(1) Article 1RIB2.  Available at: http://www.joe.org/joe/2011february/rb2.php

Lock, J. V. (2006). A new image: Online communities to facilitate teacher professional development. Journal of Technology and Teacher Education, 14(4), 663-678.

Norman, C. L. (2001). The challenge of Extension scholarship. Journal of Extension [On-line], 39(1) Article 1COM1. Available at: http://www.joe.org/joe/2001february/comm1.html

O'Neill, B. (1998). Money talks: Documenting the economic impact of Extension personal finance programs. Journal of Extension [On-line], 36(5) Article 5FEA2. Available at: http://www.joe.org/joe/1998october/a2.php

Rennekamp, R. A., & Engle, M. (2008). A case study in organizational change: Evaluation in Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 15-26.

Rockwell, K., & Bennett, C. (2004). Targeting Outcomes of Programs: A Hierarchy for Targeting Outcomes and Evaluating Their Achievement. Faculty Publications: Agricultural Leadership, Education & Communication Department, Paper 48. Available at: http://digitalcommons.unl.edu/aglecfacpub/48

Sobrero, P. M.  (2008). Essential components for successful virtual learning communities.  Journal of Extension [On-line], 46(4) Article 4FEA1. Available at: http://www.joe.org/joe/2008august/a1p.shtml

Workman, J. D., & Scheer, S. D. (2012). Evidence of impact: Examination of evaluation studies published in the Journal of Extension. Journal of Extension [On-line], 50(2) Article 2FEA1. Available at: http://www.joe.org/joe/2012april/a1.php