The first two questions in presidential performance accountability

How can boards relate effectively to presidents? In my previous post, I offered four key board-CEO relationship questions. Let’s take a brief look now at the first two questions:

  1. Are we clear about the CEO’s job product?
  2. Are we agreed about how we will assess the extent to which key result areas are being achieved?

Job product

In answer to the first question, I recommend that boards take time to develop a written narrative that captures a mutual understanding of the conditions that will exist when the president is effective. This narrative can serve as the preamble to the president’s written position description, which, I also urge, should be cast in terms of responsibilities rather than duties. Accomplishment, not activity, should be emphasized in board-president supervision.

Here is an example of a brief “job product” statement:

The president will be considered effective when the institution faithfully pursues its mission; attracts and retains trustees, administrators, faculty, staff, and students of high mission quality; exhibits trends of financial and enrollment stability; fulfills public and external peer-review quality assurance requirements; and evidences a healthy organizational climate.

Key result areas

The second question asks, Are we agreed about how we will assess the extent to which key result areas are being achieved?

What is a key result area? Take a closer look at the sample preamble statement above and I’ll bet you can identify the key result areas the board would need to find ways to monitor.

  • Mission fidelity: probably perceptual data from major stakeholder groups, including alumni and employers of graduates
  • Trustees:
    • profile (wisdom, witness, work, wealth)
    • attendance/engagement, tenure report
    • board assessment
  • Student enrollment:
    • trends (quantitative and qualitative in terms of “mission quality”)
    • student satisfaction/retention trends (compared to appropriate benchmarks)
    • financial aid “discount” rate trend, benchmark comparison
  • Faculty:
    • mutually agreed-upon “faculty quality” benchmarks (hint: not merely “academic credentials”)
    • faculty compensation and retention (per benchmarks)
    • global student evaluations (i.e., general, non-course specific measures of faculty effectiveness)
  • Staff/administration:
    • numbers (ratios) and qualifications per benchmarks
    • compensation and retention (per benchmarks)
  • Organizational climate assessment/benchmarks (e.g., Best Christian Workplaces Inventory)
  • Financial trends:
    • audit reports, including “governance letter” citations and recommendations
    • financial stability trend ratios and the composite score used by the U.S. Department of Education (USDE); a brief worked sketch of this calculation follows the list
      • primary reserve ratio
      • equity ratio
      • net income ratio
    • gift income support
    • endowment growth
    • deferred maintenance
  • Accreditation/regulatory reviews: recommendations/resolution status
  • Complaints/litigation (if any)
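
Since boards often ask how those three ratios roll up into a single number, here is a rough, illustrative sketch of the composite score arithmetic. It follows the commonly published USDE methodology for private nonprofit institutions (strength factors capped at 3, weighted 40/40/20); the specific factors, caps, and thresholds below are my assumptions and should be verified against current Department of Education regulations rather than treated as authoritative.

```python
# Illustrative sketch (not an official calculator) of how the three ratios roll up
# into the USDE financial responsibility composite score for private nonprofit
# institutions. Strength factors, caps, and weights are assumptions drawn from the
# commonly published methodology; verify against current ED regulations.

def capped(value, high=3.0):
    """Strength factors are capped at 3.0."""
    return min(high, value)

def composite_score(primary_reserve_ratio, equity_ratio, net_income_ratio):
    # Convert each ratio to a strength factor.
    primary_reserve_sf = capped(10 * primary_reserve_ratio)
    equity_sf = capped(6 * equity_ratio)
    if net_income_ratio > 0:
        net_income_sf = capped(1 + 50 * net_income_ratio)
    elif net_income_ratio == 0:
        net_income_sf = 1.0
    else:
        net_income_sf = capped(1 + 25 * net_income_ratio)

    # Weight the strength factors (40% / 40% / 20% for private nonprofits).
    score = 0.40 * primary_reserve_sf + 0.40 * equity_sf + 0.20 * net_income_sf

    # The published score is reported on a -1.0 to 3.0 scale, rounded to one decimal.
    return round(max(-1.0, min(3.0, score)), 1)

# Hypothetical example: a score of 1.5 or above is generally considered
# financially responsible; lower scores trigger heightened oversight.
print(composite_score(primary_reserve_ratio=0.25,
                      equity_ratio=0.50,
                      net_income_ratio=0.02))  # -> 2.6
```

Under these assumed weights, the point for trustees is that a single year’s net income result moves the composite score far less than erosion of reserves or equity does, which is exactly why the trend matters more than any one report.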

Ask the president and his or her team to propose how, and at what intervals, each key result area will be measured and reviewed. Together, these measures comprise a board dashboard that is updated at each meeting (or annually). The dashboard should be short, focused, and graphically summarized. Trends are more important than snapshots, and they should be easy to observe.
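
To make “short, focused, and graphically summarized” concrete, here is a minimal, hypothetical sketch of how one dashboard entry might be structured so that the trend and the benchmark comparison are explicit. The field names, the retention example, and the benchmark figure are illustrative assumptions, not anything prescribed above.

```python
# A minimal, hypothetical sketch of a dashboard entry: each key result area carries
# its measure, review interval, benchmark, and a short trend history so the board
# sees direction of travel rather than a single snapshot.
from dataclasses import dataclass, field

@dataclass
class DashboardEntry:
    key_result_area: str
    measure: str
    interval: str             # e.g., "each board meeting" or "annually"
    benchmark: float
    history: list[float] = field(default_factory=list)  # oldest -> newest

    def trend(self) -> str:
        """Crude trend indicator; assumes higher values are better for this measure."""
        if len(self.history) < 2:
            return "insufficient data"
        return "improving" if self.history[-1] > self.history[0] else "declining or flat"

# Hypothetical entry: first-to-second-year retention tracked against a benchmark.
retention = DashboardEntry(
    key_result_area="Student enrollment",
    measure="first-to-second-year retention rate (%)",
    interval="annually",
    benchmark=80.0,
    history=[74.0, 76.5, 78.0],
)
print(retention.trend(), "| at or above benchmark:",
      retention.history[-1] >= retention.benchmark)
# -> improving | at or above benchmark: False
```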

Remember, generally speaking, data quantity is inversely related to governance clarity.

Ends, ends, ends …

The more a board and president agree on how to monitor the achievement of key results (i.e., ends), the more the board can and should grant the president discretion regarding means. In other words, the remedy for ambiguity about presidential authority is to seal accountability leaks.

In my next post, I’ll attempt to offer some practical guidance on the subject of presidential evaluation.