

[Image: Orange detour sign with left arrow on a sidewalk beside an orange barrier, signaling a pedestrian detour.]

When grant project budgets shift, hiring stalls, or partnerships change mid-year, the temptation is to treat performance measurement as “on hold until things stabilize.” That’s risky—especially for multi-year discretionary grants where your annual performance report is the basis for continuation funding decisions. The better move is to redesign your performance measure framework so it can absorb change by measuring functions (what the work is meant to accomplish), protecting core components, and documenting adaptations with a simple decision log you can reuse in performance report narratives, continuation requests, and audits.


How project pivots can disrupt performance reporting

Most evaluation and performance frameworks are “activity-locked”: they assume the workplan stays intact (two convenings, one cohort launch, X workshops by month Y). But federal reporting expectations don’t pause when implementation pivots. Performance reporting guidance generally calls for demonstrating whether substantial progress is being made toward project objectives and program performance measures, since that is what most federal funders use to determine continuation eligibility.


The mismatch between fixed performance measures and implementation pivots creates three common failures in your performance reporting:

  • Your indicators no longer map to what was actually delivered, so performance reporting becomes a “variance explanation” instead of evidence of progress.

  • Your performance report narrative drifts into qualitative anecdotes, even though most federal guidance emphasizes accurate, valid, reliable data and clear statements about the level of success achieved (and contributing factors when goals aren’t fully met).

  • You miss the grants-management side of the pivot: some changes trigger prior written approval requirements, even when you’re not rebudgeting.


Reframe performance measurement around functions and separate core from adaptable components

A pivot-proof performance measurement plan starts with a simple reframing: measure the function of an activity, not the activity itself. For example, instead of “host two employer roundtables,” define the function as “increase access to work-based learning and employer feedback loops.” You can meet that function through roundtables, project sprints, site visits, virtual showcases, or competency-based reviews without blowing up your measurement plan. This aligns with the expectation of federal Uniform Guidance that performance reporting relates accomplishments (and, when required, cost information) to the award’s goals and objectives—and that reports include comparisons against standards and explanations when goals aren’t met.


Then separate core components from what’s adaptable. Core components are your non-negotiables—the population you serve, the essential elements of the service, minimum levels of participation, and any required reporting or quality controls. Adaptable components are how you deliver the work—the sequence, format, tools, and partner roles. When a pivot occurs, be explicit about what is driving the change (e.g., staffing gaps, partner capacity, funding constraints) and document how you are adjusting delivery while protecting the core. This makes your rationale clear, defensible, and much easier to explain in performance reports and to program officers.
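
To make this concrete, here is a minimal sketch of a function-based measure in structured form. The Measure fields, names, and example values are illustrative assumptions for this post, not a prescribed federal schema; the point is that a pivot only swaps items in the adaptable list, while the function, indicators, and core components stay fixed.

```python
# A minimal sketch of a function-based measure, separating core from
# adaptable components. All names and values are illustrative, not a
# federal schema.
from dataclasses import dataclass

@dataclass
class Measure:
    function: str                    # what the work is meant to accomplish
    indicators: list[str]            # evidence of progress, format-agnostic
    core_components: list[str]       # non-negotiables protected through any pivot
    adaptable_components: list[str]  # delivery choices that are allowed to change

employer_engagement = Measure(
    function="Increase access to work-based learning and employer feedback loops",
    indicators=[
        "Students completing an employer-connected experience",
        "Employer feedback captured via rubric or survey",
    ],
    core_components=[
        "Eligible student population",
        "Direct employer interaction",
        "Documented feedback loop",
    ],
    adaptable_components=[
        "Roundtables",
        "Project sprints",
        "Site visits",
        "Virtual showcases",
    ],
)

# A pivot edits adaptable_components only; function, indicators, and
# core_components stay fixed, so reporting remains comparable year to year.
```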


Use a decision log for grants management and performance measurement

If you want a single lightweight practice that improves both performance measurement and grant management, use a decision log.


Uniform Guidance requires notifying funders about significant developments that affect milestones or objectives, and documenting corrective action plans when problems or delays arise (2 CFR 200.329). And if the pivot touches scope, objectives, or key personnel, 2 CFR 200.308 spells out when prior written approval is required.


A decision log (see example below) makes those requirements practical and gives you ready-to-paste language for federal performance reporting.
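
As a sketch of how lightweight this can be: the log can live in a spreadsheet or a few lines of code. The field names below mirror the template later in this post and are illustrative assumptions, not a required federal format.

```python
# A lightweight decision log as structured records, exportable to CSV for
# reuse in report narratives. Field names mirror the decision log template
# shown later in this post; they are illustrative, not a required format.
import csv
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    date: str                 # when the pivot decision was made
    what_changed: str         # the delivery change, in one line
    trigger: str              # what drove it (staffing, partner capacity, funding)
    what_stays_core: str      # the non-negotiables being protected
    eval_implication: str     # how measurement or reporting adjusts
    approvals_artifacts: str  # PO emails, revised timelines, supporting records

log = [
    Decision(
        date="2026-01-18",
        what_changed="Shift cohort intake from spring to rolling",
        trigger="Navigator hiring delay",
        what_stays_core="Eligibility + required advising touchpoints",
        eval_implication="Report partial cohort; redefine 'served' threshold",
        approvals_artifacts="Email to PO; revised timeline; hiring record",
    ),
]

# Export the log so entries can be pasted into performance reports or
# filed with audit documentation.
with open("decision_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(log[0])))
    writer.writeheader()
    writer.writerows(asdict(d) for d in log)
```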


What this means for APRs, continuation, and audits

For continuation of your federal grant, 34 CFR 75.253 is your north star: you either demonstrate substantial progress or obtain approval for changes that still enable you to meet goals/targets without changing scope/objectives.


For audit-readiness, project pivots raise the stakes on documentation. Records generally must be retained for three years from submission of the final financial report (with extensions for litigation or audit findings). And at closeout, final performance and financial reports are due within 120 calendar days after the period of performance ends. A disciplined decision log plus well-defined measures is one of the easiest ways to make “why we changed course” legible years later.
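
As an aside, here is a minimal sketch of that timeline arithmetic with made-up dates; confirm the exact triggers and any extensions in your award terms, since this assumes the general defaults described above.

```python
# Illustrative closeout and retention arithmetic under the general defaults:
# final reports due 120 calendar days after the period of performance ends,
# records retained three years from submission of the final financial report
# (extensions apply for litigation or audit findings). Dates are made up.
from datetime import date, timedelta

period_of_performance_end = date(2026, 9, 30)
final_reports_due = period_of_performance_end + timedelta(days=120)

final_financial_report_submitted = date(2027, 1, 15)
retention_ends = final_financial_report_submitted.replace(
    year=final_financial_report_submitted.year + 3
)

print("Final reports due:", final_reports_due)    # 2027-01-28
print("Retain records through:", retention_ends)  # 2030-01-15
```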


Decision log template (short example)

Date | What changed | Trigger | What stays core | Eval implication | Approvals / artifacts
2026-01-18 | Shift cohort intake from spring to rolling | Navigator hiring delay | Eligibility + required advising touchpoints | Report partial cohort; redefine “served” threshold | Email to PO; revised timeline; hiring record
2026-02-07 | Replace 2 convenings with 6 virtual employer sprints | Travel / partner capacity | Employer engagement function + project-based exposure | Use sprint rubric + participation rate as proxy | Agenda, attendance, rubric scores, partner letter

Looking for an evaluation partner who understands federal grants? Contact Shaffer Evaluation Group and request an evaluation proposal for your federal grant.

Audra Nikolajski, Research Associate


Last year, schools across the United States spent $981.57 billion on education. These funds support students, maintain and improve school facilities, fund programs, and launch new initiatives designed to improve learning outcomes. The scale of this investment reflects not only the work of educators and administrators, but also the combined efforts of policymakers at the local, state, and federal levels.


At the same time, policymakers face a difficult challenge. Education is filled with competing ideas about best practices, promising pilot programs, and anecdotal success stories. Each of these initiatives needs funding. With so many voices and perspectives in the conversation, determining which policies truly benefit students is no small task.


Educational evaluators help address this challenge. By systematically studying programs from implementation to outcome, evaluators provide evidence about what is working, what is not, and how programs can be improved.

 

Who Shapes Education Policy?

A wide range of stakeholders influence the creation and implementation of education policy.


Citizens and Local Governments. Parents, students, teachers, administrators, homeowners, employers, and other community members all have perspectives on the policies and programs affecting their local schools. These perspectives are often communicated to school-level leaders, district administrators, and elected school boards. School boards play a key role in shaping local education systems by approving budgets, adopting policies, and determining which programs and initiatives will be implemented within their districts.


State Governments. State governments play a central role in education policy. State departments of education develop statewide initiatives and use research and data to guide policy decisions. They also establish academic standards, oversee statewide assessment systems, and set accountability requirements for schools and districts. In many cases, states also support policy implementation through funding mechanisms such as grants, formula funding, and targeted programs.


The Federal Government. The federal government plays a more limited role in education policy. The U.S. Department of Education primarily focuses on protecting students’ civil rights, administering federal education funding, and collecting national education data. Rather than directly controlling school policy, the federal government influences education priorities by establishing initiatives and providing grants or funding programs that states and districts can choose to implement.


Researchers. With multiple levels of government and a wide range of stakeholders involved in education policymaking, strong evidence is essential to inform effective decisions. Researchers, including academics, think tanks, nonprofits, and research institutions, play an important role in generating this evidence. Through methods like program evaluation, experimental research, comparative analysis, and policy studies, these organizations help identify what works in education. They also contribute by publishing research, presenting findings at conferences, and facilitating conversations between policymakers and practitioners.

 

What is an Educational Evaluator?

An educational evaluator is a third party who assesses the outcomes of an educational intervention using data collection, stakeholder feedback, project documentation, and comparative analysis. Interventions can include policies, programs, projects, or initiatives aimed at improving student success or experience in some way. Evaluators examine whether a program was implemented as intended and whether the outcomes identified in its logic model were achieved. In short, evaluators determine whether a program produced its intended results.

 

Why are Evaluators Vital to Education Policy?

Without evaluation, policymakers and practitioners cannot know whether policies or programs are achieving their goals. Evaluators can give direct insight into whether new initiatives are working on the ground and performing as expected. They are involved throughout the life of a project and can identify factors affecting implementation, areas to expand in future programs, and where more resources may be needed.


Policymakers operate at a high level, focusing on broad goals and desired outcomes. Evaluators, by contrast, examine how policies are implemented on the ground, and the evidence and testimony they gather can show how a policy should be refined going forward.


For example, imagine a state launches a new literacy initiative and offers grant funding to districts that develop programs aligned with the initiative’s goals. A district might create a high-dosage tutoring program to improve reading outcomes and hire an evaluator to monitor its implementation.


If the program proves successful, the evaluator’s report can serve as a valuable resource for other districts. By documenting which components were most effective, which elements were unnecessary, and what barriers emerged during implementation, the evaluation provides a roadmap that policymakers and educators can use to guide future programs.


Evaluators therefore play a direct role in shaping education policy. By documenting both the process and outcomes of educational interventions, they help policymakers understand what works and how successful programs can be replicated or scaled.



If you are a policymaker seeking to understand the outcomes of a policy initiative, or a program leader preparing to apply for a grant or evaluate an existing project, consider working with Shaffer Evaluation Group. Contact us today for a free 30-minute consultation at seg@shafferevaluation.com.


[Image: A person holding an open book in one hand and a small globe topped with a graduation cap in the other.]

Patricia Moore Shaffer, Ph.D., Principal & Lead Evaluator


In 2025, many minority-serving higher education institutions (MSIs) found themselves navigating an operating environment few had anticipated: federally funded capacity-building grants, including Title III and V awards, that were modified, disrupted, or canceled outright after implementation had already begun. For project leaders, this introduced a level of uncertainty that went far beyond routine grant management and into core questions of staffing, institutional risk, and trust with communities.


Across multiple projects, our team observed institutions making rapid, consequential decisions—sometimes within weeks—about whether to scale back, pause, redesign, or absorb grant-funded activities. These decisions were not failures of planning or leadership but responses to a funding context that shifted after commitments were made.


Looking across these experiences, several operational lessons stand out.


  1. Grant cancellation risk must now be treated as a design constraint, not a remote possibility. Historically, most capacity-building applications assumed that once awarded, funding would persist through the project period barring major compliance failures. In 2025, that assumption no longer held. Institutions that weathered cancellations most effectively were those that had avoided hard dependencies on grant funds for essential personnel or student services, and that had sequenced activities so early investments retained value even if funding was curtailed.


  2. Grant staffing models must be designed for adaptability. In several disrupted or canceled grants, staffing structures were among the least flexible elements of project design. Grant-specific positions left institutions with limited options when funding conditions changed. Institutions that adapted more effectively treated grant staffing as modular. They relied on internal reallocations, phased hiring, and cross-training, and they defined core functions separately from specific positions. This allowed essential work to continue even as grant staff positions were cut.


  3. Over-engineered project designs proved brittle under disruption. Complex activity structures, layered deliverables, and tightly coupled timelines left little room to adapt when funding conditions changed. In contrast, projects built around a small number of clearly defined core components were easier to scale down while preserving institutional learning. Simpler designs were not weaker; they were more resilient.


  4. Sustainability planning mattered for all projects. In canceled projects, sustainability planning became an immediate concern. Institutions that had already identified which roles, practices, or partnerships were worth sustaining could make faster, more defensible decisions. Where sustainability had been treated primarily as a narrative requirement in the grant application and not revisited after award, institutions were left improvising under pressure.


  5. Trust with communities is built—or eroded—through how institutions manage disruption. Grant cancellations did not occur in a vacuum. Many projects were embedded in long-standing relationships with students, families, communities of color, and regional partners. When activities were paused or withdrawn, communities experienced these changes not as abstract funding shifts, but as broken commitments. Institutions that maintained trust were those that communicated early and honestly about changes, acknowledged impacts on participants, and involved community partners in decisions about what could be preserved or adapted.


What these experiences suggest for minority-serving institutions

The 2025 funding environment underscored a difficult truth: capacity-building grants are increasingly operating in conditions of volatility, even as expectations for compliance, outcomes, and documentation remain high. For MSIs, this compounds long-standing capacity constraints and places additional strain on staff and leadership.


The institutions that navigated cancellations most effectively were not those with the most ambitious designs, but those with the clearest operational priorities. They treated grant plans as adaptable rather than fixed scripts, and they made early design choices that preserved their initiative's integrity when funding circumstances changed.


This is where our evaluation work has increasingly concentrated: supporting institutions not just in designing and evaluating grant projects, but in making operational decisions about project implementation when conditions shift. As federal funding landscapes continue to evolve, capacity-building will depend as much on resilience and judgment as on alignment with funder priorities. Those lessons, while hard-won, are now shaping how institutions think about risk, sustainability, and what it truly means to build capacity in uncertain times.


[Image: Neoclassical building with an American flag.]
