

Audra Nikolajski, Research Associate


Last year, schools across the United States spent $981.57 billion on education. These funds support students, maintain and improve school facilities, fund programs, and launch new initiatives designed to improve learning outcomes. The scale of this investment reflects not only the work of educators and administrators, but also the combined efforts of policymakers at the local, state, and federal levels.


At the same time, policymakers face a difficult challenge. Education is filled with competing ideas about best practices, promising pilot programs, and anecdotal success stories. Each of these initiatives needs funding. With so many voices and perspectives in the conversation, determining which policies truly benefit students can be difficult.


Educational evaluators help address this challenge. By systematically studying programs from implementation to outcome, evaluators provide evidence about what is working, what is not, and how programs can be improved.

 

Who Shapes Education Policy?

A wide range of stakeholders influence the creation and implementation of education policy.


Citizens and Local Governments. Parents, students, teachers, administrators, homeowners, employers, and other community members all have perspectives on the policies and programs affecting their local schools. These perspectives are often communicated to school-level leaders, district administrators, and elected school boards. School boards play a key role in shaping local education systems by approving budgets, adopting policies, and determining which programs and initiatives will be implemented within their districts.


State Governments. State governments play a central role in education policy. State departments of education develop statewide initiatives and use research and data to guide policy decisions. They also establish academic standards, oversee statewide assessment systems, and set accountability requirements for schools and districts. In many cases, states also support policy implementation through funding mechanisms such as grants, formula funding, and targeted programs.


The Federal Government. The federal government plays a more limited role in education policy. The U.S. Department of Education primarily focuses on protecting students’ civil rights, administering federal education funding, and collecting national education data. Rather than directly controlling school policy, the federal government influences education priorities by establishing initiatives and providing grants or funding programs that states and districts can choose to implement.


Researchers. With multiple levels of government and a wide range of stakeholders involved in education policymaking, strong evidence is essential to inform effective decisions. Researchers, including academics, think tanks, nonprofits, and research institutions, play an important role in generating this evidence. Through methods like program evaluation, experimental research, comparative analysis, and policy studies, these organizations help identify what works in education. They also contribute by publishing research, presenting findings at conferences, and facilitating conversations between policymakers and practitioners.

 

What is an Educational Evaluator?

An educational evaluator is a third party who assesses the outcomes of an educational intervention using data collection, stakeholder feedback, project documentation, and comparative analysis. Interventions can include policies, programs, projects, or initiatives aimed at improving student success or experience in some way. Evaluators examine whether a program was implemented as intended and whether the outcomes identified in its logic model were achieved. In short, evaluators determine whether a program produced its intended results.

 

Why are Evaluators Vital to Education Policy?

Without evaluation, policymakers and practitioners cannot know whether policies or programs are achieving their goals. Evaluators can give direct insight into whether new initiatives are working on the ground and performing as expected. They are involved throughout the life of a project and can identify factors affecting implementation, areas to expand in future programs, and where more resources may be needed.


Policymakers operate at a high level, focusing on broad goals and desired outcomes. Evaluators, by contrast, examine how policies are implemented on the ground, and the evidence and testimony they gather can show policymakers how to refine and strengthen a policy moving forward.


For example, imagine a state launches a new literacy initiative and offers grant funding to districts that develop programs aligned with the initiative’s goals. A district might create a high-dosage tutoring program to improve reading outcomes and hire an evaluator to monitor its implementation.


If the program proves successful, the evaluator’s report can serve as a valuable resource for other districts. By documenting which components were most effective, which elements were unnecessary, and what barriers emerged during implementation, the evaluation provides a roadmap that policymakers and educators can use to guide future programs.


Evaluators therefore play a direct role in shaping education policy. By documenting both the process and outcomes of educational interventions, they help policymakers understand what works and how successful programs can be replicated or scaled.



If you are a policymaker seeking to understand the outcomes of a policy initiative, or a program leader preparing to apply for a grant or evaluate an existing project, consider working with Shaffer Evaluation Group. Contact us today for a free 30-minute consultation at seg@shafferevaluation.com.



Patricia Moore Shaffer, Ph.D., Principal & Lead Evaluator


In 2025, many minority-serving higher education institutions (MSIs) found themselves navigating an operating environment few had anticipated: federally funded capacity-building grants, including Title III and V awards, that were modified, disrupted, or canceled outright after implementation had already begun. For project leaders, this introduced a level of uncertainty that went far beyond routine grant management and into core questions of staffing, institutional risk, and trust with communities.


Across multiple projects, our team observed institutions making rapid, consequential decisions—sometimes within weeks—about whether to scale back, pause, redesign, or absorb grant-funded activities. These decisions were not failures of planning or leadership but responses to a funding context that shifted after commitments were made.


Looking across these experiences, several operational lessons stand out.


  1. Grant cancellation risk must now be treated as a design constraint, not a remote possibility. Historically, most capacity-building applications assumed that once awarded, funding would persist through the project period barring major compliance failures. In 2025, that assumption no longer held. Institutions that weathered cancellations most effectively were those that had avoided hard dependencies on grant funds for essential personnel or student services, and that had sequenced activities so early investments retained value even if funding was curtailed.


  2. Grant staffing models must be designed for adaptability. In several disrupted or canceled grants, staffing structures were among the least flexible elements of project design. Grant-specific positions left institutions with limited options when funding conditions changed. Institutions that adapted more effectively treated grant staffing as modular. They relied on internal reallocations, phased hiring, and cross-training, and they defined core functions separately from specific positions. This allowed essential work to continue even as grant staff positions were cut.


  3. Over-engineered project designs proved brittle under disruption. Complex activity structures, layered deliverables, and tightly coupled timelines left little room to adapt when funding conditions changed. In contrast, projects built around a small number of clearly defined core components were easier to scale down while preserving institutional learning. Simpler designs were not weaker; they were more resilient.


  4. Sustainability planning mattered for all projects. In canceled projects, sustainability planning became an immediate concern. Institutions that had already identified which roles, practices, or partnerships were worth sustaining could make faster, more defensible decisions. Where sustainability had been treated primarily as a narrative requirement in the grant application and not revisited after award, institutions were left improvising under pressure.


  5. Trust with communities is built—or eroded—through how institutions manage disruption. Grant cancellations did not occur in a vacuum. Many projects were embedded in long-standing relationships with students, families, communities of color, and regional partners. When activities were paused or withdrawn, communities experienced these changes not as abstract funding shifts, but as broken commitments. Institutions that maintained trust were those that communicated early and honestly about changes, acknowledged impacts on participants, and involved community partners in decisions about what could be preserved or adapted.


What these experiences suggest for minority-serving institutions

The 2025 funding environment underscored a difficult truth: capacity-building grants are increasingly operating in conditions of volatility, even as expectations for compliance, outcomes, and documentation remain high. For MSIs, this compounds long-standing capacity constraints and places additional strain on staff and leadership.


The institutions that navigated cancellations most effectively were not those with the most ambitious designs, but those with the clearest operational priorities. They treated grant plans as adaptable rather than fixed scripts, and they made early design choices that preserved their initiative's integrity when funding circumstances changed.


As an evaluation firm, this is where our work has increasingly concentrated—supporting institutions not just in designing and evaluating grant projects, but in making operational decisions about project implementation when conditions shift. As federal funding landscapes continue to evolve, capacity-building will depend as much on resilience and judgment as on alignment with funder priorities. Those lessons, while hard-won, are now shaping how institutions think about risk, sustainability, and what it truly means to build capacity in uncertain times.



Audra Nikolajski, Research Associate


What is Workforce Pell?

In our August blog post, we declared that workforce development was the ultimate focus of federal funding in 2025. This year, we are seeing the implementation of legislative efforts that affirm this focus. In July 2025, Congress passed H.R.1, colloquially known as the One Big Beautiful Bill. Among myriad provisions affecting higher education, school accountability, and student funding, H.R.1 requires the Department of Education to award Workforce Pell Grants to students enrolled in eligible workforce training programs.


Eligible programs provide between 150 and 600 clock hours of instruction delivered over a minimum eight-week period. This is a significant expansion of the Pell Grant structure. Previously, Pell Grants were limited to undergraduate students without bachelor’s or professional degrees who were enrolled in programs lasting at least 15 weeks and 600 clock hours. Under Workforce Pell, students who already hold an undergraduate degree may still qualify for funding, provided they meet other requirements.

 

What Does This Mean for Institutions?

H.R.1 mandates that Workforce Pell be implemented by July 1 of this year, which creates some uncertainty for students, government employees, and educational institutions as regulatory and operational details continue to emerge. On January 9, the Accountability in Higher Education and Access through Demand-driven Workforce Pell (AHEAD) Committee concluded its negotiated rulemaking process to define how eligible programs will be identified and evaluated.


The committee outlined a set of accountability requirements tied to program completion, employment outcomes, and post-completion earnings. These measures will be combined into a single earnings premium test, under which programs that fail to meet standards in two out of three years will lose Workforce Pell eligibility. Accrediting agencies will also be required to review Workforce Pell programs to ensure they meet quality standards. In addition to federal requirements, institutions will need to comply with state-specific criteria determining which programs align with workforce priorities.

 

How Should Institutions Prepare?

1. Evaluate existing programs - Programs lasting at least eight weeks and 150 clock hours may already meet Workforce Pell eligibility thresholds. Institutions should identify programs that qualify, those that may require modification, and gaps in current offerings.


2. Assess program outcomes and data capacity - Eligibility depends on completion, employment, and earnings outcomes. Institutions should assess whether they can reliably track these metrics, particularly for non-credit programs, and identify gaps in data systems or reporting processes.


3. Align programs with state workforce priorities - States will determine which programs qualify as high-skill, high-wage, or in-demand. Reviewing labor market data and employment needs will be vital for successful programming.


4. Plan for program development and funding opportunities - Institutions without eligible programs may consider developing new workforce pathways. Federal, state, and philanthropic grants are likely to support this work, many of which will require clearly defined outcomes and evaluation plans.


As your institution considers evaluating or expanding existing programs, or applying to grants to support your project, consider working with Shaffer Evaluation Group. Contact us today for a free 30-minute consultation: seg@shafferevaluation.com.



 


Shaffer Evaluation Group, 1311 Jamestown Road, Suite 101, Williamsburg, VA 23185   833.650.3825

Contact Us     © Shaffer Evaluation Group LLC 2020 
