

Patricia Moore Shaffer, Ph.D., Principal & Lead Evaluator


In 2025, many minority-serving institutions (MSIs) in higher education found themselves navigating an operating environment few had anticipated: federally funded capacity-building grants, including Title III and Title V awards, that were modified, disrupted, or canceled outright after implementation had already begun. For project leaders, this introduced a level of uncertainty that went far beyond routine grant management and into core questions of staffing, institutional risk, and trust with communities.


Across multiple projects, our team observed institutions making rapid, consequential decisions—sometimes within weeks—about whether to scale back, pause, redesign, or absorb grant-funded activities. These decisions were not failures of planning or leadership but responses to a funding context that shifted after commitments were made.


Looking across these experiences, several operational lessons stand out.


  1. Grant cancellation risk must now be treated as a design constraint, not a remote possibility. Historically, most capacity-building applications assumed that once awarded, funding would persist through the project period barring major compliance failures. In 2025, that assumption no longer held. Institutions that weathered cancellations most effectively were those that had avoided hard dependencies on grant funds for essential personnel or student services, and that had sequenced activities so early investments retained value even if funding was curtailed.


  2. Grant staffing models must be designed for adaptability. In several disrupted or canceled grants, staffing structures were among the least flexible elements of project design. Grant-specific positions left institutions with limited options when funding conditions changed. Institutions that adapted more effectively treated grant staffing as modular. They relied on internal reallocations, phased hiring, and cross-training, and they defined core functions separately from specific positions. This allowed essential work to continue even as grant staff positions were cut.


  3. Over-engineered project designs proved brittle under disruption. Complex activity structures, layered deliverables, and tightly coupled timelines left little room to adapt when funding conditions changed. In contrast, projects built around a small number of clearly defined core components were easier to scale down while preserving institutional learning. Simpler designs were not weaker; they were more resilient.


  4. Sustainability planning mattered for all projects. In canceled projects, sustainability planning became an immediate concern. Institutions that had already identified which roles, practices, or partnerships were worth sustaining could make faster, more defensible decisions. Where sustainability had been treated primarily as a narrative requirement in the grant application and not revisited after award, institutions were left improvising under pressure.


  5. Trust with communities is built—or eroded—through how institutions manage disruption. Grant cancellations did not occur in a vacuum. Many projects were embedded in long-standing relationships with students, families, communities of color, and regional partners. When activities were paused or withdrawn, communities experienced these changes not as abstract funding shifts, but as broken commitments. Institutions that maintained trust were those that communicated early and honestly about changes, acknowledged impacts on participants, and involved community partners in decisions about what could be preserved or adapted.


What these experiences suggest for minority-serving institutions

The 2025 funding environment underscored a difficult truth: capacity-building grants are increasingly operating in conditions of volatility, even as expectations for compliance, outcomes, and documentation remain high. For MSIs, this compounds long-standing capacity constraints and places additional strain on staff and leadership.


The institutions that navigated cancellations most effectively were not those with the most ambitious designs, but those with the clearest operational priorities. They treated grant plans as adaptable rather than fixed scripts, and they made early design choices that preserved their initiative's integrity when funding circumstances changed.


As an evaluation firm, we have increasingly concentrated our work here: supporting institutions not just in designing and evaluating grant projects, but in making operational decisions about project implementation when conditions shift. As federal funding landscapes continue to evolve, capacity-building will depend as much on resilience and judgment as on alignment with funder priorities. Those lessons, while hard-won, are now shaping how institutions think about risk, sustainability, and what it truly means to build capacity in uncertain times.



Audra Nikolajski, Research Associate


What is Workforce Pell?

In our August blog post, we declared that workforce development was the ultimate focus of federal funding in 2025. This year, we are seeing the implementation of legislation that affirms this focus. In July 2025, Congress passed H.R.1, colloquially known as the One Big Beautiful Bill. Among myriad provisions affecting higher education, school accountability, and student funding, H.R.1 requires the Department of Education to award Workforce Pell Grants to students enrolled in eligible workforce training programs.


Eligible programs provide between 150 and 600 clock hours of instruction delivered over at least eight but fewer than fifteen weeks. This is a significant expansion of the Pell Grant structure. Previously, Pell Grants were limited to undergraduate students without bachelor’s or professional degrees who were enrolled in programs lasting at least 15 weeks and 600 clock hours. Under Workforce Pell, students who already hold an undergraduate degree may still qualify for funding, provided they meet other requirements.

 

What Does This Mean for Institutions?

H.R.1 mandates that Workforce Pell be implemented by July 1, 2026, which creates some uncertainty for students, government employees, and educational institutions as regulatory and operational details continue to emerge. On January 9, the Accountability in Higher Education and Access through Demand-driven Workforce Pell (AHEAD) Committee concluded its negotiated rulemaking process to define how eligible programs will be identified and evaluated.


The committee outlined a set of accountability requirements tied to program completion, employment outcomes, and post-completion earnings. These measures will be combined into a single earnings premium test, under which programs that fail to meet standards in two out of three years will lose Workforce Pell eligibility. Accrediting agencies will also be required to review Workforce Pell programs to ensure they meet quality standards. In addition to federal requirements, institutions will need to comply with state-specific criteria determining which programs align with workforce priorities.

 

How Should Institutions Prepare?

1. Evaluate existing programs - Programs lasting at least eight weeks and 150 clock hours may already meet Workforce Pell eligibility thresholds. Institutions should identify programs that qualify, those that may require modification, and gaps in current offerings.


2. Assess program outcomes and data capacity - Eligibility depends on completion, employment, and earnings outcomes. Institutions should assess whether they can reliably track these metrics, particularly for non-credit programs, and identify gaps in data systems or reporting processes.


3. Align programs with state workforce priorities - States will determine which programs qualify as high-skill, high-wage, or in-demand. Reviewing labor market data and employment needs will be vital for successful programming.


4. Plan for program development and funding opportunities - Institutions without eligible programs may consider developing new workforce pathways. Federal, state, and philanthropic grants are likely to support this work, many of which will require clearly defined outcomes and evaluation plans.


As your institution considers evaluating or expanding existing programs, or applying to grants to support your project, consider working with Shaffer Evaluation Group. Contact us today for a free 30-minute consultation: seg@shafferevaluation.com.



 


Patricia Moore Shaffer, Principal & Lead Evaluator


Across many of the grant evaluations we’ve conducted, which span capacity-building, workforce initiatives, and K-12 education projects, we've observed some common practices associated with successful projects. How many of these practices do you use with your grant? Use this quick quiz to find out.


How to Score


For each item, give your project:

2 = Yes, consistently

1 = Sometimes / partly in place

0 = Not yet


The 10-Practice Grant Success Quiz


1) We Have a Simple Logic Model or Theory of Change That Staff Can Explain in Plain Language


This is the “map” that keeps a project from becoming a long list of disconnected tasks. The best versions are short, visual, and actively used—not just filed away with the grant application. Logic models are widely recommended as tools for planning, communicating, and evaluating how activities connect to intended outcomes.


2) We’ve Narrowed Our Performance Measures to a Small Set That We Actually Use


Many grants struggle under the weight of too many indicators. Strong projects identify a manageable handful that directly reflect the outcomes in the logic model. This approach to performance measurement also improves reporting clarity and supports real improvement during implementation.


3) Our Roles and Decision-Making Are Clear


Successful grants rarely depend on heroic effort from one person. They rely instead on clear lines of responsibility: who participates in decision-making, who owns each grant deliverable, and who is accountable when timelines slip.


4) We Run the Project with a Steady Cadence (Not Just Bursts Before Reports)


A predictable rhythm—monthly check-ins, short action logs, and visible next steps—keeps momentum and reduces last-minute chaos. This aligns with continuous improvement approaches that emphasize routine documentation and follow-through.


5) We Track Risks Early and Revisit Them


Staff turnover, procurement delays, partner shifts, and seasonal constraints can derail even well-designed projects. Proactive risk management is a recognized grant management practice to protect timelines, budgets, and compliance.


6) We Know Our “Core Components” and Protect Them


Strong programs are consistent where it matters most. That means identifying the few essential elements that must be delivered with fidelity, even if other parts of the program vary by site or participants.


7) We Allow Smart Local Adaptation—and Document It


The strongest projects don’t confuse flexibility with drift. They distinguish between acceptable adaptations and changes that would weaken the model. This “fidelity and fit” balance is especially emphasized in education grant contexts.


8) Our Data Process Is Realistic for Our Staffing and Context


Good data systems are simple, routine, and built to survive busy seasons, staff changes, and competing priorities. The best ones focus on a small set of measures (see #2), assign clear ownership, and follow a predictable schedule so data is ready well before reporting deadlines.


9) Partners Are Integrated into Implementation—not Just Listed in the Proposal


Effective partnerships show up in the work, not just the application narrative. Roles are clear, timelines are shared, and partners have regular touchpoints where they help solve problems and shape adjustments. When partners are truly embedded, they expand capacity and increase the odds that key activities will continue after funding ends.


10) We Began Sustainability Planning Early


This is one of the most consistent predictors of long-term impact. Sustainability should be planned from the start of a grant, not in the final year.


Your Score (0–20)


13–16: Good Foundation with a Few Stress Points. Your project is likely to deliver most core outcomes, but it may be vulnerable to turnover or scope creep.

0–8: High Risk, High Opportunity. The good news: small changes can make a big difference quickly.


If your score suggests a few of these practices need tightening, that’s good news. Most grants don’t need a redesign—they need a sharper operating system.


Partnering for Success


Shaffer Evaluation Group can help you build that operating system. We support grantees with practical, right-sized evaluation and implementation tools, such as logic models that get used, lean measurement plans, and early sustainability roadmaps. Whether you need a quick mid-course tune-up or a full external evaluation, we’ll help you turn a good idea into a project that runs smoothly, proves its value, and lasts.




Shaffer Evaluation Group, 1311 Jamestown Road, Suite 101, Williamsburg, VA 23185   833.650.3825

Contact Us     © Shaffer Evaluation Group LLC 2020 
