

Quantitative data analysis is an important part of the program evaluation process. Basic statistics help us understand and communicate educational program results.


There are several ways an evaluator can approach statistically analyzing their data, and the approach depends on (1) the research question and (2) the data at hand. For example, the statistical test an evaluator uses can depend on whether they are interested in testing mean differences or predicting an outcome. Further, the test will also depend on the variable type (e.g., continuous, categorical, or interval).


For most evaluation and outcome analyses, the most basic and useful approach is to compare measurements taken at two points in time. A meaningful measure is the mean, or average, of the values collected. For example, for a survey administered at the beginning and end of an educational program that collects data on college students’ STEM efficacy, we could use a t-test to determine whether there is a significant difference between the two means. An ANOVA compares two or more means simultaneously and also takes into consideration the variance within and between samples.
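The pre/post comparison described above can be sketched as a paired t-test. This is a minimal illustration using only the Python standard library; the survey scores are hypothetical, and in practice a library such as scipy would also supply the p-value:

```python
# Sketch: paired (pre/post) t-test on hypothetical STEM-efficacy
# survey scores (1-5 scale), one pair of scores per student.
import math
import statistics

pre  = [2.8, 3.1, 2.5, 3.0, 2.9, 3.3, 2.7, 3.2]  # start-of-program scores
post = [3.4, 3.6, 3.1, 3.5, 3.2, 3.8, 3.0, 3.7]  # end-of-program scores

diffs = [b - a for a, b in zip(pre, post)]        # per-student change
mean_d = statistics.mean(diffs)                   # average change
sd_d = statistics.stdev(diffs)                    # spread of the changes
t = mean_d / (sd_d / math.sqrt(len(diffs)))       # paired t statistic

print(f"mean change = {mean_d:.2f}, t = {t:.2f}")
# Compare t against a t distribution with n-1 degrees of freedom
# (e.g., scipy.stats.ttest_rel computes this directly) for a p-value.
```

A large t statistic, relative to the t distribution with n-1 degrees of freedom, indicates the pre/post difference is unlikely to be due to chance alone.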


Basic statistics that an evaluator may use are:


1. Descriptive statistics: Descriptive statistics are useful when an evaluator has data they would like to describe. Measures in this category include means (averages), percentages, and ranges.


2. Inferential statistics: Inferential statistics allow the evaluator to make inferences (generalizations or predictions) based on the data.

a. Comparison tests: Comparison tests are useful when an evaluator would like to test mean differences between groups or within participants. Common tests include the t-test, the paired t-test (for pre- and post-test data), and ANOVA.

b. Correlational tests: Correlational statistics are useful when an evaluator would like to examine the relationship between variables or predict an outcome. Common tests include correlation, regression, and hierarchical linear modeling.


Tracking time and effort as part of federal grant administration is a best practice, as it provides documentary evidence of the amount of time project staff spend on grant activities. Time and effort forms for project personnel are used for tracking. These forms are usually organized by cost objectives that represent a program, function, activity, award, or work unit for which cost data are desired or required for reporting. Employees may be involved in several federally funded projects or locally funded programs that require tracking by cost objective.


Evaluators may ask federally funded project personnel to share documentation from their time and effort forms or to maintain an implementation log. Implementation logs place stronger emphasis on tracking grant-funded activities than on project personnel time. Logs establish a documentation trail that serves both the project and the evaluation team. Often created in an Excel spreadsheet or an online form, the log contains dated entries in which the project director records each activity, the time spent, and the participants, and notes relevant artifacts (see image). Some project directors place Google Drive links in the artifacts cell, while others provide a file name and location. While logs could be updated daily, some are updated on a more periodic basis.
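A lightweight alternative to a spreadsheet is a plain CSV file. This sketch uses column names inferred from the description above (they are assumptions, not a prescribed format), and the entries are hypothetical:

```python
# Sketch: a minimal implementation log written as a CSV file, with
# columns for date, activity, time spent, participants, and artifacts.
import csv

ENTRIES = [  # hypothetical log entries
    {"date": "2022-09-12",
     "activity": "Kickoff meeting with district leads",
     "time_spent": "1.5 hours",
     "participants": "Project director, 2 coaches",
     "artifacts": "agenda.docx (shared drive)"},
    {"date": "2022-09-20",
     "activity": "Device procurement order placed",
     "time_spent": "0.5 hours",
     "participants": "Project director",
     "artifacts": "PO-1042.pdf"},
]

with open("implementation_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(ENTRIES[0].keys()))
    writer.writeheader()   # one header row of column names
    writer.writerows(ENTRIES)
```

Because CSV opens cleanly in Excel and Google Sheets, a log kept this way stays usable for both the project team and the evaluator at reporting time.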



The value of the implementation log becomes apparent when it is time to produce an annual or final grant report. By the time the reporting cycle occurs, so much has transpired that project details may be forgotten and take time to find. For evaluators, the implementation log establishes timeline documentation and provides evidence of project-related activities ranging from acquisition actions and internal meetings to public events. The log is a primary data source for tracking project implementation and assessing fidelity to the project plan. The time required to maintain an implementation log can be as little as 15-30 minutes a week – a small investment that yields powerful evidence of how federal funds were spent.



Updated: Oct 16, 2022

Grant proposals often ask applicants to address sustainability. Sustainability is defined as the ability to maintain programming and its benefits over time. Projects that demonstrate sustainability are valuable to funders, because the funding provided during the grant period results in long-term changes, programming, or benefits.


Whether you are planning to submit a grant application or you already received funding, planning for sustainability is a valuable activity. The Program Sustainability Assessment Tool, developed at the Center for Public Health Systems Science at Washington University in St. Louis, identifies seven key domains of sustainability. These include:

  • Environmental support: A supportive internal/external climate for the program contributes to sustainability. An example of environmental support is a school district fully adopting, across its schools, an instructional strategy that was implemented under a grant.

  • Funding stability: Grants provide funding for projects/activities to be implemented. Funding stability means there are consistent, non-grant funds to support a program. Examples include a school district committing to replace grant-purchased devices as they wear out, or a district adding funds for a grant-funded position to its annual budget at the end of a grant.

  • Partnerships: Programs are more sustainable when they have developed partnerships with stakeholders. Partnerships can help to broaden possible resources and provide/sustain services. An example of partnerships would be partnering with the local military base and other organizations to offer an annual career fair.

  • Organizational capacity: Throughout a grant term, organizational capacity may be built. Organizational capacity can encompass many areas. One example would be transitioning from a hired professional learning contractor to an internal staff member in the district.

  • Evaluation capacity: Some grant programs require an external evaluator. When an organization builds evaluation capacity, the organization has learned to assess the program, interpret the data, and make data-based decisions on its own.

  • Program adaptation: Over time, the most effective components of a program should be sustained, and a program may need to adapt to allow for this change. For example, a program may implement several instructional approaches, but only some are deemed effective. The program may then keep the approach that is working and adapt it for broader use.

  • Communications: Regular, strategic communication with stakeholders and the public about a program can help gain visibility and garner support. Examples may include developing websites for programs, starting a newsletter, or having regular meetings with community partners.

Project directors can review the domains of sustainability and have conversations with their team about actions that contribute to sustainability in each domain. It is important to note that a project may not demonstrate sustainability in all domains.


The Program Sustainability Assessment Tool (https://sustaintool.org/psat/) is an excellent resource for those interested in planning for sustainability. The Shaffer Evaluation Group uses this tool as part of our approach to evaluating federal grants. To learn more about our evaluation services, please visit our website.



Shaffer Evaluation Group, 1311 Jamestown Road, Suite 101, Williamsburg, VA 23185   833.650.3825

© Shaffer Evaluation Group LLC 2020
