Chasing Success: Chapter 7 - Sound Measurement Is Key to Success

Chapter 7
Sound Measurement Is Key to Success

Centering Families

This section on sound measurement needs to begin with a reflection from Robert Ammerman, our evaluation director at the time and now the ECS scientific director, about a presentation he made to the United Way. Here is how he described the experience:

I was armed with charts and tables, quantifying our work and our achievements. Also presenting was a mother enrolled in ECS. I was struck by the contrast between her poignant and earnest testimony about how ECS had helped her—being more confident as a parent, feeling hopeful about the future, anticipating the healthy development of her baby as compared with the numbers and lines on slides that comprised my presentation. Both of our presentations were important and helpful in describing ECS. I came away reminded of the human and personal impact of our work and a recognition that behind the numbers are stories of thousands of parents and children making meaningful changes in their lives.

With mothers and children at the center of the science and measurement paradigm, there were at least nine primary stakeholder groups across the public and private sectors that influenced the work of ECS: families themselves, our board and staff, provider agencies, home visitors, health care providers, the local community, the scientific community, governmental policy makers, and funders.

Measurement was not only a central focus for ECS; it was also germane to our success and to understanding what constituted success for families. We believed in transparency and eagerly sought input from our families and our home visitors regarding what was working well, what wasn’t, and what questions needed to be answered. This work was grounded in the original stipulation from Cincinnati Children’s then Chair of Pediatrics, Thomas Boat: Children’s would be part of the ECS home visiting initiative only if it had a strong data component.

To that end, we aligned our performance and process measures with defined outcomes that we and our families wanted to achieve. We surveyed families to ensure that we were including the voice of the customer. We asked both the families and the ECS team: What surprises you? What is going well? What challenges exist? What could be done better? Community nonprofits should ask themselves and those they serve similar questions to increase transparency, accountability, and community engagement. Program participation and long-term success depend on delivering the right service to meet the expectations and needs of those they aim to serve.

In addition, over the years, we used information collected by our home visitors and our agencies about the needs, plans, and outcomes of families to make decisions about program operations.

To gauge progress, we monitored the performance of each agency in the organization. In the case of ECS, agencies knew in advance what we would be measuring, so each year when we reviewed their performance, there were no surprises. These periodic reviews were important as we worked to maintain system excellence and stability. Everyone understood that these performance review meetings were fundamental to delivering the ECS mission. Most of the meetings were opportunities to laud success and/or to look for new strategies to address a specific problem. However, on three occasions and after warnings and remediation, we did not renew contracts with agencies not performing at the level we set for the organization. Those were never easy discussions. As an organization, there must be a willingness to confront adverse outcomes, be it a strategy that isn’t working or an agency that isn’t performing up to standards—standards that put the mission at the center of the work.

What We Did: Optimizing the “Three Faces of Measurement”

Measurement has been central to ECS from the day that we enrolled our first family, and that focus included aspects of measurement science brought to us by our partner Cincinnati Children’s. Functionally, we committed to use what the Institute for Healthcare Improvement (IHI) refers to as the “three faces of performance measurement,” that is, measuring for purposes of: 1) quality improvement (QI); 2) population-level performance monitoring (accountability); and 3) research and evaluation. These three faces of measurement are discussed throughout this chapter (Solberg et al., 1997).

Based on the work of Leif I. Solberg, MD, a leader in quality measurement and an executive at HealthPartners Institute, the framework using three faces of measurement reminds us that the data we collect will be used for different purposes, toward different ends, and to demonstrate different types of accountabilities. So, for example, ECS used data collected by home visitors to design QI projects that would ensure families were receiving the intended components of a home visiting model or that would identify areas where additional training was needed. The same data used as population-level performance monitoring would tell us if our program was improving the rates of early prenatal care, immunizations, or breastfeeding, while reducing rates of smoking or depression. In addition, our data served as the basis for evaluation studies and helped guide development of research projects.

Notably, the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) program is the only program for which federal law requires states to use all three types of measurement: every state that accepts federal home visiting grants must use the three faces of measurement. We were aligned with and deeply committed to this approach, and our work on measurement met and exceeded these requirements for years before they became law. In the long run, this helped ECS and others who are supported by MIECHV funds.

ECS set out performance metrics, success criteria, forms-completion deadlines, quality improvement processes, and systematic data collection and analysis. When possible, we added staff capacity to support this work, particularly people with QI skills, data analysts, and researchers who could secure funds for projects to develop new knowledge and measure our efforts. Each component paid off for ECS and for the field of home visiting.

Our work on QI fit with the priorities of Cincinnati Children’s and was supported and sustained by their expertise in this area of measurement. Initiated and led at the time by Uma Raman Kotagal, MD, Cincinnati Children’s QI efforts in pediatrics were widely recognized as among the nation’s best. That expertise drove the work and training of home visitors, as well as many of our community partnership efforts and our work with primary care providers.

For population-level performance monitoring, we both collected our data and linked it to vital records and overlays of census-tract data to understand service utilization and impact on community/population-level outcomes. The data system designed by ECS (eECS) ultimately became the basis for an improved statewide performance-data system for the state of Ohio.

Our agenda and our work were complex and informed by all three types of measurement. During my 20 years with ECS, the list included collaboration efforts such as home visiting and primary care, community health workers and home visitors, community early childhood teams, and community moms’ groups. We focused on measuring and improving the impact of our work on factors such as birth outcomes, parenting skills, child development, maternal depression, interpersonal violence, substance abuse, tobacco use, readiness for pre-kindergarten, and now even early relational health. We pushed to understand precision in home visiting—what works for whom and how best to respond to the needs and plans of families. The measurement agenda also focused on aspects of program operation, including outreach and enrollment, participation and retention, home visitor training, equity and inclusion, and connections to other services. In each case, we used the tools of QI, performance monitoring, and evaluation and research to advance knowledge of what works and to improve the work of ECS.

ECS operated with the understanding that “what gets measured gets done.” Our first board chair, Gibbs MacVeigh, a retired financial officer, said loudly and clearly at least once a month that if something wasn’t working, as shown in the data, we needed to stop doing it—whatever it was.

The key then was to know what wasn’t working that could be made visible in the data. With a strong mandate from Cincinnati Children’s to have robust data for QI, performance monitoring, and evaluation and research; the opportunity to hire an outstanding research staff; reliable data collected in our unique data platform; and strong leadership from staff and advisors, we were in a position to know what needed to be improved.

We measured to gauge quality, program performance, effectiveness, impact, and return on investment, and to test ideas that had relevance to the home visiting field writ large. The key questions were: What should we measure? How should we measure it? And most critically, what constitutes evidence of success, and what is actionable?

How We Built the Measurement Approach

Scientific Advisory Committee (SAC)

Soon after ECS was formed, we assembled a Scientific Advisory Committee (SAC) to guide our research activities. It was initially led by James Greenberg, MD, co-chair of the Perinatal Institute at Cincinnati Children’s and a longtime member of the ECS board. Experienced in all three faces of measurement, he was an excellent choice to chair this committee. Greenberg was joined by five other board members, community representatives, and affiliated faculty from Cincinnati Children’s and the University of Cincinnati College of Medicine. The affiliated faculty represented the areas of behavioral medicine and clinical psychology, biostatistics and epidemiology, biomedical informatics, general pediatrics, and speech-language development at the Reading and Literacy Discovery Center. Staff work was provided by the ECS research and evaluation team, led by Putnam and Ammerman.

We deliberately included community and business representatives who were not part of our academic community in our SAC contingent—we wanted the voice of the for-profit and community perspective as well as the voice of professionals who brought academic content knowledge and strategies for study design. The charge for the SAC was to oversee the scientific mission of ECS, including identifying future projects, involving other investigators, validating research requests coming to ECS from other individuals and organizations, and working with staff to prepare and monitor the research agenda.

The SAC allowed us to highlight the ways in which the ECS focus on measurement, including QI, performance monitoring, and research, was synergistic with other activities at Cincinnati Children’s. ECS brought valuable resources, including our family cohort, an extensive data file, research infrastructure, a collaborative approach, community contacts, and ability to obtain grants. We hoped to underscore our value to Cincinnati Children’s and find multiple ways to work together.

The ECS board’s challenge to the SAC was not trivial. We wanted them to help us generate research ideas and not just listen to us report about our current activities. They were asked to serve as emissaries from us to the Research Foundation at Cincinnati Children’s and to their colleagues across the country, letting them know about our trustworthy infrastructure and our value to Cincinnati Children’s and its research agenda. We were eager to foster additional collaboration and let SAC know that we needed their help.

At first, as the group coalesced, their responses were largely observational, but as they—and we—learned more about each other, the recommendations, comments, and questions became more pointed and more challenging. They asked: Can you explain the what and why for findings that you highlight? Is your research work improving the home visiting field and/or improving ECS itself? Are you only examining specific outcomes or trying to understand why home visiting works? Do you know what level of service is needed to produce the outcomes you seek? Have you identified an investment strategy to fund ECS research and development, work that cannot be sustained without external funding? All were questions that focused on what can and should be done. The SAC developed a set of eight guidelines for proposals coming from scientists outside of ECS:

  1. The research must advance the field in a meaningful way.
  2. The proposed study must not put undue burden on home visitors and families.
  3. The research findings must be relevant to ECS and have implications for how we provide the service.
  4. The proposed study must not conflict with ongoing or planned ECS research.
  5. Expenses for the research must be covered by the grant.
  6. The scientific quality of the study must be high.
  7. The investigator and investigative team must have a strong record of scholarship, grant writing, and publications.
  8. Working with the investigative team should forge collaborative relationships that benefit ECS.

The SAC members debated whether a proposed study would be important for ECS, for funders, and for the home visiting field. They also debated whether obtaining the funds to do the work was feasible, whether the research line of inquiry was one where we had a track record and familiarity with the existing data, whether we had the infrastructure to do the work, whether we had or needed pilot data, and whether the ECS program and the home visitors would be able to support the undertaking.

They encouraged us to focus, to delve more deeply into a few home visiting areas rather than a wider range. But they, and we, knew that some of the questions we needed to answer were not optional, which meant that sometimes before we actually got into what we would like to pursue, we had to comply with mandates from funders.

Learning from Experts

The work of the SAC was supported by the ECS measurement team that monitored daily activities and the ECS evaluation committee, which was made up of representatives from ECS provider agencies. The agenda for the ECS measurement work was developed to meet the needs of the multiple constituencies invested in ECS and its outcomes.

In February of 2008, we contracted with Anne Duggan. Through her work based at Johns Hopkins University, Duggan had been a research and evaluation leader in the home visiting world for decades and served as the leader of the federally funded Home Visiting Applied Research Collaborative (HARC). We asked her to help us think creatively about how to address design and measurement issues for future ECS research. She understood that we were operating with a dual mission: high quality services for people in need and scientific rigor for program operation that would move the home visiting field forward. Duggan was one of the first researchers to reference what is called the “black box” of home visitation—what are the key elements needed in program implementation to ensure program success? Years later, there is still little agreement regarding what actually is in that black box.

Among Duggan’s recommendations for us, three became a major part of our research activity going forward: 1) answer the questions of how home visiting works, for whom, and how effective models can be taken to scale with fidelity; 2) monitor the process of service delivery and the completeness and accuracy of data collection; and 3) design observational and intervention research around the factors that influence and promote fidelity. Substantial work is still needed in all three areas.

As we wove together the types of measurement with the questions posed by Duggan, we embarked upon the sizable task of identifying key success factors for families—what are the essential outcomes for a family in a specific time period? Who defines success? Why are we in business?

Defining Success

What Constitutes Success?

We aimed to define success, at least for ECS and our families. Certain performance indicators and outcome measures were required by the federal and state programs, based on a review of the science. Local programs such as ECS had to put these into practice in real-time measurement activities, one of the most important yet continually challenging tasks for our measurement team. In addition, we knew it was essential to understand and consider what families themselves would view as success, as well as what outcomes were valued by the community. Taken together, these inputs created a template for what to measure and even how to measure it. Yet most difficult to answer were the companion questions: Which outcomes or conditions are predictive of future health and well-being? What matters?

Outcome data are important for families so that they can gauge their progress and work toward goals that they set for themselves in conjunction with their home visitors. As we began to think more intentionally about what information would be most valuable for families, we embarked upon an activity that sounded simple, but as we explored the possibilities, the complexity became obvious. Which were the right success criteria? Should they be considered by the age of the child? Were we collecting enough data to support a valid finding? We had many meetings to discuss the facets of the process, the outcomes, and the priorities.

Being a somewhat atypical nonprofit, we formed an internal success-criteria committee charged with creating four groups of success markers based upon the age of the child: prenatal, birth to 12 months, 13–24 months, and 25–36 months. For each age group we identified two to three dozen possible criteria to include. Discussing the merits of each criterion became the agenda for a series of challenging meetings. All of the criteria were important. How to choose? Finally, we voted and used the results of our vote to create what we called the Success Priority Checklist. The list provided a framework for home visitors to know what to concentrate on for each time period. The home visitors were to ask themselves, “If you can accomplish nothing else right now, what should you focus on?”

The Success Priority Checklist became a useful and unique management tool for the home visitors. We produced a family success report for each home visitor and her caseload and then a separate individual Success Criteria Report Card for each family. Home visitors and families could see graphically how they were progressing.
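
As a rough illustration, the checklist-and-report-card idea described above might be structured as follows. The age bands come from the text, but the individual criteria and the code itself are hypothetical examples, not the actual ECS checklist:

```python
# Hypothetical sketch of a Success Priority Checklist and a per-family
# Success Criteria Report Card. Criteria shown are illustrative only.

SUCCESS_CRITERIA = {
    "prenatal": [
        "early prenatal care begun",
        "tobacco-free pregnancy",
    ],
    "birth to 12 months": [
        "immunizations up to date",
        "developmental screening completed",
        "safe sleep practices in place",
    ],
}

def report_card(age_band, completed):
    """Return (criteria met, criteria total) for one family in one age band."""
    criteria = SUCCESS_CRITERIA[age_band]
    met = sum(1 for item in criteria if item in completed)
    return met, len(criteria)

# One family's report card for the first year:
met, total = report_card("birth to 12 months",
                         {"immunizations up to date",
                          "developmental screening completed"})
print(f"{met} of {total} priority criteria met")
```

A caseload-level family success report could then be produced by aggregating these per-family tallies for each home visitor.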

We moved from creating a full list of the success criteria, to taking a more holistic approach to prioritizing the criteria by age of child, to identifying the measures that would allow us to document performance. For example, defining what needs to be achieved by age one sounds simple, but it was extraordinarily complex, as so many factors lead to success at any age.

The success criteria were important for ECS, as we endeavored to make each home visit as meaningful as possible. And the measures played a role in our evaluation: as data accumulated, they would allow us to holistically answer questions for the program. In addition, the measures were intended to help the family understand how well they were doing. They focused attention on what the family and the program aimed to accomplish, charging the home visitors to partner with families to reach those aims.

We wanted to be sure that when a family ended participation in ECS, they left being aware—and proud, we hoped—of what they accomplished. As a part of our success priority program, each family received a gold embossed certificate with personal signatures for each completed phase of the program, emphasizing accomplishment and celebration.

However, late in my tenure at ECS, we had to discontinue this activity because most of our data were being entered into state systems rather than our eECS system, and we were unable to generate the reports needed for the family success certificates. We lost a valuable part of our family celebration when the certificates were no longer available. Knowing that many of our families displayed the certificates they had been awarded in an honored place in their home, one of our board members suggested that we create a colorful and engaging chart for each family to keep in their homes to measure progress. The chart added a visible roadmap of success and celebrated accomplishments, a practical example of program co-design turning a problem into a solution.

The idea behind the success criteria was to begin to concretize, in a way that could be measured, what we were doing, where we were having success, and what we needed to do better. This type of iterative thinking, and even “five whys” questioning that gets to root causes and better responses and performance, has value. Thinking intensively about the questions is needed to clearly define the problem and the effective solutions. Any strategic planning and quality improvement activities need to be guided by this type of questioning. While in our case it was about children, this general approach could be used in any community nonprofit endeavor.

Quality Improvement

Within ECS, we wanted to employ QI strategies to determine how to effect change—to correct what wasn’t working. But before we could even begin, we needed to make sure that we could obtain baseline data that was accurate and reliable so that we could identify gaps and problems. This meant that, in nearly all cases, the home visitors who would be collecting the data needed additional training because they were only nominally aware of QI methods. We engaged the agencies individually and as a group, sharing the time of a quality improvement consultant from the Cincinnati Children’s James M. Anderson Center for Health System Excellence, who led the work. The home visitors were being asked once again to look at their work differently, but to their credit they were enthusiastic about what they might be able to learn.

We were treading on new ground for quality improvement as well because we would be implementing it with community-based agencies rather than in a more restricted hospital or clinic setting. We were dependent upon the commitment of the agencies and the home visitors to embrace this new concept and deploy it thoughtfully, adding another dimension to their already complex work schedules.

The fact that these agencies had been working with us for multiple years, conscientiously collecting data, meant that they had experience in that aspect of what the QI work would require. Further, they trusted that what we were asking them to do was worthwhile and would allow them to add an innovative aspect to their experience as professional home visitors. Over time, the list of projects that ECS was able to address is testament to the home visitors’ willingness to learn how to incorporate QI into their work and to help find answers to improve the program, and thus the family experience.

Many workers in a variety of community settings have now been trained by Cincinnati Children’s to use QI strategies. But when we began, we were alone in implementing QI across eight community-based agencies, using our team of home visitors. Julie Massie was the ECS quality assurance specialist. She functioned as a liaison with the agencies and home visitors to explain what we were doing, secure their cooperation, and provide tech support as needed for this effort. She was perfect for this role—warm, friendly, smart, and well-trained in QI. If Massie asked you to do something, even if you really didn’t want to, you did it rather than disappoint her. She was patient as the home visitors learned a new skill, and she was always willing to demonstrate the value in what they were being asked to do. At the same time, she brought the methodological rigor and focus needed in a QI consultant or team leader.

In any QI project, a key to success is to involve those closest to the work in a process of testing and learning. We hosted teams of home visitors and supervisors who met on a regular basis, combining learning about QI methods and sharing ways to integrate them into their daily routines. The teamwork needed to be fun as well as educational so that the work was seen as an opportunity rather than a burden. The teams shared best practices at our training fairs and put together what they called Activity and Concept Tables to show other ECS agencies what creative strategies and materials had worked for them.

In addition to the teamwork and training fairs, we initiated Cafe Conversations, QI Tips of the Week, and What’s in Your Trunk, where home visitors could show off what they were carrying around in their cars. They challenged each other to answer questions like, “What are ten things that you can do with a set of six blocks to encourage child development?” Or they played Grammy in the Room to share ideas about how to manage well-meaning friends and family who were providing incorrect advice for the mom and/or distracting her during the home visit.

As our QI work began, we needed a way to track results visibly and clearly. The Red Green Chart was born, brought to us by private sector ECS volunteer Alan Spector, formerly of P&G and with a career focused on quality improvement in private industry.

He informed us that there were six keys to success in quality assurance: unifying long-term direction and strategy; driving improvement with data; data transparency/visible accountability; continual improvement and breakthrough learning from mistakes; periodic assessment; and renewal. It was Spector who taught us that mistakes can be gifts, not failures.

It was also Spector who helped us think through the concept of transparency. What data do we share with ECS’ stakeholders such as agencies and board members? Do we share cumulative numbers for the full group of agencies or for each individual agency? We elected to begin by releasing individual agency information at a lead agency meeting. You can imagine the anxiety, ours and our agencies’. Was it data for accountability or data for improvement? Across the top of the report were the agency names, and down the left column were the performance metrics with targets. Color blocks could be red, yellow, or green. Agency performance was clear—nearly in technicolor. With credit to our managers, they took the charts, which we updated quarterly, and used them as we had hoped and intended as performance-improvement documents. Examples of those charts and figures are shown on the next several pages.

Every Child Succeeds Sample Quality Indicator Charts

[Figure: Q1 Trend Report Sample 2012. ECS Quality Indicator Report, Agency 1. Quarters: Q1 = Jan–Mar, run in May; Q2 = Apr–Jun, run in Aug; Q3 = Jul–Sep, run in Nov; Q4 = Oct–Dec, run in Feb. Notes: iteration changed from 24 months to 21 months of age; transition to ASQ 3rd edition in July 2010, so data do not represent 12 months. © Every Child Succeeds, Inc., 2012]

[Figure: Q1 Red Green Chart Sample 2012. Every Child Succeeds Quality Indicators, 04/01/11–03/31/12. The QI Red/Green Chart is an internal quality improvement document for use by ECS administration, agencies, and home visitors. It is not for general distribution outside of ECS, and it should not be interpreted as a summary of ECS outcomes. © Every Child Succeeds, Inc., 2012]

At the time, that simple chart was revolutionary because, for the first time, we were recording the performance of our agencies on key metrics, and we were doing it quarterly, noting trends and progress toward the targets we had established for ourselves and sharing results. We built in an active feedback loop providing information to agencies that was specific to families, caseloads, agencies, and the overall system. Cumulative numbers were shared at monthly lead agency meetings so agencies could see how they measured up against their peers. The Red Green Chart, a point-in-time measurement, became famous among us and worked for several years. It was the single most important document that we had to demonstrate that we were doing what we promised. It allowed us to know what was or was not working and where there was room for improvement. And it emphasized our commitment to transparency.
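
A minimal sketch of the color logic behind a chart like this might look as follows. The metric names, targets, and the yellow-band threshold are illustrative assumptions, not actual ECS values:

```python
# Hypothetical Red/Green Chart logic: one row per agency, one color block per
# performance metric. Thresholds and data are invented for illustration.

def status(value, target, warn_band=0.10):
    """Return 'green' if the target is met, 'yellow' if within warn_band of
    the target, and 'red' otherwise (assumes higher values are better)."""
    if value >= target:
        return "green"
    if value >= target * (1 - warn_band):
        return "yellow"
    return "red"

targets = {"early prenatal care": 0.80, "immunizations up to date": 0.90}
agency_results = {
    "Agency 1": {"early prenatal care": 0.83, "immunizations up to date": 0.78},
    "Agency 2": {"early prenatal care": 0.74, "immunizations up to date": 0.88},
}

for agency, results in agency_results.items():
    row = {metric: status(value, targets[metric])
           for metric, value in results.items()}
    print(agency, row)
```

Regenerating such a table quarterly and sharing it across agencies is what created the point-in-time accountability the chapter describes.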

The broad release decision was an excellent one and grounded in what most serious efforts have found: Provider agencies learned from each other; healthy competition was engendered; and collaborative learning became a core process for us. When we moved to the more sophisticated, systematic, and ultimately more effective Institute for Healthcare Improvement (IHI) system, we knew that it was better, but many of us still valued the clarity and simplicity of the Red Green Chart.

We migrated to be compatible with Cincinnati Children’s child-data platform, reluctantly at first. For years, the Red Green Chart was how we measured quality and how we answered the “what” questions. Our data and efforts would help us with the “why”—finding the answers.

The IHI Model for Improvement allowed us to build upon the Red Green Chart by not only noting where targets were not being met but also offering a way to improve performance. Even though our unit might have initially been an “n of one,” or a small test of change, we knew that by moving slowly and carefully, we were constructing a firm foundation for modification. In the P&G vernacular, this was described as “make a little, sell a little, learn a lot.” And the IHI model prevented big mistakes. Better to make limited errors than to launch a community-wide change without being aware of the potential hazards. The IHI model for improvement asks three questions:

  1. What are you trying to accomplish? (Set a SMART aim) We identified specific, measurable, and time-bound projects that would help us solve a problem or identify an innovative approach that could be spread across other home visitors and agencies.
  2. How will we know that the change is an improvement? (Measure) We used small samples to provide quick results and minimize disruption to current work, and we collected data to track performance. The data helped us identify agencies with best-practice approaches and those that could be improved. Understanding the variation across the system helped guide project development and monitor small-scale testing.
  3. What changes can we make that will result in improvement? (Test) Typically, we tested ideas with one family at a time using the IHI plan-do-study-act (PDSA) cycles to learn what could reliably be replicated. Our initial cycles were on the smallest scale possible. And we followed the IHI guideline to try ideas with one family to help us know which ideas could be adapted, adopted, or abandoned.

For ECS, I added a fourth question: Is our QI work expanding our ability to apply actionable findings? To move from what we know to what we do?
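The cycle logic above lends itself to a simple sketch. What follows is a minimal, hypothetical illustration of how a small test of change might be recorded and judged; the class, field names, and threshold are my own invention, not the actual eECS tooling.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and thresholds are hypothetical.
@dataclass
class PDSACycle:
    aim: str            # 1. What are we trying to accomplish? (SMART aim)
    measure: str        # 2. How will we know a change is an improvement?
    change_tested: str  # 3. What change are we testing? (often an "n of one")
    baseline: float     # rate before the small test of change
    result: float       # rate after the small test of change

    def decision(self, min_gain: float = 0.05) -> str:
        """Adopt a change that clearly improves the measure,
        adapt one that moves it only a little, abandon the rest."""
        gain = self.result - self.baseline
        if gain >= min_gain:
            return "adopt"
        if gain > 0:
            return "adapt"
        return "abandon"

cycle = PDSACycle(
    aim="Raise on-time immunization from 72% to 85% by June 2016",
    measure="% of enrolled children fully immunized by age two",
    change_tested="Check statewide registry before each visit",
    baseline=0.72,
    result=0.78,
)
print(cycle.decision())  # prints "adopt" (gain of 0.06 meets the 0.05 threshold)
```

In practice the “adapt, adopt, or abandon” judgment was qualitative, made family by family; the point of the sketch is only that each cycle pairs an aim with a measure and ends in one of those three decisions.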

For the QI work at ECS, we focused on one or two projects at a time, keeping in mind that the projects needed to be meaningful to the participants and amenable to changes that the home visitor could make. Further, we needed to have baseline data available to track pre- and post-QI results. The focus was always on outcomes, but initially we examined process measures to ensure that the infrastructure was sound for changes we might want to make. Along with Massie, our quality improvement consultant from Cincinnati Children’s and project lead, research director Ammerman and I were also trained in quality improvement strategies by Cincinnati Children’s. This enabled us to assist and guide the work at ECS.

Results of QI efforts were reported on a specified schedule to the ECS board and the program committee using a dashboard structure for easy review. The report included the range of performance measures among agencies and descriptions of interventions tied to each of the results. The report and dashboard were part of our efforts to be transparent, show results, and learn from failures.

Population-Level Performance Monitoring

Measuring success for the whole population served is a critical facet of measurement for any child-and-family-service organization. At a minimum, these measures are typically used to show results, and in many cases, performance is tied to financial incentives or penalties. This is true for home visiting, particularly in Ohio.

Every nonprofit has its own set of metrics to judge performance along with its requirements for accountability and transparency. The point here is that whatever the measures, they must accurately reflect the organization’s activity and outcomes, and leadership must be willing to be candid so that funders and stakeholders know whether the organization is meeting its commitment. If not, how do they plan to improve? The importance of the quality of the organization’s data cannot be minimized or ignored.

Initially, our outcome measures focused on birth outcomes for mothers and babies, child health, developmental progress, positive parent-child interaction, and achievement of life goals. For process, we used referrals, engagement, retention, and operations. We updated the metrics quarterly and shared the results among all agencies, and we began to add in our experience from our QI projects, turning trend numbers into implementation opportunities. Performance monitoring for the population served was an equally important aspect of measurement. The data collected for QI and population-level performance monitoring were the same, yet they were applied in different ways.

Here is a real-life example. In January 2015, we found that only 72% of children actively enrolled in ECS were fully immunized by age two. We aimed to improve that rate to 85% by June 2016. Past performance for the last three quarters had been below 70%, and monthly numbers ranged from 57% to 74%. At that time, roughly 30 ECS children turned two each month. Using QI small tests of change, we engaged in several activities to improve the immunization rate, including partnering with medical providers, improving access to and timely utilization of statewide databases, educating home visitors to be more aware of when children were due to be vaccinated, and reporting progress on the monthly ECS performance reports. This enabled us to meet our goal.
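The immunization example reduces to simple arithmetic in the spirit of the Red Green Chart: compute each month’s rate and flag it against the target. The monthly counts below are made up for illustration (consistent with roughly 30 children turning two each month); only the 85% target comes from the example above.

```python
# Hypothetical monthly counts for illustration; only the 85% target is real.
TARGET = 0.85

monthly = [
    ("2015-01", 21, 29),  # (month, fully immunized by age two, turned two)
    ("2015-02", 18, 31),
    ("2015-03", 26, 30),
]

for month, immunized, cohort in monthly:
    rate = immunized / cohort
    status = "green" if rate >= TARGET else "red"  # Red Green Chart logic
    print(f"{month}: {rate:.0%} ({status})")
```

With cohorts this small (around 30 children a month), a single child can swing the rate by three to four percentage points, which is one reason monthly numbers in the example ranged from 57% to 74% even before any intervention.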

Over time, in line with federal and state-required performance measures and our own priorities for success, we monitored a broader set of measures. This enabled ECS to show impact on an array of important aspects of health and well-being for mothers, infants, and young children. This includes positive results in the following areas.

  • Child health and development: Compared to their peers, children in ECS had the advantage of better child health, including lower rates of infant mortality and prematurity, higher rates of on-time immunizations, fewer developmental delays, and more nurturing and learning with their parents.
  • Improved maternal well-being and life course: Mothers in ECS had the advantage of getting more timely prenatal care, support for smoking cessation, and maternal depression treatment. In line with their life plans, they were more likely than their peers to delay subsequent pregnancies, to complete high school, to be employed, and to have greater earnings.

What Results Were Achieved?

These results from 2017 were typical of what was achieved among the families who participated in ECS. The families served by home visiting fared better than comparison populations in terms of education, employment, maternal and infant health, mental health, early relationships, child development, and more.

More Education and Employment

Mothers in ECS were more likely to:

  • Complete high school: 80% of mothers had a high school diploma by the time their child was 24 months old.
  • Return to work or education following the birth of their baby: More than 70% of ECS mothers returned to school or work.
  • Be employed: 38% of ECS mothers reported employment at time of enrollment, increasing to 77% by the time children reached 15 months.
  • Have increased income: 25% of employed mothers in ECS reported an annual household income of less than $3,000 at enrollment; this rate of extreme poverty decreased to 15% by the time children were 15 months.

Improved Maternal Well-Being and Life Course

Mothers in ECS had the advantage of:

  • Being reached and served in communities with high levels of poverty and disinvestment: Using an intensive engagement strategy in a high-risk Cincinnati neighborhood, ECS increased retention in the program by 30%.
  • Getting depression treatment: Among mothers with major depressive disorder who received Moving Beyond Depression services in ECS, 70% were successfully treated.
  • Delaying subsequent pregnancies: Mothers with robust participation in ECS had a 33% lower risk for rapid, repeat pregnancy than mothers who had lower participation.

Better Child Health and Development

Compared to their peers, children in ECS had the advantage of:

  • Less infant mortality: Babies in families enrolled in ECS were 60% less likely to die in the first year of life.
  • Being born on time: More than 90% of ECS children were born on time, not prematurely.
  • Healthy infant development: Over 95% of ECS babies were developing normally on their first birthday, compared to 82% in similar populations. More completed home visits were linked to higher rates of optimal development.
  • Opportunities to learn at home: 95% of parents were actively involved in their child’s learning.
  • Being nurtured: By 15 months of age, 72% of children lived in homes with a high level of emotional support and nurturing early relationships.
  • Getting ready for school: 86% of families who graduated from ECS had a plan to send their children to preschool.

Source: Analysis of eECS internal data.

Showing results on multiple measures across the ECS population served reflects the purpose and essence of performance monitoring. Publicly funded programs delivered by nonprofit organizations in communities across the country, including home visiting programs, are expected to show results. Moving the needle for the ECS population served is a primary goal, and in some cases, such as in Ohio Help Me Grow, payments or incentives are tied to performance. At the same time, as discussed previously, with programs reaching only 20% of need, it is difficult to move the needle on outcomes for the community population overall. Since performance monitoring relies so heavily on collecting the right data of the highest possible quality, it is worth discussing how we got data to do our measurement.

Some Data Challenges

Finding Baseline Data

As a part of our business-oriented thinking, we knew that we needed to be able to quantify our work. We needed baseline data to assess where we were on day one and then, going forward, we needed a system to house and analyze the information we would collect to gauge our progress.

One story focuses on our search for data to shape the Task Force phase-one recommendations. David R. Walker, a smart, senior leader at P&G, was helping us identify and collect information to construct recommendations for the report. He was stunned when he realized what data did not exist. He let us know that there was no way P&G could conduct its business with such sparse information, and he was sure that we had to look harder and be more aggressive. Finally, Walker decided to send a few P&G people to Washington, DC, to find what we were unable to produce. Sadly, they came back empty-handed. The data problem was not just a local one; it was, and continues to be, a national issue, although it has improved. Too often decisions are made without adequate information.

What the Washington, DC trip did was to underscore the need to require that the new ECS program at Cincinnati Children’s Hospital would have adequate funding to conduct QI, evaluation, and research to improve and guide decision-making for program operation, outcome monitoring, and return on investment.

Help came from Cincinnati Children’s, which insisted on good measurement, research, and evaluation as requirements for its participation. We were able, with the hospital’s support, to secure funding sufficient to develop a sound research focus, high-quality staff, and leadership from two invaluable, nationally known researchers focused on children and families—Ammerman, a PhD clinical psychologist, and Putnam, a physician, psychiatrist, and international expert in child abuse.

As time passed, we learned that what we as an organization encountered was not unique. Too often, even when we know what to do, we cannot do it, perhaps because funding is insufficient, perhaps because public attention is focused in a different direction, perhaps because the need is not obvious.

Our response was just to keep learning, doing what we knew was right and making the case publicly as effectively as we could.

When P&G’s then-CEO Pepper provided testimony to the United States Senate in April 2014 to support The Strong Start for America’s Children Act (a piece of legislation designed to bolster Head Start, child care, pre-K, and home visiting programs), he said, “In business we rarely have the luxury of making an investment decision with as much evidence as we have to support the economic value of investing in early childhood development and education.” So, the case for the value of early childhood investment in children and families for economic development and family well-being was clear.

In 2014, while the new federal home visiting program, MIECHV, had been launched and would collect data and information on QI, performance monitoring, and evaluation for the thousands of home visiting programs now supported by grants to states, the data were not yet in. Many studies of home visiting models pointed to the effectiveness of the services under research conditions (Duggan et al. 2022; Supplee et al. 2021; Green et al. 2020; Greenwood et al. 2018; American Academy of Pediatrics 2017; Minkovitz et al. 2016; Avellar and Supplee 2013; Duggan et al. 2013; Goyal et al. 2013; Olds et al. 1997; Olds et al. 1988). At the same time, Walker found as he looked for data about the effectiveness of specific programs operating in communities that the data/information on their effectiveness were lacking.

Yet, we succeeded in collecting data to an extent we could not have initially envisioned. Since our inception in 1999, ECS had accumulated extensive data for over 700,000 home visits. Carefully collected and validated for accuracy, this data file, we believed, would be a treasure for scientists. We moved the file to Cincinnati Children’s, where it became part of a Maternal and Child Health Data Hub at the Children’s Research Foundation and, over time, was made available to researchers outside of our institution. The file is expected to continue to grow, including not only the initial 700,000 visits but additional data for the approximately 30,000 visits that are made annually to families in ECS. The hope is that it will in time become the kind of data file that Walker searched for over 20 years ago. Such data could serve as groundwork for early childhood system efforts.

The Need for Good Data

Having clean, reliable data for decision-making is fundamental. Yet, many nonprofit agencies have little or no history of collecting, collating, and analyzing data. Nor do they have staff trained in those disciplines to help. There are two reasons: First, historically many nonprofits were not asked for data to show results. Second, in general, nonprofits simply did not have the funds to pay for data collection and management. Many nonprofits were faced with the need to document their effectiveness but lacked the personnel or the resources to do so.

Increasingly, government entities and philanthropists want to see results from their investments and, without an ability to use data to document what is happening, not only are agencies unable to answer the key questions, they also cannot compete well for funds. Gone are the days when an agency can point to happy clients as evidence of success. Funders and policy makers legitimately want to know: What happened? Who was served? What were the outcomes? Was it cost-effective? Were improved outcomes sustained? At ECS, we were fortunate because, under the guidance of Cincinnati Children’s and with funding from the United Way, we were able to build and sustain a vigorous evaluation and research component for our program.

The data issue had two components: Do you have the data? And is it reliable? Data integrity was frequently at risk of being compromised because, as data were entered into multiple data-collection systems, entry errors were possible. As a recipient of both public and private dollars, ECS faced multiple opportunities for error; each transaction had the potential for a mistake. Further, the data collection systems had varying levels of compatibility, and the person entering the data was often dependent on someone else, in our case home visitors and agencies, for information. Repeated checks and balances were required, and staff were needed to do the work.

We needed reliable data for analysis, and therein lies a challenge. Within ECS, we could enlist our provider agencies individually and collectively to check and check again before they submitted data to us. And then we could go over it again. However, not all organizations are able to engage in this labor-intensive process. So, when it comes time to compare or combine data from one organization to that of another, the data may not reflect the same rigor. As data are entered into various systems, the opportunity for error increases, and we were submitting data to federal and state governments, home visiting program models, funders, grant allocators, and program partners.
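The checks and balances described above can be sketched as a small validation pass that drops duplicate entries and flags implausible values before data are submitted. This is a minimal illustration; the field names and thresholds are hypothetical, not the actual eECS or state schemas.

```python
# Minimal sketch of the kind of checks-and-balances described above;
# field names and limits are hypothetical, not the actual eECS schema.
def validate_visits(records):
    """Return (clean, errors): drop duplicate entries, flag implausible values."""
    seen, clean, errors = set(), [], []
    for rec in records:
        key = (rec["family_id"], rec["visit_date"])
        if key in seen:
            errors.append((rec, "duplicate entry"))
            continue
        if not (0 <= rec["visit_minutes"] <= 240):
            errors.append((rec, "implausible visit length"))
            continue
        seen.add(key)
        clean.append(rec)
    return clean, errors

visits = [
    {"family_id": 1, "visit_date": "2015-01-05", "visit_minutes": 60},
    {"family_id": 1, "visit_date": "2015-01-05", "visit_minutes": 60},   # duplicate
    {"family_id": 2, "visit_date": "2015-01-06", "visit_minutes": 999},  # implausible
]
clean, errors = validate_visits(visits)
print(len(clean), len(errors))  # prints "1 2"
```

A pass like this answers only the mechanical questions (duplicated or unduplicated? in range?); the harder questions, who recorded the value and whether it was verified at the source, still require the human review described below.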

To offer some idea of what this looks like, in addition to reports required for grants and fundraising, annually we collected and analyzed data for, at a minimum, 52 reports, many of them mandatory. We maintained a report activity timeline with the following information:

  1. Activity/core evaluation
  2. Description
  3. Purpose
  4. Key tasks
  5. Stakeholders
  6. Why important
  7. Impact if ECS doesn’t do it
  8. Priority
  9. Estimated effort
  10. Other resources needed
  11. Timeline
  12. What/how could ECS improve how this activity is done

For ECS, data and information from these reports were used to monitor agency compliance and performance, to support research grants and evaluation projects, to meet program requirements, and to comply with local, state, and federal regulations.

I remember a conversation from early in my tenure when we were just beginning to receive a sizable amount of data, and our agencies/home visitors were finally comfortable with the data collection processes. Every Monday morning, we had an evaluation team meeting. We presented reports from information that had been compiled by the agencies during the previous week. In this case, the focus was on the number of children who had died while in our program and how we had recorded those deaths. The report was a part of our careful monitoring of our infant mortality experience. Our scientific director and our evaluation director asked—as they always did—dozens of questions about the data: How were they collected? What was missing? Were the numbers duplicated or unduplicated? Who recorded them? Have they been verified? It was a barrage of questions, yet typical of how we analyzed all of our findings so that we could issue complete and accurate reports based on careful review of our forms and inventories.

In the case of the infant death data, the information became a central part of our infant mortality study, ultimately published as an article in the journal Pediatrics (Donovan et al. 2007). It also reinforced the realization that evidence-based home visiting properly implemented can be an important element in reducing the unacceptably high rates of infant mortality in our community. (This was not a finding in other home visiting research published at the time.) As we examined our findings more closely, we realized that not only were our mortality rates low, but there was no racial disparity, bringing to mind a crucial question—why, when we have carefully validated a finding, and know how to deliver the program that leads to that finding, isn’t the program expanded to allow more mothers and more infants to benefit? The answer continues to elude me. It’s clear the science is not being followed, or worse.

A Web-Based System—eECS

Another important facet of the ECS data story is about the creation—and possible dissemination—of a web-based platform that would house our data, create the data files, and allow analysis of the information we would collect. This was essential to managing the volume of data we collected and wanted to use for QI, performance measurement, and research and evaluation.

We were able to secure support—using United Way funds—to contract with the University of Cincinnati to build a web-based data-collection system to meet our data collection and analysis needs. Having looked all over the country for a system compatible with our program, we realized that what we wanted did not exist. Although ECS now works with multiple systems from different states, programs, and research holdings, the locally created eECS seems to be the gold standard.

Jonathan Kopke, a technology genius—he would object to my use of the word—worked with our multiple stakeholders to create a system that he defined as “user-friendly.” Countless hours of testing were needed before even the smallest change was made to eECS. The exhaustive process was replicated over and over again, which is why it worked so well. Often the recommended changes came from the home visitors themselves as they offered new solutions to old problems, confirming that the system truly was user-friendly. eECS has been the place we visited when we have really needed to understand what was happening with our families (Kopke et al. 2003).

Well after we started using eECS, Kopke admitted something that I hadn’t known. When he started on the project in February 2000, he had never created a web-based system. Parts of the ECS system had to be written in five different computer languages: ColdFusion, CSS, T-SQL, HTML, and JavaScript, and he didn’t know any of them. Remarkably, he learned them all and gave us a system that was perfect for our work, user-friendly, and the envy of other home visiting programs. He calls the years with us the best years of his career. Twenty-one years after its creation, he was still supporting the ECS software. As of this writing, a replacement system was being developed by the Biomedical Informatics group at Cincinnati Children’s.

Kopke recently told me:

When we launched eECS, many of the home visitors were uncomfortable about having to use the computer for the first time, so, to ease their minds, I always insisted that if the ECS software crashed, it was my fault and not theirs. And, if any home visitor experienced a crash, I would send them a note saying “thank you for helping us perfect our software. To me, you are worth a hundred grand,” and I would enclose a 100 Grand candy bar. The postage cost more than the candy, but I always imagined the recipient going around to her friends at her agency and saying, “Look at this!” Then they all knew that it was okay to be relaxed about using the software, even if that led to a crash.

Over time, the home visitors came to appreciate eECS as a resource. One of our home visitors commented that her peers across the country did not have access to an eECS-type resource, and in many cases were still using spreadsheets or their state database. Kopke reflected on his time with ECS by saying, “Over all of my years with ECS, there may have been a few bad computer days, but there’s never been a single bad person day.” This is another example of infusing new thinking and professionalism into the role of the home visitor.

In terms of missed opportunities, we believed that eECS had the potential to be replicated for home visitation programs across the country. We tried to sell it, but there were many obstacles that prevented the sale and subsequent development of a source of income for ECS. It was clear that for us to monetize and sell eECS, we needed upfront money without any guarantee that there was a market. Organizations like ours don’t have access to monies that are not specifically allocated to service delivery or program operation. And, of course, we were not a tech company and discovered that we would need other people/organizations to do most of the work. The role and rewards for ECS likely would be small.

Further, an essential question that we asked ourselves was whether we would be trying to sell a product to entities that didn’t have the money to buy it, and whether we had the financial and personnel resources to support growth and continued maintenance of the product. eECS needed a bigger play with an organization able to fully develop it, take it to market, and promote it. It was our idea and we not only created it but also effectively used it for two decades. But the opportunity to make it available to others, to build upon a tested/tried/validated resource, was not available to us. This underscored one of the primary reasons that nonprofits are so limited in their ability to create independent revenue streams and therefore have to rely on funding from government entities or private donors.

Late in my tenure, ECS continued to record data from visits into eECS, but far less than in earlier years. State systems have improved, and if we were to continue to use both systems, their protocol would require our home visitors to enter data twice—once into the state system and again into eECS.

Our eECS provided a flexible independent system for our QI, performance, and evaluation work. The eECS system was scheduled to be replaced with the more state-centric approach, which is not ideal, but necessary. This may be a good example of being careful about what you wish for: For many years we exhorted the states to do a better job of data collection and offered eECS—a tried-and-tested option that worked. However, over time, the states accessed other resources and slowly built their own systems. They operate effectively, and ECS will be required to use them as a condition of accepting state funds. The situation was, as Ammerman aptly described it, “a dog-eat-dog world and we were a teacup Chihuahua.”

Actionable Research

A basic tenet of our three-pronged approach to measurement (QI, performance monitoring, and research and evaluation) also came from the private sector, most notably P&G, which let us know that research was not done to sit upon a shelf but to be acted upon. With our roots in an academic medical institution, we knew about the science, but to support our mission to ensure that all children had an optimal start, we also had to move from the “bench to the bedside,” which in our case was the community. We needed to incorporate what we learned into improved service delivery and/or improved family outcomes. Because we were serving families, this was fundamentally different from the type of research and development that might go into a product such as a new pharmaceutical or vaccine.

Although it may seem elementary, it was essential to us that measurement in the context of ECS concentrate on actionable information and new knowledge. With limited resources, prioritizing investigative work that can actually lead to program changes is imperative (Goyal et al. 2016). Spector, our quality improvement consultant, cautioned us to ask ourselves whether what we planned to study had the potential to be usable and whether the findings would be helpful for home visitors and families.

Once our original quality improvement questions were answered, and we could demonstrate to ourselves and other stakeholders that we were on the right path, something discouraging but not unexpected occurred. In several cases, we identified an actionable finding that would have improved services for families, but scale-up and/or replication did not occur because we didn’t have the funds for implementation. It bears repeating: What we wanted to do and what we were able to do were proportional to the resources available to us and whether we complied with federal, state, and model requirements.

Three Faces of Measurement in Action

The three faces of measurement have been important to the success of ECS from the beginning. They are complementary yet different. QI helped us focus on whether we were doing the right things and how to improve day-to-day operations. Performance monitoring let us know if we were improving outcomes for the group of families served. Evaluation and research constitute a broader inquiry that is not always directed to specific program impact questions. However, the new knowledge generated through research can and should inform improvement.

Data collection and data analyses are further complicated by the near impossibility of using gold-standard, randomized clinical trial research designs because the time frame is too long (children grow up), the expense is high, and most important, clinical trials by definition mean that some children would receive no help at all. So, absent the gold standard but focused and committed to answering questions in a reliable way, we applied all our measurement tools in search of increased understanding.

We kept the three types of measurement in mind—quality improvement, performance and results accountability, and evaluation and research. And to that we added: seeking measurement that leads to an opportunity to use our findings in actionable ways. The challenge for a program like ECS was to use measurement work to guide program operation and deliver better outcomes for families. The RAND Corporation said it best in its 2017 research brief, noting, “The research creates a path for policy makers and a road for researchers” (Karoly et al. 2017). Focusing on actionable research and quality improvement methods showed us how to do it.

While ECS generated much learning relevant to other home visiting programs, most opportunities to scale those findings within our community and with other cities and states went unrealized. Some key actionable findings appear on the following list, but it should be noted that although we found areas ripe for change, we were not always able to move forward to apply them for lasting change to our program or the field of home visiting.

ECS Actionable Results from Outcome Data

  1. Moms who join ECS at less than 26 weeks gestation and receive at least eight prenatal home visits have a 60% reduced risk of preterm birth.

    Our Response: Work more systematically to identify and enroll moms earlier in pregnancy. Using data from our eECS database, we found that in calendar year 2018, 75% of the moms enrolled by 28 weeks received an average of 11 prenatal home visits. This is a good example of an actionable finding that should have been adopted by programs state- and nationwide because what we had demonstrated presented an opportunity to improve the outcomes for moms and babies by reducing preterm birth. Although we disseminated the finding widely, nothing changed at either the governmental or program level.

  2. Moms who became aware of the ECS program as a result of our concentrated work in the Avondale community joined ECS earlier and stayed longer, but we also found that earlier enrollment was not associated with improved parenting.

    Our Response: Restore and expand the community initiative, and develop more-focused parenting interventions. We were able to concentrate more intentionally on parenting, but we were not able to locate public or private funds to grow the community intervention.

  3. Moms with even a moderate adherence to the recommended home visiting schedule display a 30% reduction in repeat pregnancy prior to 18 months.

    Our Response: Use QI strategies to increase the number of moms who accept more home visits. Small tests of change in how home visitors delivered services and engaged families might help increase the number of participants receiving the recommended number of visits.

  4. Children in ECS often struggle with literacy and language skills, as identified through scores on standardized tests and measures.

    Our Response: Build a more systematic approach into the curriculum to boost vocabulary acquisition and literacy. This problem offers insight into a different aspect of an inability to move on known findings because funds were not available to follow up. Research tells us that words and conversation matter even at the earliest ages. We found grant money to contract with the LENA organization to engage families in the important back-and-forth vocalization with infants (one part of what is sometimes called “serve and return” interactions) to shape the developing brain. We were able to conduct the grant work but once again, there was no money to spread the strategies within ECS or to other programs within Ohio or Kentucky.

  5. On-track development for children is associated with lower levels of parenting stress. Parenting stress, trauma, and family violence occur at high rates and are significant issues within ECS.

    Our Response: Elevate addressing trauma as a priority within the delivery of home visiting services, both locally and with home visiting as a discipline. Work to reduce stress more systematically for the mom, especially during the first year of life, by providing trauma-informed training for all ECS home visitors. In this case, the state of Ohio also instituted statewide training for home visitors who were encountering magnified stress levels among parents as a result of the COVID-19 pandemic.

  6. Moms who have experienced trauma are more likely to exhibit depressive symptoms and have low levels of social support. A mom’s depression frequently affects the child, as her ability to parent is diminished.

    Our Response: In addition to providing Moving Beyond Depression services through a trained therapist, ECS now offers trauma-informed training for home visitors. Remember, however, that the Moving Beyond Depression set of services is only available where there are funds to pay for it. Even though the efficacy of this model has been established through randomized trials and experiential evidence, public funds are often not available to pay for the treatment.

  7. Moms are highly responsive to affirmation of their parenting skills, are grateful for time spent addressing their needs, and desire more help as they return to school or work.

    Our Response: Incorporate a more positive approach to the interaction of the home visitor and the mom, concentrating on a plan and hope for the future. ECS continues to do this, and one visible manifestation was a clever charting system to highlight achievement of goals and mark progress. Brentley did this with vigor during the Avondale mom’s group meetings. Each meeting began with a celebration, which could be for such things as a new tooth or a full-term pregnancy. It was less important what was being celebrated; the point was to highlight the positive experiences of the families and the successes of these incredible moms.

  8. Attrition rates in home visiting remain high and stable over time. Motivational interviewing and qualitative projects have not improved retention: Although moms say that they do not want reduced interaction, only 25% of moms remain engaged until the end of the program when the child is three years old. This finding is endemic to the entire field of home visiting.

    Our Response: Conduct additional focus groups and interviews with individual moms for further learning. We found that moms leaving the program early might actually be a positive response, with moms reporting that they felt strong enough and capable enough to “fly” on their own. In addition, more extensive research was needed to determine “what works best for whom” to link program offerings more effectively with the needs of the individual family. One national program, the Nurse-Family Partnership, has begun offering families an opportunity to work with home visitors to make a plan for frequency and length of visits with that specific family.

  9. Moms who are enrolled in ECS are more likely to access early intervention services for developmental risks and delays than comparable families who did not participate in the program.

    Our Response: Continue to encourage families to work with their home visitors and to act when developmental delays are suspected. The relationship between home visiting and early intervention is a good example of coordination that is effective.

  10. Bringing fathers into the program intentionally led to more effective co-parenting.

    Our Response: We sought funding to continue but did not secure it. This is another instance of having new and important information about a better way to deliver home visiting that we were unable to incorporate into the program because funding was not available.

Actionable Results from ECS Quality Improvement

The following examples show in greater detail how different measurement tools and approaches interacted in the process of improving ECS services. Having a wider array of skills, staff capacity, and available data enhanced our ability to respond to challenges within our service delivery system and yielded findings meaningful for the field at large.

Greater Use of Trauma-Informed Care

We became aware of the prevalence (about 70%) of experienced or perceived interpersonal trauma affecting our families, and we knew that there was a high likelihood that the trauma would subsequently affect the foundational early relationships and social/emotional development of the child (SAMHSA, “Understanding Child Trauma”). Hence, we obtained a grant to train our home visitors so that they could more effectively respond to the trauma they were seeing. We provided a six-month professionally led course in trauma-informed care for all home visitors. Our anticipated benefits for families included more responsive parenting, improved safety of home environments, and enhanced child development. And importantly, our home visitors themselves expressed great personal interest in this course, too, as it responded to similar needs in their own lives.

Our QI tools helped us understand whether the training had the desired impact on home-visitor service delivery. Over time, performance monitoring would tell us if family context and early relationships were improved. Our evaluation instruments helped us document success, while our research offered depth and explanation. And, because multiple facets of measurement were hallmarks of ECS, we had such breadth of approaches ready to respond to this important insight and need for program improvement.

The work of ECS also provided some new insights related to complex outcomes, such as infant-mortality reduction, literacy and language development, and social/emotional development. These important learnings for the field were made public through the ECS annual report; frequent data briefs; local, state, and national presentations; peer-reviewed articles; board, staff, and agency reports; and media notices, as warranted.

Yet challenges remained for the ECS program, and for the field in general, in establishing empirical knowledge about which outcomes were clearly and substantively improved by enrolling in home visiting.

This research agenda has now been defined as: What works best for whom, and how many visits (what dosage) of what content are needed to achieve specific outcomes? Efforts to answer those and similar questions are typically funded through research grants and/or private funders. The answers are important for confirming the role of home visiting in the continuum of services for children.

Expanding Outreach and Enrollment

Outreach and enrollment of moms was always a challenge. There were those families who sought us out and joined because they recognized the benefits. Others who might benefit faced barriers to enrollment. We had to ask ourselves: Were they aware of the program? How could we reach the moms who might be the ones who could benefit yet were not enrolled? Did some women find the program threatening or objectionable? Did they perceive it to be culturally sensitive and responsive? Did we ask them for an unrealistic time commitment? Were they afraid or cautious about having someone come into their homes? Were they skeptical about the strategy itself or perceive it as unlikely to be helpful to them? Were there implicit bias factors that were real but not obvious to ECS?

We did some exploration through qualitative research, convening numerous focus groups, individual interviews, and ethnographic studies to attempt to answer two seemingly simple questions: Why don’t women who could benefit from the program join? And why do families who join typically only stay 18 months? These were basic questions on the surface but far more complicated and worthy of sincere and honest study. Neera Goyal, MD, a pediatrician with a special interest in perinatal outcomes who clearly understood “bench to bedside,” conducted various investigations that led us to greater understanding of the timing of enrollment and duration of participation (Goyal et al. 2018; Goyal et al. 2017; Goyal et al. 2016; Goyal et al. 2014; Goyal et al. 2013).

Unexpectedly, as mentioned previously, we gained one valuable insight into the questions about length of program enrollment: we began to understand that moms who were leaving the program at 18 months were not leaving because we had failed, but rather, because we and they had succeeded. Moms told us that they had developed confidence in themselves and were ready to go forth on their own. They had become better positioned to take action for themselves and their children, they were going back to school and back to work, and they were doing what we encouraged and aimed to support—being independent and using self-agency to achieve their goals.

Work Left Undone

Other topics emerged that also needed strong, actionable measurement and research. These include: the potential for improving the relationship between home visiting and pediatric primary care; the opportunity to support tiered community teams that would include home visitors along with community health workers and other outreach workers; and the focus on creating a two-generational and community-building approach that would lead to better coordinated systems of services for families.

Importantly, what underlies the possibility of success for any of these services and initiatives is both understanding and taking action to reduce the role of racism in families’ lived experiences with home visiting services, health care, trauma, depression, and other factors that affect daily life. We saw the importance of changing structural racism to maximize the opportunity for success among the families and children served. Measurement has been identified as a key part of state and national efforts to reduce racism and bias in child and family services (Wien et al. 2023; Dyer et al. 2022; Hardeman et al. 2022; Condon et al. 2021; Zephryin 2021; Bruner 2017; Ellis and Dietz 2017; Johnson and Theberge 2007).

ECS Measurement Contributions to the Field of Home Visiting

The ECS measurement strategies and research agenda brought many new insights to the field of home visiting, including topics related to impact on birth outcomes, why mothers engage or continue participation in home visiting, the attributes of effective home visitors, and the value of community engagement and enhancements. As previously discussed, one research project, led by Putnam and Ammerman, resulted in the development of a new evidence-based in-home cognitive behavioral therapy program called Moving Beyond Depression, which works in tandem with home visiting to effectively treat maternal depression.

Here, guidance came to us from Kay Johnson, a national expert in maternal and child health, who consulted with ECS for nearly 20 years. Using her decades of national experience, Johnson urged us to ask penetrating and often uncomfortable questions about what we were doing and why. With her nearly encyclopedic knowledge of home visiting policies and procedures, Johnson helped us place our concerns and issues within the context of agendas larger than our own. We wanted to place this line of thinking on the national stage, to offer a format for scientists, program and business leaders, funders, and policy makers working in the home visiting space to consider new ideas and new findings together.

It was Johnson who worked with us and the Pew Charitable Trusts Home Visiting Campaign in 2011 to implement our idea to establish the first National Summit on Quality in Home Visiting. We called it a “marketplace of ideas,” and what we envisioned was a meeting where our colleagues could gather to learn about what we were learning, how legislation might affect our work, and how public and private funders were supporting home visiting as a prevention strategy for children birth-to-three years of age. After the first five years of annual Home Visiting Summits, the distinguished Ounce of Prevention Fund (now renamed Start Early) assumed leadership for the summit, and in 2021 the summit celebrated its 10th year with over 1,200 attendees. What began in 2011 with 400 people grew to become an important national forum, maintaining the original concepts of learning from each other, supporting the crucial role of science and policy in home visiting, and determining how home visiting fit within the larger context of early childhood programming.

We envisioned that a national summit could be held to address the seminal questions that the RAND Corporation first asked in its 1997 report, Investing in Our Children. RAND analysts expressed concern that “it is unclear what will happen to these programs (home visiting) once the media spotlight moves on and budgets tighten.” The report highlighted the difficulty in understanding why successful programs work—what are optimal program designs? How can prevention and early intervention programs best target those who would benefit most? Can scaled programs produce the same results? What is the full range of benefits? How will the programs be affected by the changing social safety net? In short, RAND advocated for increased research to determine why programs work, for building and expanding the evidence base, not just resting upon it.

ECS as an Innovation Lab

The experience of ECS in its efforts to become a laboratory for innovation further illuminates why nonprofits are so often stymied in their quest to improve and grow. When our for-profit colleagues identify a product or a service with potential, they can finance their proof-of-concept activities, sometimes by using internal research and development funds, or support from outside investors or seed money. They typically have research and development components within their business structure, specifically focused on finding new products or upgrading existing products. The lifeblood of the company is maintaining market share, and that is done by continuing to produce something a consumer would want to buy or needs to use. Typically, money within for-profit organizations is allocated for growth and expansion. As examples, the toothpaste that seemed fine this year is improved next year with additional whitening properties or ingredients to maintain gum health. The dog food is upgraded to be more balanced and healthier. These are improvements that not only add value to the consumer but also bring market value to the company.

What happens in the nonprofit world is that funds are rarely available for such improvements or sufficient to resource growth and development. Rather, budgets are built with minimal overhead, and organizations are lauded for spending the lowest amount possible even for basic service, in our case the home visit. What might help an organization like ours to improve family outcomes or to disseminate a new strategy to other home visiting programs is not supported. The concept of entrepreneurial thinking and the excitement of innovation are thus often lost. With them goes the opportunity to improve our intervention, to bring new dollars to the enterprise, and simply to be stronger. The growth equation does require initial money for development and testing, subsequent funding to bring the idea to market, and finally support for sustainability. Grants can work at steps one and two, but the long-term sustainability issue largely falls to public-sector funders, which are usually harder to move.

For example: we used public and private grant money to create the Moving Beyond Depression intervention and minimally create public awareness. What we never had was the public-sector money that would have allowed us to scale Moving Beyond Depression more broadly to states and large home visiting programs and to sustain the effort, even though we had demonstrated that mothers improved dramatically with the intervention. And although there was an initial cost, reduced expenditures were possible as the mother’s depression was treated and she was able to be a better parent, less dependent over the years on the health care system.

The idea for taking ECS from a local/regional program to an innovation lab with national impact came up in 2018 and 2019. To begin to understand where we might fit in the landscape, we did some brand comparisons of other think tank or innovation lab type organizations working on early childhood and/or home visiting issues. The scope and scale of our innovations in program delivery and research were unique. We had lessons to share, tools that could add value for others, and evidence-based practices to disseminate. This direction fit with what we had been doing. Yet how could we make the leap, given limited resources that were primarily dedicated to direct services in our community? How could we be both a highly successful home visiting program and an innovation lab without substantial new resources?

So, here is where we aimed with the ECS innovation lab. Our ECS staff, working with our board and the scientific advisory committee in 2020–2021, recognized the need for improved focus and specificity for our initiatives. Further, we wanted to capitalize on what we saw as our role in the home visiting field. We were a laboratory for innovation in home visiting and used three evidence-based home visiting models, three types of measurement (QI, performance, and evaluation and research), and innovative augmentations while supporting strong parenting and child development annually for nearly 2,000 families.

We knew that over the years, we had become much more sophisticated and that there were an estimated 4,000 to 5,000 local home visiting programs operating in agencies across the country. In essence, we had a self-imposed “identity crisis” as we analyzed who we were now and what ECS could/should become in the next 20 years, building on our strengths and learning from our failures.

ECS was at a crossroads. For more than 20 years, we had a robust service-delivery program, with measurement and research efforts that contributed to improving services for the local program and programs nationally. Yet we did not think that we had achieved our full potential. Moreover, the home visiting field itself had changed considerably, especially since 2014 when the federal MIECHV home visiting program had been fully implemented. Funding options shifted, more questions emerged about how home visiting fit into early childhood systems, a clearer picture emerged of the limits of existing knowledge about home visiting, and new questions and approaches to guide research were being considered. As a result, we saw an opportunity for ECS to reconceptualize its efforts and to assume a position of innovation and leadership by taking the opportunity to answer important questions in the field, leading to substantive program improvements.

But to be seen as a leader in generating ideas for improving home visitation, we required national recognition for our work and the branding to provide awareness. We saw the designation of ECS as an innovation lab as a path to establish our role on the national stage, leveraging our expertise and that of Cincinnati Children’s and building upon our learnings and our accomplishments.

To make that happen, three things were needed: funding to support an innovation lab infrastructure, concurrence for the initiative among our stakeholders, and a commitment to focused work.

We knew that we would initially be unable to compete with the large and well-funded initiatives at the Harvard Center on the Developing Child, the federally funded Home Visiting Applied Research Collaborative (HARC) at Johns Hopkins Bloomberg School of Public Health, the Children’s Hospital of Philadelphia (CHOP), and the University of Chicago’s Chapin Hall. Yet with our own data infrastructure, service capacity, and the expertise at Cincinnati Children’s, we knew the right pieces were in place for what we wanted to do.

The locus of our strength was our broad vision, not just for what might appeal to a funder but also for what we could do to uncover what makes home visiting effective and relevant. We knew that ECS was impactful, but it could be better. We were in a perfect place to begin to answer these questions because we were one of the few places in the country where home visiting was grounded in an outstanding children’s medical center, where multiple models of service were used, and where the focus on innovation for improved outcomes had been a tradition. Moreover, we had used the three types of measurement and had a large database and set of skills from which to build.

We agreed that both the strength and weakness of most evidence-based home visiting work was the breadth of what the program hoped to accomplish with families, but we acknowledged that we could not do it all. We knew that we must accept the role of home visiting as part of a solution, part of a continuum of services for families and young children. We acknowledged that continued success for all of us depended upon collaboration and cooperation with the community—how well we worked together to solve problems and improve our intervention.

Without discarding the work we were doing, we believed that if we could concentrate on one clearly defined, measurable outcome that was important to families in home visiting programs, we could “move the needle” for that outcome. We chose Ready for Pre-K as the focus. The aim was to ensure that all children leaving ECS would be on track on all areas of development, ready to thrive in the preschool setting, and with the start that they needed to succeed in school and in life.

If we were able to make definitive and verifiable statements about how home visiting contributes to achieving that outcome, we could substantially add to the reasons for supporting home visiting at all funding levels.

The innovation lab, we argued, was the right structure to engage in that kind of focused, deliberate work. Yet we knew that the lab could not operate without a budget for infrastructure, including content experts and staff. Becoming an innovation lab would position us as a trusted organization that could be relied upon to advance an innovative, focused mission while providing high-quality services for families.

We wanted the innovation lab to be chaired by a business executive or someone who would have a realistic view of the entire system. We stressed the importance of a clear, concise mission and focused meetings.

As I retired from my role as president of ECS, the innovation lab concept was still being considered by the staff and the board. The idea had not been implemented, even though support for the idea remained strong and the possibilities were recognized. Without funding for the infrastructure, a vibrant innovation lab cannot exist.

The Need for Evidence

There is an important discussion occurring at a national level that has the potential to change the way in which nonprofits define evidence to support the effectiveness of their programming. Think tanks and foundations, including the Brookings Institution, the Pew Charitable Trusts, the American Enterprise Institute, the Harvard Center on the Developing Child, the Urban Institute, and several government entities (e.g., the Government Accountability Office) are addressing the essential question: What really constitutes evidence to guide public investments?

The word evidence has different meanings for different people. Definitions have become facile, and the word itself has been so overused that when we say that a home visiting program, for example, is evidence-based, it is not altogether clear what that means. This is a particularly important question, since federal law requires that MIECHV funds be used primarily to fund evidence-based home visiting.

Katharine B. Stevens, PhD, then a scholar at the American Enterprise Institute, wrote in her unpublished manuscript titled “Why We Need New Evidence Standards for Publicly Funded Social Programs” that “A distinct set of standards, uniquely relevant to research to guide policy decision making, is badly needed. The central question those standards must answer is: What effect size on which outcomes must be demonstrated to provide policymakers with a high degree of confidence that people’s lives will be positively impacted in a substantial and meaningful way?” She calls for increased focus on significant impact, outcomes, study quality, and numbers of studies (Personal communication, unpublished manuscript).

At ECS, the challenge to define those terms has been central to our work because, without knowing what was working—and, importantly, why—the value of our work could not be ascertained. We began with a focused effort to determine how we could obtain accurate and timely data for decision-making. With our strong foundation in measurement, we were able to create the mechanisms to collect, verify, and analyze our data. We hired staff to work on those assignments, and we became well-positioned to operate in a data-informed world. We were fortunate to have the resources for these activities, but many nonprofits do not have the staff or the systems to manage the data needs for an evidence-based program. We have done a thorough job identifying the “what” part of the data equation. Answering the “why” questions has been more elusive, as is often the case.

We have needed data for multiple reasons: Our home visiting program models have data requirements that may or may not align with what we are being asked to provide to the state and federal governments. Some states have a central repository for data while others do not. Some funded programs are required to provide data while others aren’t. And beyond having the data itself, organizations sorely need a way to ensure that their data are accurate. Further, decisions are sometimes made using the data, but sometimes they are made politically rather than on the basis of verified data that support effectiveness. When that happens, accountability is given a voice but not heard, and consequently we don’t build upon what is working.

And there is still so much that we don’t know at ECS and in the field of home visiting. Our studies don’t yet tell us all we want to know about what works for whom. Systematic reviews and national evaluations have not found highly significant impact on all the promised outcomes of home visiting. Do the data give us the evidence to answer such questions as: Are we starting early enough? Do our efforts to encourage families to continue through the first two years actually work? Could we effectively provide a few visits to all new mothers, with more intensive services for those who want and need more support? Are we meaningfully engaging families as partners? Can we achieve the impact we want without strong early childhood systems, including transitions to quality early care and education at age three?

Nonprofits must have the opportunity to do more research and better use the data they collect as they try to solve big, complex problems, not only for themselves but for the communities they serve.

The basic questions are: 1) What is sound and reliable evidence for a finding? 2) Under what conditions will policy and funding decisions be based on documented outcomes and the ability to implement the finding? These are not new concerns. The relevant literature is replete with examples of evidence being minimized and of programs that work failing to grow or to be disseminated to reach their potential. Rather, new programs then often come along with new promises, new leadership, and public enthusiasm—and they do not always work out, either.

We need an investment vision to effectively triage and support programs that have been proven to work and to eliminate or improve those that don’t. We need investments for a system of coordinated community-wide services based upon transparency, accountability, and documented evidence of effectiveness, and the learning agenda to demonstrate clear progress.

Stevens characterizes it by asking what standards should be applied to research to guide public funding of interventions for early childhood home visiting programs that aim to improve the well-being of children and adults. Specifically, what are the appropriate criteria for the design and quality of studies used to inform policy decisions about home visiting, and what constitutes a sufficiently large effect on key outcomes to warrant public spending?

In 2018, the Early Childhood Data Collaborative, convened by the national organization Child Trends, reported that, for investments in early childhood, policymakers and other stakeholders need “access to data about the use and quality of early childhood services. Yet early childhood data are often disconnected and housed within multiple state agencies. As a consequence, decision makers frequently lack the comprehensive information they need” (Jordan et al. 2018). And obtaining that information rests upon finding a commonly accepted definition of evidence and a willingness within policy and philanthropic organizations to really use that information to make public policy and funding decisions.

The consideration of evidence and data responds to an early and continued query from our business colleagues: What will be the most compelling measures that we will have in the next few years that will document that this program is making a major difference? And are those findings actionable? These two questions are, to me, the animating factors for our entire research and evaluation work. We have made forward strides and found many encouraging answers, but more definitive, next-stage evidence is needed. That requires funding to continue building on what we have in place.

Lessons

  1. Align and co-design measurement efforts with input from key stakeholders. Engage the board, staff, and families when designing strategies for measurement, data collection, and defining success. The work must be respectful and not put undue burden on families, communities, or the workforce and should not conflict with organizational values and mission.
  2. Use the three faces of measurement: quality improvement (QI using small tests of change), population-level performance monitoring (impact on population-level outcomes), and research and evaluation (to study what works and innovate). Using all three will help the nonprofit understand and improve program operation and program outcomes. The combination of these three approaches uses your data in different ways and creates a comprehensive approach to measurement.
  3. Clearly define what is meant by the term evidence in the context of your work. Differentiate among the terms evidence, evidence-based, and evidence-informed. Don’t forget that innovation generates new evidence that allows programs to grow and adapt as families, communities, and policies change.
  4. Ensure that your measurement and research is actionable, that you can apply what you learn. Use the data you collect for making decisions, solving problems, and improving outcomes. Strive always to not only report what happened, but also why it happened so that appropriate changes can be made.
  5. Secure dedicated funding for the administration of the measurement and research and development (R&D) infrastructure so that these efforts can be productive and on a firm foundation. Remember that grants and/or contracts rarely cover the total cost of data, QI, and evaluation.

Remember, research and development functions in the nonprofit world are typically funded by grants and/or contracts, leaving little time or support for innovation and iterative learning.

The key questions are: What to measure? How to measure? What constitutes evidence of success? What is actionable?

Bring academic and for-profit community perspectives together.

Mistakes can be gifts, not failures.

Leadership must be willing to be candid so that funders and stakeholders know whether the organization is meeting its commitment.

Research is not done to sit upon a shelf but to be acted upon.

