Measuring What Matters

When I became the Executive Director of a small agency in 2015, I understood for the first time what it meant to be responsible for the livelihoods of others, specifically my staff. I never wanted to lose a single funded contract, because that could mean having to let people go, which would mean I had failed them and their families. I had a “crisis of conscience,” though, when I realized that I wasn’t really sure those contracts were worth keeping. Other agencies had similar contracts and seemed to be performing better. Having too many work streams made it tough to communicate our mission. And most importantly, I couldn’t even tell if the programs we were managing made a difference to our participants.

Don’t we all want to know whether our programs are making a difference? Shouldn’t we? I would hope our funders want to know, as do my colleagues in the field. I’m positive that most front-line workers I’ve met WANT to know that their work is making a difference, even though they hardly ever receive real feedback that would let them verify it. Unfortunately, whether it’s a small grant or a portion of a multi-billion-dollar program, the nonprofit sector lacks the enabling conditions to know what’s working, or even to agree on what “working” looks like.

Here’s an example of what I mean:

I moved to Toronto in 2019 and took a job at a multi-service agency, where one of my responsibilities was managing a workforce development program that is part of a multi-billion-dollar provincial system called Employment Ontario (EO). Like most North American workforce systems, EO funds training programs, case workers, job coaches, and supportive services. In theory, programs like this are the government’s solution for moving people off public benefits. It makes sense on paper: if someone can get a job, they won’t need public benefits anymore. In general, work-able people on social assistance tend to find jobs and move off of social assistance on their own; a 2002 study by the Brookings Institution found that 65-75% of welfare recipients find work and leave social assistance without such help. So do employment programs have an impact? In Ontario, Employment Assistance programs helped 10-13% of participants leave social assistance (Ontario Works) between 2013 and 2018. Is that a number that indicates success or program effectiveness?

How do we even measure “success” in a situation like this? It’s hard to know whether someone would have found work without an employment program (unless the training feeds directly into an employer). Welfare systems in Canada use metrics similar to those in the US to measure program success. Basically, if a participant is employed 90 days after training or coaching, either full- or part-time, at any wage, or has enrolled in an educational program, they count as a success; and if your program achieves this at a rate above, say, 75%, then it is successful. “Success” means you get to keep your funding.
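To make that rule concrete, here’s a rough sketch in Python. The field names and the 75% threshold are my own illustrative assumptions, not an actual Employment Ontario specification:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    employed_at_day_90: bool      # any wage, full- or part-time
    enrolled_in_education: bool   # alternative "success" outcome

def is_success(p: Participant) -> bool:
    # A participant counts as a success if they hold any job at
    # day 90, or have enrolled in an educational program.
    return p.employed_at_day_90 or p.enrolled_in_education

def program_is_successful(participants: list[Participant],
                          threshold: float = 0.75) -> bool:
    # The program keeps its funding if the success rate clears the threshold.
    rate = sum(is_success(p) for p in participants) / len(participants)
    return rate >= threshold
```

Notice what never appears in that calculation: wages, hours, job quality, or anything about day 91.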

But what about the success of the participant? What does that look like? Is that minimum-wage job family-sustaining? Do they still have it at day 91? Is the job just a job, or is it on a career pathway? Do they have to work other jobs to supplement their income? Were they displaced to the next town over because their wage can’t pay rent in their “first-place” community? Are they spending more time commuting and less time on healthy habits, or spending less time with loved ones and friends? Did this move require a vehicle purchase at a predatory rate of 29.99%? If they get a flat tire, can they afford to repair it and get to work, or will they lose their job and have their car repossessed? Do they even like the job?!

What we currently measure is what we call an “output.” How many people did you serve? Did they do something, or didn’t they? Quantify it by time, total counts, or dollar amounts, or answer “yes/no.” This is all that most systems have the capacity to measure. Funders provide software to measure these outputs and nothing more. Agencies are funded just enough to hire front-line workers who can serve just enough people to reach the targets set forth by funders, provided they spend just enough time with participants that those participants get a job for at least 90 days at minimum wage. If 75% are “successful,” then front-line workers keep their jobs, agencies keep their funding, system intermediaries keep their funding, and government agencies can say “this is working” and receive renewed allocations from federal/state/provincial annual budgets, more or less.

To measure better, systems need some very important enablers: inexpensive and customizable data platforms, interoperability between service providers, training and technical assistance for front-line workers, measures that most service providers can agree on (or common data standards that can be aligned), and time. Time is essential for longitudinal measurement. A program cannot be judged successful if it loses contact with participants after 90 days. Programs must provide relationship-based wraparound supports to stay connected with participants so that they can collect information at intervals for at least 18 months. Similarly, front-line workers need smaller caseload expectations so they can spend more time with participants, and participants need financial supports like childcare or rental supplements so that they can spend more time in training programs aligned with their career ambitions and pathways to higher-wage jobs.
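As a sketch of what “intervals for at least 18 months” could look like in practice, here’s a hypothetical check-in schedule in Python. The cadence (baseline, then quarterly, ending at month 18) is my assumption, not a prescribed standard:

```python
from datetime import date, timedelta

# Hypothetical follow-up cadence: baseline (month 0), quarterly
# check-ins, and a final contact at month 18.
FOLLOW_UP_MONTHS = [0, 3, 6, 9, 12, 15, 18]

def follow_up_dates(enrolment: date) -> list[date]:
    # Approximate a month as 30 days to keep the sketch simple.
    return [enrolment + timedelta(days=30 * m) for m in FOLLOW_UP_MONTHS]

for check_in in follow_up_dates(date(2024, 1, 15)):
    print(check_in.isoformat())
```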

Most of all, we need to understand what’s important to communities and individuals receiving supports. What does success look like to participants? Can systems measure whether we achieve the end user’s goals, not just the goals of the system?

This approach is more expensive. More time with each participant means fewer participants served per FTE, and this more in-depth program design is only a worthwhile investment if you can show that it makes a difference. The upside is that if you see evidence your programs are effective, you can demonstrate a stronger return on investment and even show how your participants move from stabilization to self-sufficiency.

In Ontario, a small group of us are pioneering some of these solutions. We’re starting by integrating long-term, one-on-one financial counselling into workforce programs to look at the relationship between reducing financial volatility (reducing debt, increasing savings, growing net worth, increasing credit scores, achieving financial goals) and workforce outcomes like job retention, wage progression, and career journeys. Because this work is relational, we see an opportunity to measure the relationship between income/wealth and quality of life, so we are also introducing a wellbeing survey. All data are collected at baseline and retested at intervals to track progress and to look for associations between interventions and outcomes (not outputs). With funding from JPMorgan Chase, we are developing a low-cost, low-code data platform that financial counsellors can use for case management, survey delivery, and dashboards of key indicators.
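Here’s a toy example, with made-up numbers and field names, of the kind of baseline-and-retest comparison we’re after: did a financial indicator improve between baseline and a later interval, and does improvement tend to co-occur with a workforce outcome like job retention? This is a crude association check, not our platform’s actual logic:

```python
# Each record: (credit score at baseline, credit score at month 12,
# whether the participant retained employment). All values made up.
records = [
    (580, 640, True),
    (610, 605, False),
    (550, 620, True),
    (600, 630, True),
]

improved = [after > before for before, after, _ in records]
retained = [kept for _, _, kept in records]

# Share of participants who both improved their score and kept their
# job: an association to explore, not a causal claim.
both = sum(i and r for i, r in zip(improved, retained)) / len(records)
print(f"Improved and retained: {both:.0%}")
```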

This integrated approach has proven effective in the US through LISC’s Center for Strong Families model. Canadian partners are now working to establish the case for integrating financial counselling into other sectors and programs beyond workforce programs. The software solution is open source, and with the backing of the Ontario Trillium Foundation, CFH Consulting is working with West Neighbourhood House, Prosper Canada, EBO, Building Up, and others to explore what partnerships grow from this effort and what learnings emerge over the next two years. Will these models and tools be adopted? Can we contribute to interoperability discussions? Are there applications for social prescribing? Can we create test cases for nonprofit data portability? Can we blaze a path for data sovereignty and participants controlling their own data? Can part of what we’re building solve the CRM needs of tens of thousands of under-resourced small and medium-sized agencies?

Thoughts? Comments? Questions?

If you are interested in connecting with or supporting West Neighbourhood House, Prosper Canada, EBO, LogicalOutcomes, or other agencies involved in this work, please email me at jeff@cfhconsulting.net.
