
My ever expanding theory of change for nonprofit data and evaluation

Every nonprofit needs a theory of change for its technology… and for its evaluation process


I’ve spent a lot of my professional life (thus far) thinking about the missions of nonprofit organizations, and about information/communication technologies for nonprofits.

In the past few years, it’s become fashionable to talk about the importance of a “theory of change” for nonprofits.  This is merely a way of underlining the importance of making an explicit statement about the causal relationship between what a nonprofit organization does and the impact that it has promised to deliver.  I applaud this!  It’s crucial to say, “if we take all of the following resources, and do all of the following actions, then we will get all of the following results.”  An organization that lacks the capacity to marshal those resources and take those actions needs to reconsider, because it is on track to fail. If its capacity is not aligned with its commitment, it should acquire the resources or change its commitment to results.  Of course, in some cases, it will merely need to revise its theory of change.  In any case, it will have to work backward from its mission, and understand how each component contributes to achieving it.
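That “if resources plus actions, then results” statement can be made concrete as a tiny capacity check. This is only an illustrative sketch; the resource, action, and result names below are hypothetical, not drawn from any real organization:

```python
from dataclasses import dataclass, field

@dataclass
class TheoryOfChange:
    """Minimal model of 'if resources + actions, then results.'"""
    resources: set = field(default_factory=set)   # what must be marshaled
    actions: set = field(default_factory=set)     # what must be done
    results: list = field(default_factory=list)   # what is promised

    def capacity_gap(self, available: set) -> set:
        # Resources the theory requires but the organization lacks.
        return self.resources - available

# Hypothetical jobs-training nonprofit
toc = TheoryOfChange(
    resources={"trainers", "curriculum", "classroom space"},
    actions={"recruit clients", "run workshops", "place graduates"},
    results=["clients hired at a living wage"],
)

# A non-empty gap signals that capacity and commitment are misaligned.
print(sorted(toc.capacity_gap({"trainers", "curriculum"})))  # ['classroom space']
```

The point of the sketch is simply that once the theory is stated explicitly, a mismatch between required and available capacity becomes something you can check, not just debate.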

This kind of thinking has led to a lot of conversations (and a lot of anxiety) in the nonprofit sector about performance measurement, outcomes management, evaluation, and impact assessment.

I’d love to have some of this conversation focus on the information/communication technologies that nonprofit organizations are using.  In other words, it’s time to be explicit about a theory of change that explains in detail how every component of the technology an organization uses contributes (directly or indirectly) to its ability to deliver a specific kind of social, cultural, or environmental impact.

Likewise, I’d love to have the conversation address the ways in which the efforts of a nonprofit organization’s performance measurement, outcomes management, evaluation, or impact assessment team contributes (directly or indirectly) to its ability to deliver the kind of impact that it promised its stakeholders.


“We count our successes in lives”

Brent James

Brent James is one of my new heroes.  He’s a physician, a researcher, and the chief quality officer of Intermountain Healthcare’s Institute for Health Care Delivery Research.

We had a very inspiring telephone conversation this afternoon about whether the lessons learned from evidence-based medicine could be applied to nonprofits that are seeking to manage their outcomes.  We also swapped some stories and jokes about the ongoing struggle to document a causal relationship between what a health care organization (or a social service agency, an arts group, or an environmental coalition, for that matter) does and the outcomes its stated aims promise.  In fact, documenting that an organization is doing more good than harm, and less harm than doing nothing at all, continues to be a perplexing problem.  The truth may be less than obvious; in fact, it may be completely counter-intuitive.

In this phone conversation, we also waded into deep epistemological waters, reflecting on how we know we have succeeded, and on the disturbing gap between efficacy (whether an intervention works under controlled conditions) and effectiveness (whether it works in everyday practice).

It’s not merely a philosophical challenge, but a political one, to understand where the power lies to define success and to set the standards of proof.

I doubt that this is what William James (no relation to Brent, as far as I know) had in mind when he referred to success as “the bitch-goddess,” but there’s no doubt that defining, measuring, and reporting on one’s programmatic success is a bitch for any nonprofit professional with intellectual and professional integrity.  It’s both difficult and urgent.

What particularly struck me during my conversation with Brent was his remark about Intermountain Healthcare:

“We count our successes in lives.”

On the surface, that approach to counting successes seems simple and dramatic.  The lives of patients are on the line.  They either live or die, with the help of Intermountain Healthcare.  But it’s really a very intricate question, once we start asking whether Intermountain’s contribution is a positive one, enabling the patients to live the lives and die the deaths that are congruent with their wishes and values.

These questions are very poignant for me, and not just because I’m a cancer patient myself, and not just because yesterday I attended the funeral of a revered colleague and friend who died very unexpectedly.  These questions hit me where I live professionally as well, because earlier this week, I met with the staff of a fantastic nonprofit that is striving to do programmatic outcomes measurement, and is faced with questions about how to define success in a way that can be empirically confirmed or disconfirmed.  Their mission states that they will help their clients excel in a specific industry and in their personal lives.  They have a coherent theory of change, and virtually all of their criteria of professional and personal success are quantifiable.  Their goals are bold but not vague. (This is a dream organization for anyone interested in outcomes management, not to mention that the staff members are smart and charming.)  However, it’s not entirely clear yet whether the goals that add up to success for each client are determined solely by the staff, solely by the client, or by some combination thereof.  I see it as a huge issue, not just on an operational level, but on a philosophical one; it’s the difference between self-determination and paternalism.  I applaud this organization’s staff for their willingness to explore the question.

When Brent talked about counting successes in terms of lives, I thought about this nonprofit organization, which defines its mission in terms of professional and personal success for its clients.  The staff members of that organization, like so many nonprofit professionals, are ultimately counting their successes in lives, though perhaps not as obviously as health care providers do.  Surgeons receive high pay and prestige for keeping cancer patients alive and well – for the most part, they fully deserve it.  But let’s also count the successes of the organization that helps a substantial number of people win jobs that offer a living wage and health insurance, along with other benefits such as G.E.D.s, citizenship, proficiency in English, home ownership, paid vacations, and college educations for the workers’ children. Nonprofit professionals who can deliver that are also my heroes, right up there with Brent James.  While we’re holding them to high standards of proof of success, I hope that we can find a way to offer them the high pay and prestige that we already grant to the medical profession.

Outcomes measurement for nonprofits: Who does the analysis?

I invite you to participate in this survey, bearing in mind that it is for recreational purposes and has no scientific value.

There are many reasons that this survey is of dubious value, for example:

  • No pilot testing has been done to ensure that the choices offered are both exhaustive and mutually exclusive.

The list could go on, but I’ll leave it at that.  Although most of my training is in qualitative social research, I have taken undergraduate and graduate level courses on quantitative research, and the points I made about what’s wrong with my survey are what I could pull out of memory without consulting a standard text on statistics.

In other words, when it comes to quantitative analysis, I know just enough to be dangerous.

Meanwhile, I worry about nonprofit organizations that are under pressure to collect, analyze, and report data on the outcomes of their programs.  There are a lot of fantastic executive directors, program managers, and database administrators out there – but it’s very rare for a nonprofit professional who falls into any of those three categories to also have solid skills in quantitative analysis and social research methods.  Nevertheless, I know of plenty of nonprofit organizations where programmatic outcomes measurement is done by an executive director, program manager, or database administrator whose skill set is very different from what the task demands.  In many cases, even if they come up with a report, the nonprofit staff members may not be aware that what they have done is present a lot of data without actually showing that there is any causal relationship between the organization’s activities and the social good that they are in business to deliver.
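The gap between presenting data and demonstrating impact can be illustrated with a toy calculation. All numbers here are invented for the example, and even the comparison shown is not causal proof (selection bias alone could explain the difference); it merely frames the right question:

```python
# A raw outcome tally describes activity, not impact.
served = {"got_job": 80, "total": 100}        # hypothetical program participants
comparison = {"got_job": 72, "total": 100}    # hypothetical non-participants

served_rate = served["got_job"] / served["total"]
comparison_rate = comparison["got_job"] / comparison["total"]

print(f"outcome rate: {served_rate:.0%}")     # impressive on its own, but says
                                              # nothing about attribution
print(f"naive difference: {served_rate - comparison_rate:+.0%}")  # +8%
# Even this difference is only suggestive: without random assignment or
# careful adjustment, it cannot establish a causal relationship.
```

A report that stops at the first number is exactly the trap described above: a lot of data, no causal claim tested.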

Let’s not be too hasty in deprecating the efforts of these nonprofit professionals.  They are under a lot of pressure, especially from grantmaking foundations, to report on programmatic outcomes.  In many cases, they do the best they can to respond, even if they have neither the internal capacity to meet the task nor the money to hire a professional evaluator.

By the way, I was delighted to attend a gathering this fall, in which I heard a highly regarded philanthropic professional ask a room full of foundation officers, “Are you requiring $50,000 worth of outcomes measurement for a $10,000 grant?” It’s not the only question we need to ask, but it’s an extremely cogent one!

I’d love to see nonprofit professionals, philanthropists, and experts in quantitative analysis work together to address this challenge.

We should also be learning lessons from the online tools that have already been developed to match skilled individuals with nonprofit professionals who need help and advice from experts.  Examples of such tools include the “Research Matchmaker” and NPO Connect.

We can do better.  It’s going to take time, effort, money, creativity, and collaboration – but we can do better.

The state of nonprofit data: Uh-oh!

The Nonprofit Technology Network (NTEN) has released a report prepared by Idealware on the current state of nonprofit data.  Highly recommended!

Some of the news it contains is scary.  In our sector, we currently aren’t very successful at collecting and analyzing the most crucial data.  For example, only 50% of the respondents reported that their nonprofit organizations are tracking data about the outcomes of clients/constituents.

According to the survey respondents, there are daunting barriers to tracking and using data:

  • issues related to collecting and working with data (27 percent of responses)
  • lack of expertise (24 percent of responses)
  • issues of time and prioritization (22 percent of responses)
  • challenges with technology (23 percent of responses)

Page 13 of the report features a chart that I find especially worrisome.  It displays the types of data that nonprofit organizations should or could be using, with large chunks falling into three chilling categories:

  • we don’t know how to track this
  • we don’t have the technology to effectively track this
  • we don’t have the time/money to effectively track this

In the case of data about outcomes, 17% lack the knowledge, 20% lack the technology, and 22% lack the time or money (or both) to track it.
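Those three figures come straight from the report, and a quick bit of arithmetic shows why they are so alarming. If we assume the categories are mutually exclusive (the report’s chart presents them as distinct responses, though that assumption is mine), their sum is the share of organizations that cannot currently track outcomes data:

```python
# Shares of respondents reporting each barrier to tracking outcomes data
# (figures from the NTEN/Idealware report; exclusivity is assumed here).
barriers = {"knowledge": 0.17, "technology": 0.20, "time_or_money": 0.22}

total = sum(barriers.values())
print(f"{total:.0%}")  # prints "59%"
```

Fifty-nine percent, under that assumption: more than half of the organizations surveyed lack the knowledge, the technology, or the resources to track the outcomes they exist to produce.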

Are you scared yet?  I confess that I am.  Perhaps half of all nonprofits surveyed don’t know – and don’t have the resources to find out – whether there is any causal relationship between their activities and the social good that they are in business to achieve.

And that’s just programmatic outcomes.  The news is also not very encouraging when it comes to capturing data about organizational budgets, constituent participation in programs, and external trends in the issue areas being addressed by nonprofit organizations.

So much for the bad news.  The good news is that now we know.

It takes some courage to acknowledge that the baseline is so low.  I applaud Idealware and NTEN for creating and publishing this report.  Now that we know, we can address the problem and take effective action.