
Every nonprofit needs a theory of change for its technology... and for its evaluation process


I’ve spent a lot of my professional life (thus far) thinking about the missions of nonprofit organizations, and about information/communication technologies for nonprofits.

In the past few years, it’s become fashionable to talk about the importance of a “theory of change” for nonprofits.  This is merely a way of underlining the importance of making an explicit statement about the causal relationship between what a nonprofit organization does and the impact that it has promised to deliver.  I applaud this!  It’s crucial to say, “if we take all of the following resources, and do all of the following actions, then we will get all of the following results.”  An organization that lacks the capacity to marshal those resources and take those actions needs to reconsider, because it is on track to fail. If its capacity is not aligned with its commitment, it should acquire the resources or change its commitment to results.  Of course, in some cases, it will merely need to revise its theory of change.  In any case, it will have to work backward from its mission, and understand how each component contributes to achieving it.

This kind of thinking has led to a lot of conversations (and a lot of anxiety) in the nonprofit sector about performance measurement, outcomes management, evaluation, and impact assessment.

I’d love to have some of this conversation focus on the information/communication technologies that nonprofit organizations are using.  In other words, it’s time to be explicit about a theory of change that explains in detail how every component of the technology an organization uses contributes (directly or indirectly) to its ability to deliver a specific kind of social, cultural, or environmental impact.

Likewise, I’d love to have the conversation address the ways in which the efforts of a nonprofit organization’s performance measurement, outcomes management, evaluation, or impact assessment team contribute (directly or indirectly) to the organization’s ability to deliver the kind of impact it has promised its stakeholders.

 

 

The Massachusetts Institute of Nonprofit Technology: Let’s Do This!


 

We need a Massachusetts Institute of Nonprofit Technology, and I can tell you what degree program we need to establish first:  Bachelor of Nonprofit Data.

The inspiration for this comes from many conversations with many people, but I’d especially like to credit Susan Labandibar, Julia Gittleman, and Laura Beals for pointing out, in their different ways, that one of the most pressing real-life challenges in nonprofit technology today is finding people who can bridge between the outcomes / impact assessment / evaluation / research team (on one hand) and the information systems team (on the other hand) at a nonprofit organization.

Not that I’m a professional full-time data analyst myself, but if I were, I’d find the numbers, and start doing the math:

  • How many brilliant computer scientists are graduating right here in Massachusetts every year from our best high schools, colleges, and universities?
  • Of those graduates, what percentage have strong skills in database design, database development, database management, or data analysis?
  • Of those who have strong data skills, what percentage would be eager to use their geek skills for good, if they were offered an attractive career ladder?

That’s our applicant pool for the Massachusetts Institute of Nonprofit Technology.  (Or MINT, if you prefer.)
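
For what it’s worth, here is a back-of-the-envelope sketch of that funnel arithmetic in Python.  Every number in it is a placeholder I invented for illustration; the real figures would have to come from actual graduation and survey data.

    # Back-of-the-envelope applicant-pool funnel for MINT.
    # All numbers below are invented placeholders, not real statistics.
    cs_grads_per_year = 5000          # hypothetical: CS graduates in Massachusetts each year
    share_strong_data_skills = 0.30   # hypothetical: fraction with strong database / data-analysis skills
    share_geeks_for_good = 0.10       # hypothetical: fraction who would take a nonprofit career ladder

    applicant_pool = cs_grads_per_year * share_strong_data_skills * share_geeks_for_good
    print(f"Estimated annual applicant pool for MINT: {applicant_pool:.0f}")

Plug in better estimates and the same three-step multiplication tells you whether the pool is big enough to justify the program.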

Now, let’s figure out the absolute minimum of additional knowledge that these computer science graduates would need in order to be the kind of data analysts who could bridge between the outcomes / impact assessment / evaluation / research team and the information systems team  at a nonprofit:

  • Outcomes measurement
  • Outcomes management
  • Impact assessment
  • Evaluation
  • Social research methods
  • Knowledge management
  • Organizational cultures of nonprofits
  • Nonprofit operations
  • Organizational cultures of philanthropic foundations

That’s our basic curriculum.

If we want to expand the curriculum beyond the basics, we can add these elective subjects:

  • Nonprofit budgeting
  • Group dynamics
  • Ethics
  • Etiquette
  • Negotiation
  • Project management
  • Appreciative inquiry
  • Meeting facilitation

All of these electives would pave the way for other degree programs, in which they would also be extremely useful:

  • Bachelor of Nonprofit Systems Engineering
  • Bachelor of Nonprofit Web Development
  • Bachelor of Nonprofit Help Desk Support
  • Bachelor of Nonprofit Hands On Tech Support
  • Bachelor of Nonprofit Social Media

I already have my eye on some great local colleagues who could be the faculty for the Bachelor of Nonprofit Data program.  In addition to Susan, Julia, and Laura, I’d want to recruit these folks:

Please note that three members of the TNB team top the list of potential faculty members.  Why?  Because I work there, and because TNB has set a Big Hairy Audacious Goal of developing the careers of 1,000 technology professionals. This undertaking would be very congruent with its vision!

However, setting up the Massachusetts Institute of Nonprofit Technology must be a collaborative effort.  It will take a strong network of colleagues and friends to make this happen.

Do you think that this is needed?  Do you think my plan needs a lot of work?  Do you have any ideas or resources that you’d like to suggest?  Please feel free to use the comments section here to share your thoughts.

How grantmakers and how nonprofits use information about outcomes

State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector, a report by the Innovation Network

I’m sitting here, reflecting on the Innovation Network’s “State of Evaluation 2012” report.

I encourage you to download it and read it for yourself; start with pages 14 and 15. These two pages display infographics that summarize what funders (also known as “grantors,” or if you’re Bob Penna, as “investors”) and nonprofits (also known as “grantees”) are reporting about why they do evaluation and what they are evaluating.

Regardless of whether you call it evaluation, impact assessment, outcomes management, performance measurement, or research – it’s really, really difficult to ascertain whether a mission-based organization is delivering the specific, positive, and sustainable change that it promises to its stakeholders. Many organizations do an excellent job at tracking outputs, but falter when it comes to managing outcomes. That’s in part because proving a causal relationship between what the nonprofit does and the specific goals that it promises to achieve is very costly in time, effort, expertise, and money.

But assuming that a mission-based organization is doing a rigorous evaluation, we still need to ask:  what is done with the findings, once the analysis is complete?

What the aforementioned infographics from the “State of Evaluation 2012” tell me is that both grantors and grantees typically say that the most important thing they can do with their outcome findings is to report them to their respective boards of directors.  Considering the depth of the moral and legal responsibility that is vested in board members, this is a pretty decent priority.  But it’s unclear to me what those boards actually do with the information.  Do they use it to guide the policies and operations of their respective organizations?  If so, does anything change for the better?

If you have an answer, based on firsthand experience, to the question of how boards use this information, then please feel free to post a comment here.

What I learned about outcomes management from Robert Penna


Yesterday, along with a number of colleagues and friends from Community TechKnowledge, I had the privilege of attending a training by Robert Penna, the author of The Nonprofit Outcomes Toolbox.

As you probably  know, I’ve been on a tear about outcomes measurement for a few months now; the current level of obsession began when I attended NTEN’s Nonprofit Data Summit in Boston in September.  I thought that the presenters at the NTEN summit did a great job addressing some difficult issues – such as how to overcome internal resistance to collecting organizational data, and how to reframe Excel spreadsheets moldering away in file servers as archival data.  However, I worked myself into a tizzy, worrying about the lack, in that day’s presentations, of any reference to the history and literature of quantitative analysis and social research.  I could not see how nonprofit professionals would be able to find the time and resources to get up to speed on those topics.

Thanks to Bob Penna, I feel a lot better now.  In yesterday’s training, he showed me and the CTK team just how far you can go by stripping away what is superfluous and focusing on what it really takes to use the best outcomes tools for the job.  Never mind about graduate-level statistics!  Managing outcomes may be very, very difficult because it requires major changes in organizational culture – let’s not kid ourselves about that.  However, it’s not going to take years out of each nonprofit professional’s life to develop the skill set.

Here are some other insights and highlights of the day:

  • Mia Erichson, CTK’s brilliant new marketing manager, pointed out that at least one of the outcomes tools that Bob showed us could easily be mapped to a “marketing funnel” model.  This opens possibilities for aligning a nonprofit’s programmatic strategy with its marcomm strategy.
  • The way to go is prospective outcomes tracking, with real-time updates allowing for course correction (see the sketch after this list).  Purely retrospective outcomes assessment is not going to cut it.
  • There are several very strong outcomes tools, but they should be treated the way we would treat a software suite that comprises some applications that are gems and some that are junk.  We need to use the best of breed to meet each need.
  • If we want to live in Bob Penna’s universe, we’re going to have to change our vocabulary.  It’s not “outcomes measurement” – it’s “outcomes management.”  The terms “funder” and “grantmaker” are out – “investor” is in.
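
To make the prospective-tracking point concrete, here is a minimal sketch of what it might look like in practice.  This is my own illustration with invented milestone numbers, not one of the tools Bob presented.

    # Minimal sketch of prospective outcome tracking: compare actual progress
    # against milestone targets as the program runs, and flag any shortfall
    # early enough to allow a course correction. All data below is hypothetical.
    milestones = {"Q1": 25, "Q2": 50, "Q3": 75, "Q4": 100}  # target: cumulative clients reaching the outcome
    actuals = {"Q1": 24, "Q2": 38}                           # results reported so far

    for quarter, target in milestones.items():
        if quarter not in actuals:
            break  # future quarters: nothing to compare yet
        actual = actuals[quarter]
        status = "on track" if actual >= 0.9 * target else "course correction needed"
        print(f"{quarter}: target {target}, actual {actual} -> {status}")

The point is simply that the comparison happens while the program is still running, when a shortfall can still be acted on, rather than in a retrospective report.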

Even with these lessons learned, it’s not a Utopia out there waiting for nonprofits that become adept at outcomes management.  Not only is it difficult to shift to an organizational culture that fosters it, but we have to face continuing questions about how exactly the funders (oops! I should have said “investors”) use the data that they demand from nonprofit organizations.  (“Data” is of course a broad term, with connotations well beyond outcomes management.  But it’s somewhat fashionable these days for them to take an interest in data about programmatic outcomes.)

We should be asking ourselves, first of all, whether the sole or primary motivation for outcomes management in nonprofits should be the demands of investors.  Secondly, we should be revisiting the Gilbert Center’s report, Does Evidence Matter to Grantmakers? Data, Logic, and the Lack thereof in the Largest U.S. Foundations.  We need to know this.  Thirdly, we should be going in search of other motivations for introducing outcomes management.  I realize that most nonprofits go forward with it when they reach a point of pain (translation: they won’t get money if they don’t report outcomes).

During a break in Bob’s training, some of my CTK colleagues were discussing the likelihood that many nonprofit executives simply hate the concept of outcomes management.  Who wants to spend resources on it, if it subtracts from resources available for programmatic activities?  Who wants to risk finding out (or to risk having external stakeholders find out) that an organization’s programs are approximately as effective as doing nothing at all?  Very few – thus the need to find new motivations, such as the power to review progress and make corrections as we go.  I jokingly told my CTK colleagues, “the truth will make you free, but first it will make you miserable.”  Perhaps that’s more than a joke.

What if we had a pro bono training on outcomes measurement for nonprofit professionals in Massachusetts?


As you can probably guess, I spend a lot of time these days worrying about outcomes measurement for nonprofits; I also devote time to discussing this topic with experts and with nonprofit professionals.

As I talk to some of the most impressive mavens in this field, I sometimes ask, “would you travel to Massachusetts at your own expense, to give a free day-long training on outcomes measurement to nonprofit professionals here?”

Nothing ventured, nothing gained – am I right?  (Or as my dear sister once put it, I am The Mouth That Knows No Fear.)

A few really stellar experts actually agreed to do it, if a training event could be arranged to suit their schedules and other reasonable needs.  Of course, I am stunned, overwhelmed with gratitude.  Never underestimate the kindness of mavens!

So now I turn to my nonprofit colleagues in Massachusetts, with another unscientific survey.  I want to get a sense of who would be interested in a free day-long training.  This survey is for them.  If you’re not a nonprofit professional based in Massachusetts, please do the honorable thing and refrain from participating in this survey.

Outcomes measurement for nonprofits: Who does the analysis?

I invite you to participate in this survey, bearing in mind that it is for recreational purposes, and has no scientific value:

There are many reasons that this survey is of dubious value, for example:

  • No pilot testing has been done to ensure that the choices offered are both exhaustive and mutually exclusive.

The list could go on, but I’ll leave it at that.  Although most of my training is in qualitative social research, I have taken undergraduate and graduate level courses on quantitative research, and the points I made about what’s wrong with my survey are what I could pull out of memory without consulting a standard text on statistics.

In other words, when it comes to quantitative analysis, I know just enough to be dangerous.

Meanwhile, I worry about nonprofit organizations that are under pressure to collect, analyze, and report data on the outcomes of their programs.  There are a lot of fantastic executive directors, program managers, and database administrators out there – but it’s very rare for a nonprofit professional who falls into any of those three categories to also have solid skills in quantitative analysis and social research methods.  Nevertheless, I know of plenty of nonprofit organizations where programmatic outcomes measurement is done by an executive director, program manager, or database administrator whose skill set is very different from what the task demands.  In many cases, even if they come up with a report, the nonprofit staff members may not even be aware that what they have done is present a lot of data without actually showing any causal relationship between the organization’s activities and the social good it is in business to deliver.
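
To illustrate the gap, here is a toy example with invented numbers: reporting participants’ before-and-after scores alone says nothing about causation, while comparing that change against a comparison group (a crude difference-in-differences) is at least a first step toward it.

    # Toy illustration (invented numbers): why before/after data alone
    # does not demonstrate a causal relationship.
    participants_before, participants_after = 50, 62   # hypothetical average scores for program participants
    comparison_before, comparison_after = 51, 58       # hypothetical scores for similar people not served

    raw_change = participants_after - participants_before                 # what often gets reported
    net_change = raw_change - (comparison_after - comparison_before)      # change net of the comparison group

    print(f"Reported change for participants: {raw_change}")
    print(f"Change net of the comparison group: {net_change}")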

Let’s not be too hasty in deprecating the efforts of these nonprofit professionals.  They are under a lot of pressure, especially from grantmaking foundations, to report on programmatic outcomes.  In many cases, they do the best they can to respond, even if they have neither the internal capacity to meet the task nor the money to hire a professional evaluator.

By the way, I was delighted to attend a gathering this fall at which I heard a highly regarded philanthropic professional ask a room full of foundation officers, “are you requiring $50,000 worth of outcomes measurement for a $10,000 grant?”  It’s not the only question we need to ask, but it’s an extremely cogent one!

I’d love to see nonprofit professionals, philanthropists, and experts in quantitative analysis work together to address this challenge.

We should also be learning lessons from the online tools that have already been developed to match skilled individuals with nonprofit professionals who need help and advice from experts.  Examples of such tools include the “Research Matchmaker” and NPO Connect.

We can do better.  It’s going to take time, effort, money, creativity, and collaboration – but we can do better.