Tag Archives: skill set

Deborah’s Bargain Basement of Discount Consulting

I’ve been looking for an impetus to do this, and now I have one!

A couple of projects for clients have been postponed, and lest there be time on my hands in March, I’m offering a limited number of consultations at $75/hour to nonprofits in the Boston area.  My standard hourly rate is $200.00, so this represents a substantial discount.

Your organization might qualify for this bargain, if:

  • It is recognized as tax exempt by the U.S. Internal Revenue Service.
  • It is accessible via public transportation in the Boston area.
  • It delivers an important social, artistic, or environmental benefit to the common good.
  • Its strategic challenges are a good match for my skills and interests.
  • It has not previously been a paying client of mine.

This offer is good for consulting services that are delivered before March 31, 2013.
If your organization is interested in this special consulting deal, please fill out the form shown below.
What I learned about outcomes management from Robert Penna

Yesterday, along with a number of colleagues and friends from Community TechKnowledge, I had the privilege of attending a training by Robert Penna, the author of The Nonprofit Outcomes Toolbox.

As you probably know, I’ve been on a tear about outcomes measurement for a few months now; the current level of obsession began when I attended NTEN’s Nonprofit Data Summit in Boston in September.  I thought that the presenters at the NTEN summit did a great job addressing some difficult issues – such as how to overcome internal resistance to collecting organizational data, and how to reframe Excel spreadsheets moldering away in file servers as archival data.  However, I worked myself into a tizzy worrying that the day’s presentations made no reference to the history and literature of quantitative analysis and social research.  I could not see how nonprofit professionals would find the time and resources to get up to speed on those topics.

Thanks to Bob Penna, I feel a lot better now.  In yesterday’s training, he showed me and the CTK team just how far you can go by stripping away what is superfluous and focusing on what it really takes to use the best outcomes tools for the job.  Never mind about graduate-level statistics!  Managing outcomes may be very, very difficult because it requires major changes in organizational culture – let’s not kid ourselves about that.  However, it’s not going to take years out of each nonprofit professional’s life to develop the skill set.

Here are some other insights and highlights of the day:

  • Mia Erichson, CTK’s brilliant new marketing manager, pointed out that at least one of the outcomes tools that Bob showed us could be easily mapped to a “marketing funnel” model.  This opens possibilities for aligning a nonprofit’s programmatic strategy with its marcomm strategy.
  • The way to go is prospective outcomes tracking, with real time updates allowing for course correction.  Purely retrospective outcomes assessment is not going to cut it.
  • There are several very strong outcomes tools, but they should be treated as we would treat a software suite comprising some applications that are gems and others that are junk.  We need to use the best of breed to meet each need.
  • If we want to live in Bob Penna’s universe, we’re going to have to change our vocabulary.  It’s not “outcomes measurement” – it’s “outcomes management.”  The terms “funder” and “grantmaker” are out – “investor” is in.

Even with these lessons learned, it’s not a Utopia out there waiting for nonprofits that become adept at outcomes management.  Not only is it difficult to shift to an organizational culture that fosters it, but we have to face continuing questions about how exactly the funders (oops! I should have said “investors”) use the data that they demand from nonprofit organizations.  (“Data” is of course a broad term, with connotations well beyond outcomes management.  But it’s somewhat fashionable these days for them to take an interest in data about programmatic outcomes.)

We should be asking ourselves, first of all, whether the sole or primary motivation for outcomes management in nonprofits should be the demands of investors.  Secondly, we should be revisiting the Gilbert Center’s report, Does Evidence Matter to Grantmakers? Data, Logic, and the Lack Thereof in the Largest U.S. Foundations.  We need to know the answer to that question.  Thirdly, we should be going in search of other motivations for introducing outcomes management.  I realize that most nonprofits go forward with it when they reach a point of pain (translation: they won’t get money if they don’t report outcomes).

During a break in Bob’s training, some of my CTK colleagues were discussing the likelihood that many nonprofit executives simply hate the concept of outcomes management.  Who wants to spend resources on it, if it subtracts from resources available for programmatic activities?  Who wants to risk finding out (or to risk having external stakeholders find out) that an organization’s programs are approximately as effective as doing nothing at all?  Very few – thus the need to find new motivations, such as the power to review progress and make corrections as we go.  I jokingly told my CTK colleagues, “the truth will make you free, but first it will make you miserable.”  Perhaps that’s more than a joke.

Outcomes measurement for nonprofits: Who does the analysis?

I invite you to participate in this survey, bearing in mind that it is for recreational purposes, and has no scientific value:

There are many reasons that this survey is of dubious value, for example:

  • No pilot testing has been done to ensure that the choices offered are both exhaustive and mutually exclusive.

The list could go on, but I’ll leave it at that.  Although most of my training is in qualitative social research, I have taken undergraduate and graduate level courses on quantitative research, and the points I made about what’s wrong with my survey are what I could pull out of memory without consulting a standard text on statistics.

In other words, when it comes to quantitative analysis, I know just enough to be dangerous.

Meanwhile, I worry about nonprofit organizations that are under pressure to collect, analyze, and report data on the outcomes of their programs.  There are a lot of fantastic executive directors, program managers, and database administrators out there – but it’s very rare for a nonprofit professional who falls into any of those three categories to also have solid skills in quantitative analysis and social research methods.  Nevertheless, I know of plenty of nonprofit organizations where programmatic outcomes measurement is done by an executive director, program manager, or database administrator whose skill set is very different from what the task demands.  In many cases, even if they come up with a report, the nonprofit staff members may not be aware that what they have done is present a lot of data without actually showing any causal relationship between the organization’s activities and the social good that it is in business to deliver.

Let’s not be too hasty in deprecating the efforts of these nonprofit professionals.  They are under a lot of pressure, especially from grantmaking foundations, to report on programmatic outcomes.  In many cases, they do the best they can to respond, even if they have neither the internal capacity to meet the task nor the money to hire a professional evaluator.

By the way, I was delighted to attend a gathering this fall at which I heard a highly regarded philanthropic professional ask a room full of foundation officers, “Are you requiring $50,000 worth of outcomes measurement for a $10,000 grant?”  It’s not the only question we need to ask, but it’s an extremely cogent one!

I’d love to see nonprofit professionals, philanthropists, and experts in quantitative analysis work together to address this challenge.

We should also be learning lessons from the online tools that have already been developed to match skilled individuals with nonprofit professionals who need help and advice from experts.  Examples of such tools include the “Research Matchmaker” and NPO Connect.

We can do better.  It’s going to take time, effort, money, creativity, and collaboration – but we can do better.
