
How grant makers and nonprofit grant recipients can do great things together with data and evaluation

This is not actually a photo from the dialogue series. We refrained from taking photos, because we wanted to foster an atmosphere of candor and comfort as grantors and grantees engaged in conversation about a difficult topic. However, it is a favorite photo from another recent Tech Networks of Boston event.

 

Oh, my!  It took Tech Networks of Boston almost two years to organize and implement a series of candid dialogues about data and evaluation for grantors and nonprofit grantees, and now it’s complete.  The process was a collaboration in itself, with TSNE MissionWorks and Essential Partners serving as co-hosts. An advisory group and a planning group gave crucial input on the strategy and tactics for the series.

What you see here are a few notes that reflect my individual experience. In this article, I am not speaking on behalf of any organization or individual.

As far as I can ascertain, this series was the first in which grant makers and nonprofit grant recipients came together in equal numbers and met as peers for reflective structured dialogue. World-class facilitation and guidance were provided by Essential Partners, with the revered Dave Joseph serving as facilitator-in-chief.

Here’s how I’d characterize the three sessions:

  • June 2017:  Let’s get oriented. What is the heart of the matter for grantors and grantees?
  • September 2017:  You know, we really need to address the imbalance of power in the grantor/grantee relationship.
  • January 2018:  OK, can we agree on some best practices for how to address this as grantors and grantees? Why, yes. We can.

The plan is to make the recommendations that came out of the final dialogue publicly available online, to provide a starting point for a regional or even national conversation about data and evaluation.

Meanwhile, I’d like to offer my own recommendations.  Mine are based on what I learned during the dialogue series, and also on untold numbers of public and private conversations on the topic.

 

_____________________________________________________________________________

 

My Recommendations

 

Funders can help by: 

  • Understanding that nonprofits perceive funders as having not just money but also much more power.
  • Asking nonprofits to define their goals, their desired outcomes, and their quantitative measures of success – rather than telling them what these should be.
  • Factoring in the nonprofit organization’s size, capacity, and budget – making sure that the demand for data and evaluation is commensurate.
  • Understanding the real dollar cost to grantees of providing the data reporting and evaluation that funders request.  These costs might cover staff time, technology, training, an external consultant, or even office supplies.
  • Providing financial support for any data or evaluation that the funder needs – especially if the nonprofit does not have an internal need for that data or evaluation.  Items to support might include staff time, technology, training, or retaining an external consultant with the necessary skill set.
  • Putting an emphasis on listening.

 

Nonprofits can help by: 

  • Engaging in a quantitative analysis of their operations and capacity, and sharing this information with funders.
  • Understanding that grant makers are motivated to see nonprofit grant recipients succeed.
  • Understanding that grant makers are often under pressure from donors and their boards to deliver a portfolio of outcomes.
  • Integrating the use of data and evaluation into most areas of operation – this means building skills in data and evaluation across the entire organization.
  • Gathering with other nonprofits that have similar desired outcomes and comparing notes on failures and best practices.
  • Fostering a data-friendly, continuous-learning culture within their organizations.

 

Both groups can help by: 

  • Engaging in self-scrutiny about how factors such as race and class affect how data is collected, categorized, analyzed, and reported.
  • Talking frankly about how power dynamics affect their relationships.
  • Engaging in ongoing dialogue that is facilitated by a third party who is experienced in creating a safe space.
  • Talking about and planning the evaluation process well before the grant begins.
  • Creating clear definitions of key terms pertaining to data and evaluation.
  • Making “I don’t know” an acceptable response to a question.
  • Measuring what you really value, rather than simply valuing what you can easily measure.
  • Working toward useful standards of measurement.  Not all programs and outcomes are identical, but very few are entirely sui generis.
  • Sharing responsibility for building the relationship.
  • Speaking with each other on a regular basis.
  • Studying (and implementing) community-based participatory research methods.

 

_____________________________________________________________________________

 


 

Please feel free to let me know if you’re interested in being part of a regional or national conversation about how grantors and grantees can move forward and work constructively with data and evaluation.

 

 

_____________________________________________________________________________

 

Creative Commons License
Some rights reserved. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

 

_____________________________________________________________________________

NPtech Labor Market Alert: The Big Job Title of 2015 Will Be “Data Analyst”

 

Disclaimer: This illustration is for entertainment purposes only. I am not a professional data analyst.

 

My training, such as it is, is heavily skewed toward qualitative methods; at the same time, I have a lot of respect for quantitative analysis.  However, my favorite form of research consists of staring off into space and letting ideas float into my head.  Sometimes I validate my findings by engaging in conversations in which I talk louder and louder until everyone agrees that I’m right.  It seems to work.

Lately, I’ve had a little time to stare off into space and let ideas float into my head; by this, I mean that I traveled to Austin, Texas for the Nonprofit Technology Conference (also known as #15ntc) and had some down time on the plane.  By the time I arrived in Austin, I had become convinced that “Data Analyst” would be this year’s standout job title in the field of nptech.  At the conference, I was able to confirm this – by which I mean that I didn’t meet anyone there who talks more loudly than I do.

What are the takeaways?  It depends on who you are:

  • For data analysts who are now working in the field of nonprofit technology:  prepare to be appreciated.
  • For data analysts now working in other sectors: think about whether this is a good moment to make a career shift in which you use your geek powers for good. But make sure you know what you’re getting into.
  • For nonprofit executives: don’t kid yourselves. Brilliant data analysts who want to work in the nonprofit sector aren’t going to be attracted by job announcements that indicate that the successful candidate will also be responsible for network administration, hands-on tech support, social media, and web development.
  • For workforce development professionals:  this is your cue. It’s time to put together a program for training computer science graduates to be nonprofit data geeks.
  • For donors, grantmakers, and other funders:  if you want reports from nonprofits that are based on reliable and valid methods of analysis, then you will need to underwrite data analysts at nonprofits.  That means money for training, for salaries, and for appropriate technology.

If you don’t agree with my findings, please take a moment to share yours in the comments section.

How grantmakers and nonprofits use information about outcomes

State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector, a report by the Innovation Network

I’m sitting here, reflecting on the Innovation Network’s “State of Evaluation 2012” report.

I encourage you to download it and read it for yourself; start with pages 14 and 15. These two pages display infographics that summarize what funders (also known as “grantors,” or if you’re Bob Penna, as “investors”) and nonprofits (also known as “grantees”) are reporting about why they do evaluation and what they are evaluating.

Regardless of whether you call it evaluation, impact assessment, outcomes management, performance measurement, or research – it’s really, really difficult to ascertain whether a mission-based organization is delivering the specific, positive, and sustainable change that it promises to its stakeholders. Many organizations do an excellent job at tracking outputs, but falter when it comes to managing outcomes. That’s in part because proving a causal relationship between what the nonprofit does and the specific goals that it promises to achieve is very costly in time, effort, expertise, and money.

But assuming that a mission-based organization is doing a rigorous evaluation, we still need to ask:  what is done with the findings, once the analysis is complete?

What the aforementioned infographics from “State of Evaluation 2012” tell me is that both grantors and grantees typically say that the most important thing they can do with their outcome findings is to report them to their respective boards of directors.  Considering the depth of the moral and legal responsibility vested in board members, this is a pretty decent priority.  But it’s unclear to me what those boards actually do with the information.  Do they use it to guide the policies and operations of their respective organizations?  If so, does anything change for the better?

If you have a firsthand answer to the question of how boards use this information, then please feel free to post a comment here.

What I learned about outcomes management from Robert Penna

Robert Penna

Yesterday, along with a number of colleagues and friends from Community TechKnowledge, I had the privilege of attending a training by Robert Penna, the author of The Nonprofit Outcomes Toolbox.

As you probably know, I’ve been on a tear about outcomes measurement for a few months now; the current level of obsession began when I attended NTEN’s Nonprofit Data Summit in Boston in September.  I thought that the presenters at the NTEN summit did a great job addressing some difficult issues – such as how to overcome internal resistance to collecting organizational data, and how to reframe Excel spreadsheets moldering away on file servers as archival data.  However, I worked myself into a tizzy, worrying that the day’s presentations made no reference to the history and literature of quantitative analysis and social research.  I could not see how nonprofit professionals would be able to find the time and resources to get up to speed on those topics.

Thanks to Bob Penna, I feel a lot better now.  In yesterday’s training, he showed me and the CTK team just how far you can go by stripping away what is superfluous and focusing on what it really takes to use the best outcomes tools for the job.  Never mind about graduate-level statistics! Managing outcomes may be very, very difficult because it requires major changes in organizational culture – let’s not kid ourselves about that.  However, it’s not going to take years out of each nonprofit professional’s life to develop the skill set.

Here are some other insights and highlights of the day:

  • Mia Erichson, CTK’s brilliant new marketing manager, pointed out that at least one of the outcomes tools that Bob showed us could be easily mapped to a “marketing funnel” model.  This opens possibilities for aligning a nonprofit’s programmatic strategy with its marcomm strategy.
  • The way to go is prospective outcomes tracking, with real-time updates allowing for course correction.  Purely retrospective outcomes assessment is not going to cut it.
  • There are several very strong outcomes tools, but they should be treated the way we treat a software suite that contains both gems and junk: use the best of breed to meet each need.
  • If we want to live in Bob Penna’s universe, we’re going to have to change our vocabulary.  It’s not “outcomes measurement” – it’s “outcomes management.” The terms “funder” and “grantmaker” are out – “investor” is in.

Even with these lessons learned, it’s not a Utopia out there waiting for nonprofits that become adept at outcomes management.  Not only is it difficult to shift to an organizational culture that fosters it, but we have to face continuing questions about how exactly the funders (oops! I should have said “investors”) use the data that they demand from nonprofit organizations.  (“Data” is of course a broad term, with connotations well beyond outcomes management.  But it’s somewhat fashionable these days for funders to take an interest in data about programmatic outcomes.)

We should be asking ourselves, first of all, whether the sole or primary motivation for outcomes management in nonprofits should be the demands of investors.  Secondly, we should be revisiting the Gilbert Center’s report, Does Evidence Matter to Grantmakers? Data, Logic, and the Lack Thereof in the Largest U.S. Foundations.  We need to know this.  Thirdly, we should be going in search of other motivations for introducing outcomes management.  I realize that most nonprofits go forward with it when they reach a point of pain (translation: they won’t get money if they don’t report outcomes).

During a break in Bob’s training, some of my CTK colleagues were discussing the likelihood that many nonprofit executives simply hate the concept of outcomes management.  Who wants to spend resources on it, if it subtracts from resources available for programmatic activities?  Who wants to risk finding out (or to risk having external stakeholders find out) that an organization’s programs are approximately as effective as doing nothing at all?  Very few – thus the need to find new motivations, such as the power to review progress and make corrections as we go.  I jokingly told my CTK colleagues, “the truth will make you free, but first it will make you miserable.”  Perhaps that’s more than a joke.
