After five very productive years at Tech Networks of Boston (TNB), I am now looking for my next professional challenge. I’m ready for a career shift! I’ve notified the leadership at TNB, so this is not a covert search.
If you know about any job opportunities at organizations that need someone with my skill set, I’d love to hear about them. In my next job, I’d like to focus on some or all of the following:
Weaving networks among nonprofit organizations in order to build collaboration, peer learning, and communities of practice.
Building the capacity of philanthropic and nonprofit organizations to achieve and document their desired outcomes.
Fostering equity, inclusion, social justice, and corporate social responsibility.
Aiding philanthropic and nonprofit organizations in seamlessly matching resources with needs.
Establishing best practices in the strategic use of information and communication technologies among mission-based organizations.
Facilitating candid dialogue and successful collaborations between grantmakers and grantees.
This is not actually a photo from the dialogue series. We refrained from taking photos, because we wanted to foster an atmosphere of candor and comfort as grantors and grantees engaged in conversation about a difficult topic. However, it is a favorite photo from another recent Tech Networks of Boston event.
Oh, my! It took Tech Networks of Boston almost two years to organize and implement a series of candid dialogues about data and evaluation for grantors and nonprofit grantees, and now it’s complete. The process was a collaboration in itself, with TSNE MissionWorks and Essential Partners serving as co-hosts. An advisory group and a planning group gave crucial input about the strategy and tactics for the series.
What you see here are a few notes that reflect my individual experience. In this article, I am not speaking on behalf of any organization or individual.
June 2017: Let’s get oriented. What is the heart of the matter for grantors and grantees?
September 2017: You know, we really need to address the imbalance of power in the grantor/grantee relationship.
January 2018: Ok, can we agree on some best practices for how to address this as grantors and grantees? Why, yes. We can.
The plan is to make the recommendations that came out of the final dialogue publicly available online, to provide a starting point for a regional or even national conversation about data and evaluation.
Meanwhile, I’d like to offer my own recommendations. Mine are based on what I learned during the dialogue series, and also on untold numbers of public and private conversations on the topic.
Grant makers can help by:
Understanding that nonprofits perceive funders as having not just money but also much more power.
Asking nonprofits to define their goals, their desired outcomes, and their quantitative measures of success – rather than telling them what these should be.
Factoring in the nonprofit organization’s size, capacity, and budget – making sure that the demand for data and evaluation is commensurate.
Understanding the real cost in dollars to grantees of providing the data reporting and evaluation that funders request. These dollar amounts might cover staff time, technology, training, an external consultant, or even office supplies.
Providing financial support for any data or evaluation that the funder needs – especially if the nonprofit does not have an internal need for that data or evaluation. Items to support might include staff time, technology, training, or retaining an external consultant with the necessary skill set.
Putting an emphasis on listening.
Nonprofits can help by:
Engaging in a quantitative analysis of their operations and capacity, and sharing this information with funders.
Understanding that grant makers are motivated to see nonprofit grant recipients succeed.
Understanding that grant makers are often under pressure from donors and their boards to deliver a portfolio of outcomes.
Integrating the use of data and evaluation into most areas of operation – this means building skills in data and evaluation across the entire organization.
Gathering with other nonprofits that have similar desired outcomes and comparing notes on failures and best practices.
Fostering a data-friendly, continuous learning culture within nonprofit organizations.
Both groups can help by:
Engaging in self-scrutiny about how factors such as race and class affect how data is collected, categorized, analyzed, and reported.
Talking frankly about how power dynamics affect their relationships.
Engaging in ongoing dialogue that is facilitated by a third party who is experienced in creating a safe space.
Talking about and planning the evaluation process well before the grant begins.
Creating clear definitions of key terms pertaining to data and evaluation.
Making “I don’t know” an acceptable response to a question.
Measuring what you really value, rather than simply valuing what you can easily measure.
Working toward useful standards of measurement. Not all programs and outcomes are identical, but very few are entirely sui generis.
Sharing responsibility for building the relationship.
Speaking with each other on a regular basis.
Studying (and implementing) community-based participatory research methods.
And now, because I can insert a contact form here, I’m going to. Please feel free to let me know if you’re interested in being part of a regional or national conversation about how grantors and grantees can move forward and work constructively with data and evaluation.
One thing that is quite clear is that there is no need to create a new institution, or raise up a building with a splendid dome. (The Massachusetts Institute of Technology can rest easy, without fear of competition, or brand encroachment.) I believe that all of the necessary institutions exist already here in the Bay State. What is needed is a consortium that can knit them together for this purpose, some funding, and some candidates.
It’s a pipeline, or perhaps a career ladder that the consortium needs to build – not an edifice. Although I love the splendid domes of MIT, we can simply admire them, and hope that eventually some of the people who work and study under those domes will become part of the consortium.
An organization that is able to place, mentor, and coach candidates in entry-level data services positions at local nonprofit organizations. That’s TNB Labs. These entry-level workers will be known as “data support analysts,” or DSAs.
At the conclusion of the one- or two-year placement at a nonprofit organization, I think that any of the following outcomes would count as a win:
The host nonprofit hires the DSA (with a raise and a promotion) as a long-term regular employee.
The DSA lands a job providing data services at another nonprofit organization.
The DSA lands a job in a different field or sector that is congruent with his/her/their career aspirations.
The DSA is able to apply to a four-year degree program, transferring the course credits, on-the-job experience, two-year degrees, or certifications that he/she/they have earned.
The latter scenario – of advancing in higher education – brings us to the final category of allies needed for our consortium. The best example of this kind of ally is UMass-Boston, which has degree programs in related areas.
In addition, our consortium has a great ally in an individual UMass-Boston faculty member, Michael Johnson, whose research focus is decision science for community-based organizations. He has expressed a generous desire to be a mentor to community college students in this career ladder, and to encourage those who are qualified to apply to be Ph.D. students in this field.
So here we are. The need is there for data service providers who can serve the missions, programs, and operations of nonprofit organizations. If we can weave all these allies together into a network, we can meet these needs.
All that we require is:
Allies who are ready, willing, and able to pitch in.
Public awareness that this career ladder is available.
Funding to assist candidates who cannot afford tuition for college coursework and other forms of training.
Funding to assist nonprofits that would like to host a data support analyst from this program, but lack the (modest) funding to support one.
Two poster boys of nonprofit data sanity: Bob Penna (l) and Steve Pratt (r).
Now that TNB Labs is up and running, we’re receiving a lot of requests from nonprofit organizations that are perplexed about how to manage the data that they have, before they plunge any further into data analytics or think about acquiring a new data analysis tool. This has given me a lot of opportunities to reflect on how difficult it can be for people whose expertise lies elsewhere to orient themselves to data governance.
Steve Pratt’s blog article “Drowning in Data?” has been a huge inspiration. In it, he explains the importance of data inventories, and offers to send the Root Cause template to anyone who requests it. I highly recommend that you send an email to firstname.lastname@example.org and ask for a copy.
At the same time, as I went over Steve’s template, I had a nagging feeling that we needed something even more elementary. Remembering my friend Bob Penna’s exhortation of a few months before, about asking “who, when, where, what, how, and why,” I quickly drafted a data checklist that focused on those basic questions. When I sent it to Bob, he very quickly returned it with some excellent enhancements; the most brilliant one was to start the checklist with the question “WHY?” As he very sensibly pointed out, if you can’t come up with a good reason why you are collecting, analyzing, reporting, and archiving information, you might as well stop there. In the absence of a persuasive answer to the question “why?” there’s no need to ask “who, when, where, what, and how”; in fact, there’s no reason to collect the information at all.
With that wisdom in mind, I have tweaked the draft of the data checklist, and herewith present it to you for feedback. This version is the result of a Penna/Finn collaboration:
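For readers who keep their data inventory in a spreadsheet or a script rather than on paper, here is one rough way the idea could be sketched in code. This is my own hypothetical illustration, not the actual Penna/Finn checklist; the class and field names are invented, but the logic follows the principle above: “why” comes first, and a dataset with no persuasive answer to “why” is dropped before anyone bothers with the other five questions.

```python
from dataclasses import dataclass

# Hypothetical sketch of one checklist entry per dataset. Only "name" and
# "why" are required, because "why" is the gatekeeper question; the other
# five questions are worth answering only if "why" has a good answer.
@dataclass
class DataChecklistEntry:
    name: str
    why: str        # reason for collecting, analyzing, reporting, archiving
    who: str = ""   # who collects, enters, and uses the data
    when: str = ""  # when it is gathered and how often it is reviewed
    where: str = "" # where it is stored and backed up
    what: str = ""  # what fields and formats it contains
    how: str = ""   # how it is collected, validated, and reported

    def worth_keeping(self) -> bool:
        """If there is no persuasive answer to 'why', stop here."""
        return bool(self.why.strip())

# Example inventory: one dataset with a clear purpose, one without.
inventory = [
    DataChecklistEntry(name="Volunteer sign-ins",
                       why="Report service hours to our funder"),
    DataChecklistEntry(name="Legacy mailing list", why=""),
]

keep = [entry.name for entry in inventory if entry.worth_keeping()]
print(keep)  # ['Volunteer sign-ins']
```

The point of the sketch is simply that the “why” test filters the inventory before any of the more detailed questions are asked.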
In the past few years, it’s become fashionable to talk about the importance of a “theory of change” for nonprofits. This is merely a way of underlining the importance of making an explicit statement about the causal relationship between what a nonprofit organization does and the impact that it has promised to deliver. I applaud this! It’s crucial to say, “if we take all of the following resources, and do all of the following actions, then we will get all of the following results.” An organization that lacks the capacity to marshal those resources and take those actions needs to reconsider, because it is on track to fail. If its capacity is not aligned with its commitment, it should acquire the resources or change its commitment to results. Of course, in some cases, it will merely need to revise its theory of change. In any case, it will have to work backward from its mission, and understand how each component contributes to achieving it.
This kind of thinking has led to a lot of conversations (and a lot of anxiety) in the nonprofit sector about performance measurement, outcomes management, evaluation, and impact assessment.
I’d love to have some of this conversation focus on the information and communication technologies that nonprofit organizations are using. In other words, it’s time to be explicit about a theory of change that explains in detail how every component of the technology an organization uses contributes (directly or indirectly) to its ability to deliver a specific kind of social, cultural, or environmental impact.
Likewise, I’d love to have the conversation address the ways in which the efforts of a nonprofit organization’s performance measurement, outcomes management, evaluation, or impact assessment team contribute (directly or indirectly) to the organization’s ability to deliver the kind of impact that it promised its stakeholders.