Tag Archives: measurement

In search of my next vocation!

"Excelsior!" Cartoon by James Thurber

“Excelsior!” (Cartoon by James Thurber)

After five very productive years at Tech Networks of Boston (TNB), I am now looking for my next professional challenge. I’m ready for a career shift! I’ve notified the leadership at TNB, so this is not a covert search.

If you know about any job opportunities at organizations that need someone with my skill set, I’d love to hear about them. In my next job, I’d like to focus on some or all of the following:

  • Weaving networks among nonprofit organizations in order to build collaboration, peer learning, and communities of practice.
  • Building the capacity of philanthropic and nonprofit organizations to achieve and document their desired outcomes.
  • Fostering equity, inclusion, social justice, and corporate social responsibility.
  • Aiding philanthropic and nonprofit organizations in seamlessly matching resources with needs.
  • Establishing best practices in the strategic use of information and communication technologies among mission-based organizations.
  • Facilitating candid dialogue and successful collaborations between grantmakers and grantees.

I invite you to peruse my LinkedIn profile and my résumé, and to get in touch with me about any contacts or opportunities that you’d like to suggest.

Please help me find new ways to serve organizations and individuals who are working to make the world a better place!

Deborah Elizabeth Finn – résumé – June 2018

Every nonprofit needs a theory of change for its technology... and for its evaluation process

if then

I’ve spent a lot of my professional life (thus far) thinking about the missions of nonprofit organizations, and about information/communication technologies for nonprofits.

In the past few years, it’s become fashionable to talk about the importance of a “theory of change” for nonprofits.  This is merely a way of underlining the importance of making an explicit statement about the causal relationship between what a nonprofit organization does and the impact that it has promised to deliver.  I applaud this!  It’s crucial to say, “if we take all of the following resources, and do all of the following actions, then we will get all of the following results.”  An organization that lacks the capacity to marshal those resources and take those actions needs to reconsider, because it is on track to fail.  If its capacity is not aligned with its commitment, it should either acquire the resources or change its commitment to results.  Of course, in some cases, it will merely need to revise its theory of change.  In any case, it will have to work backward from its mission, and understand how each component contributes to achieving it.

This kind of thinking has led to a lot of conversations (and a lot of anxiety) in the nonprofit sector about performance measurement, outcomes management, evaluation, and impact assessment.

I’d love to have some of this conversation focus on the information/communication technologies that nonprofit organizations are using.  In other words, it’s time to be explicit about a theory of change that explains in detail how every component of the technology an organization uses contributes (directly or indirectly) to its ability to deliver a specific kind of social, cultural, or environmental impact.
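
Just to make this concrete, here is a minimal, purely hypothetical sketch (in Python; the component names and outcomes are invented, not drawn from any real organization) of what it looks like to state explicitly how each technology component is supposed to contribute to a promised outcome:

# A minimal, hypothetical technology theory of change: each technology
# component is linked to the contribution it makes and to the promised
# outcome that it supports.
technology_theory_of_change = {
    "case management database": {
        "contribution": "staff can track each client's progress over time",
        "supports_outcome": "clients achieve their individual service goals",
    },
    "donor CRM": {
        "contribution": "reliable donor records and timely acknowledgments",
        "supports_outcome": "sustained funding for programs",
    },
    "shared file server": {
        "contribution": "indirect: staff spend less time hunting for documents",
        "supports_outcome": "more staff hours available for direct service",
    },
}

# Any component that cannot be linked to a promised outcome is a
# candidate for rethinking, replacement, or retirement.
for component, link in technology_theory_of_change.items():
    print(f"{component} -> {link['supports_outcome']}")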

Likewise, I’d love to have the conversation address the ways in which the efforts of a nonprofit organization’s performance measurement, outcomes management, evaluation, or impact assessment team contribute (directly or indirectly) to its ability to deliver the kind of impact that it promised its stakeholders.

TNB Labs has launched, and its first priority is data services for nonprofits!

tnb labs logo july 2016

Greg Palmer and I are very pleased to announce the launch of TNB Labs, a company dedicated to providing nonprofit organizations with high-quality data services and program management.  TNB Labs works with organizations of all sizes to audit and assess data management methodology, develop and implement data standards, and provide structural oversight through data governance.

Through our partnership with Tech Networks of Boston, we have had a unique opportunity to listen to hundreds of stakeholders at TNB’s Roundtables, and that listening has uncovered a need to elevate the role of data in nonprofit organizations.  TNB Labs is focused on improving the capacity of nonprofit organizations at every stage of the outcomes management process.

The immediate priorities of TNB Labs are:

1) Providing Master Data Management (MDM) services to nonprofit organizations in support of their missions, focusing on data governance, data quality, data modeling, data visualizations, and program evaluation.

2) Providing workforce program management for Desktop Support Technicians (DST), Data Support Analysts (DSA), and entry-level Data Analytics/Data Evaluation professionals.

3) Managing the TNB Roundtable series, which is now jointly owned by Tech Networks of Boston and TNB Labs.

TNB Labs is led by Greg Palmer (chief executive officer), and Deborah Elizabeth Finn (chief strategic officer).  The other co-founders are Bob Master (former CEO of Commonwealth Care Alliance) and Susan Labandibar (founder of Tech Networks of Boston).

TNB Labs is here to solve your problems.  Please contact us with any questions and comments you have about TNB Labs, or to learn more about data management or program management services that might be helpful to your organization.

Best regards from Deborah and Greg

Greg Palmer
gpalmer@tnblabs.org
508.861.4535

Deborah Elizabeth Finn
definn@tnblabs.org
617.504.8188

TNB Labs, LLC
PO Box 2073
Framingham, MA 01703
www.tnblabs.org

Basic concepts in technology planning for nonprofits

501 Tech Club

I had a delightful time at last week’s meeting of the Boston 501 Tech Club.  The theme was technology planning (a topic close to my heart), and Gavin Murphy of Annkissam (a colleague, esteemed client, and friend) gave an outstanding overview that I recommend to any nonprofit professional who has mastered his/her own field and is ready to think about the big picture in technology for his/her organization.  Naturally, during the Q&A time after Gavin’s presentation, I did some nitpicking on the topic of metrics, but never mind.  What you see below is the complete set of Gavin’s notes for this presentation, with no editorial changes from yours truly.  Many thanks are due to Gavin for permission to post his notes!


Technology Planning
Presented at the Boston 501 Tech Club
Gavin Murphy
Chief Operating Officer
Annkissam

1. What is Tech Planning?

  • “Technology” can mean lots of things, from office wiring and networks to social networking and RFID chips.
  • Today we will focus on concepts of technology planning that should be universally applicable to whatever planning you need to do.
  • One key concept is recognizing that most decisions involve trade-offs; there is rarely a “right” option; rather, different options will present different trade-offs (upfront cost, ongoing cost, quality, time, other resources or risks, etc.).
  • At the end we’ll talk about some resources that are available for people that are interested in exploring more specific topics, and we’ll also have a short Q&A session.

2. Strategic Alignment

“Plans are worthless. Planning is essential.”  – Dwight D. Eisenhower, general and president (1890-1961)

  • Technology strategy (and planning) should support organization strategy.
  • Show of hands: how many people are part of an organization that has a strategy (and you know what it is, on some level)?
  • How many people’s organizations have a technology strategy (and you know what it is, on some level)?
  • If you don’t have an organizational strategy, that’s a bigger issue!  And, frankly, one that should be addressed first.

3. Why Plan?

“It is not the strongest of the species that survive, not the most intelligent, but the one most responsive to change.” – Charles Darwin, scientist

  • Planning will help you be more adaptable to change.
  • The act of planning will force you to think through the resources you have to commit to the process (both time and money) and the trade-offs that different options represent.
  • The executive leadership needs to be involved in the planning process to some degree, although the process can be managed by other staff or by someone from outside the organization.
  • Even if your plans change, the act of planning will get people engaged in the options and will help to avoid “shiny object syndrome”.  Ultimately, planning will help you respond to both expected and unexpected changes to your organization or environment.

4. Planning is a Process!

  • It’s not an event, or even a single project (although there could be a project to kick it off or reevaluate things).
  • Similarly, planning can produce documents that are quite helpful, but only to the extent those documents are used to guide the decisions of the organization.
  • It’s important to budget time and resources to technology planning and implementation, just as you would dedicate ongoing resources to other critical aspects of your organization.
  • One potential trap is committing to an ongoing technology obligation without anticipating the resources it will take to maintain; for example, maintaining your own servers or establishing a social media presence.
  • It’s possible that technology is not a critical part of your organization, and that’s fine too as long as you are engaging in the process of evaluating tradeoffs to come to that conclusion.

5. Importance of metrics and measurements

  • Once you have decided on a strategy, the next thing to think about is how to measure your progress.
  • Metrics are one way to make sure your technology strategy is closely aligned to your organizational strategy.
  • For example, if data security is a concern, you might track the percentage of your computers that have antivirus (AV) software or disk encryption installed; if outreach is an organizational imperative, then perhaps Twitter followers or Facebook friends might be a better metric (a minimal sketch of computing such a metric appears after this list).
  • Metrics should be as quantitative as possible, to minimize the risk that people will make subjective judgments and obscure the true picture of how things are going.
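
Here is a minimal sketch (in Python, with an invented inventory and an invented 90% target) of how the disk-encryption metric mentioned above might be computed and checked against a goal:

# Hypothetical inventory: one record per computer, flagging whether
# disk encryption is installed. The data and the 90% target are
# invented for illustration only.
computers = [
    {"name": "front-desk-1", "disk_encryption": True},
    {"name": "front-desk-2", "disk_encryption": False},
    {"name": "development-laptop", "disk_encryption": True},
    {"name": "finance-laptop", "disk_encryption": True},
]

encrypted = sum(1 for c in computers if c["disk_encryption"])
percent_encrypted = 100 * encrypted / len(computers)

target = 90  # goal for this time period, as a percentage

print(f"Disk encryption coverage: {percent_encrypted:.0f}% (target: {target}%)")
if percent_encrypted >= target:
    print("Goal met; consider raising the target for the next period.")
else:
    print("Goal not met; adjust the plan, the goal, or the resources.")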

6. Need to set goals and track success (or failure)

  • Once you have chosen your metrics, you should set goals for those metrics and track your progress over a preset time period, which should be long enough to judge results but short enough to preserve momentum.
  • If you succeed in achieving your goals–great! Adjust your goals for the next time period to be a little more challenging and keep trying to meet them. It’s important to avoid “autopilot” goals that are too easy to meet and never adjust up.
  • If you don’t meet your goals, that’s ok too. Now you have valuable information and you can either adjust your plan, your metrics, your goals, or the resources you are applying to technology. After a few cycles you should be able to find the right balance and establish a pattern of success.

7. Things went wrong?!

“Everyone has a plan – until they get punched in the face.” – Mike Tyson, Boxer

  • If things go wrong, that’s ok! That’s all part of the process.
  • The benefit of having a plan is that at least you will know when things are going wrong, which is always preferable (even if nothing can be done about it in the short run) to discovering only in hindsight that everything has already gone wrong and things are now in crisis.

8. Resources

“Those who plan do better than those who do not plan even though they rarely stick to their plan.” – Winston Churchill, British Prime Minister

“We count our successes in lives”

Brent James

Brent James is one of my new heroes.  He’s a physician, a researcher, and the chief quality officer of Intermountain Healthcare’s Institute for Health Care Delivery Research.

We had a very inspiring telephone conversation this afternoon, about whether the lessons learned from evidence-based medicine could be applied to nonprofits that are seeking to manage their outcomes.  We also swapped some stories and jokes about the ongoing struggle to document a causal relationship between what a health care organization (or a social service agency, or an arts group, or an environmental coalition, for that matter) does and what the organization’s stated aims are.  In fact, documenting that an organization is doing more good than harm, and less harm than doing nothing at all, continues to be a perplexing problem.  The truth may be less than obvious – in fact, it may be completely counter-intuitive.

In this phone conversation, we also waded into deep epistemological waters, reflecting on how we know we have succeeded, and also on the disturbing gap between efficacy and effectiveness.

It’s not merely a philosophical challenge, but a political one, to understand where the power lies to define success and to set the standards of proof.

I doubt that this is what William James (no relation to Brent, as far as I know) had in mind when he referred to success as “the bitch-goddess,” but there’s no doubt that defining, measuring, and reporting on one’s programmatic success is a bitch for any nonprofit professional with intellectual and professional integrity.  It’s both difficult and urgent.

What particularly struck me during my conversation with Brent was his remark about Intermountain Healthcare:

“We count our successes in lives.”

On the surface, that approach to counting successes seems simple and dramatic.  The lives of patients are on the line.  They either live or die, with the help of Intermountain Healthcare.  But it’s really a very intricate question, once we start asking whether Intermountain’s contribution is a positive one, enabling the patients to live the lives and die the deaths that are congruent with their wishes and values.

These questions are very poignant for me, and not just because I’m a cancer patient myself, and not just because yesterday I attended the funeral of a revered colleague and friend who died very unexpectedly.  These questions hit me where I live professionally as well, because earlier this week, I met with the staff of a fantastic nonprofit that is striving to do programmatic outcomes measurement, and is faced with questions about how to define success in a way that can be empirically confirmed or disconfirmed.  Their mission states that they will help their clients excel in a specific industry and in their personal lives.  They have a coherent theory of change, and virtually all of their criteria of professional and personal success are quantifiable.  Their goals are bold but not vague. (This is a dream organization for anyone interested in outcomes management, not to mention that the staff members are smart and charming.)  However, it’s not entirely clear yet whether the goals that add up to success for each client are determined solely by the staff, or by the client, or by some combination thereof.  I see it as a huge issue, not just on an operational level, but on a philosophical one; it’s the difference between self-determination and paternalism.  I applaud this organization’s staff for their willingness to explore the question.

When Brent talked about counting successes in terms of lives, I thought about this nonprofit organization, which defines its mission in terms of professional and personal success for its clients.  The staff members of that organization, like so many nonprofit professionals, are ultimately counting their successes in lives, though perhaps not as obviously as health care providers do.  Surgeons receive high pay and prestige for keeping cancer patients alive and well – for the most part, they fully deserve it.  But let’s also count the successes of the organization that helps a substantial number of people win jobs that offer a living wage and health insurance, along with other benefits such as G.E.D.s, citizenship, proficiency in English, home ownership, paid vacations, and college educations for the workers’ children. Nonprofit professionals who can deliver that are also my heroes, right up there with Brent James.  While we’re holding them to high standards of proof of success, I hope that we can find a way to offer them the high pay and prestige that we already grant to the medical profession.

Measuring what we value, and presenting the findings more interactively than ever

Boston Indicators Project logo

First of all, a personal resolution: I will not whine.

The Boston Indicators Project, which is an initiative by the Boston Foundation and the Metropolitan Area Planning Council, relaunched its web site in November, and I was not invited to the event.  I will subdue my inclination to pout, and move on to praising the new web site.

Fortunately, a fellow Boston Technobabe, Kat Friedrich, did attend; you therefore have the option of skipping my blog article and going straight to hers.  Kat’s focus is on “How Nonprofits Can Earn News Coverage Using Data Visualization,” which is certainly a great take-away for mission-based organizations.

My interest is slightly different.  Here are a few things that are especially striking:

The new Boston Indicators web site is a great example of nonprofit technology in the service of a mission that is much greater than any one community foundation or specific region.  I happen to live in the greater Boston area, so I’ve been more easily drawn to it than I would be if I were living elsewhere.  But it’s an example to any individual or organization of the power of universal access to significant data, and of the importance of analyzing it in ways that benefit the community.

How grantmakers and how nonprofits use information about outcomes

State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector, a report by the Innovation Network

I’m sitting here, reflecting on the Innovation Network’s “State of Evaluation 2012” report.

I encourage you to download it and read it for yourself; start with pages 14 and 15. These two pages display infographics that summarize what funders (also known as “grantors,” or if you’re Bob Penna, as “investors”) and nonprofits (also known as “grantees”) are reporting about why they do evaluation and what they are evaluating.

Regardless of whether you call it evaluation, impact assessment, outcomes management, performance measurement, or research – it’s really, really difficult to ascertain whether a mission-based organization is delivering the specific, positive, and sustainable change that it promises to its stakeholders. Many organizations do an excellent job at tracking outputs, but falter when it comes to managing outcomes. That’s in part because proving a causal relationship between what the nonprofit does and the specific goals that it promises to achieve is very costly in time, effort, expertise, and money.

But assuming that a mission-based organization is doing a rigorous evaluation, we still need to ask:  what is done with the findings, once the analysis is complete?

What the aforementioned infographics from the “State of Evaluation 2012” tell me is that both grantors and grantees typically say that the most important thing they can do with their outcome findings is to report them to their respective boards of directors.  Considering the depth of the moral and legal responsibility that is vested in board members, this is a pretty decent priority.  But it’s unclear to me what those boards actually do with the information.  Do they use it to guide the policies and operations of their respective organizations?  If so, does anything change for the better?

If you have an answer to the question of how boards use this information, based on firsthand experience, then please feel free to post a comment here.
