Tag Archives: research

The Massachusetts Institute of Nonprofit Technology: Let’s Do This!

We need a Massachusetts Institute of Nonprofit Technology, and I can tell you what degree program we need to establish first:  Bachelor of Nonprofit Data.

The inspiration for this comes from many conversations with many people, but I’d especially like to credit Susan Labandibar, Julia Gittleman, and Laura Beals for pointing out, in their different ways, that one of the most pressing real-life challenges in nonprofit technology today is finding people who can bridge between the outcomes / impact assessment / evaluation / research team (on one hand) and the information systems team (on the other hand) at a nonprofit organization.

Not that I’m a professional full-time data analyst myself, but if I were, I’d find the numbers and start doing the math, as sketched below:

  • How many brilliant computer scientists are graduating right here in Massachusetts every year from our best high schools, colleges, and universities?
  • Of those graduates, what percentage have strong skills in database design, database development, database management, or data analysis?
  • Of those who have strong data skills, what percentage would be eager to use their geek skills for good, if they were offered an attractive career ladder?

That’s our applicant pool for the Massachusetts Institute of Nonprofit Technology.  (Or MINT, if you prefer.)
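For the sake of illustration, here is that funnel arithmetic as a minimal Python sketch.  Every figure in it is a placeholder that I invented for this example; the real inputs would have to come from actual graduation and labor-market data:

```python
# A back-of-the-envelope estimate of the MINT applicant pool.
# Every number below is a made-up placeholder, NOT a real statistic;
# swap in real figures if you have them.

grads_per_year = 10_000        # hypothetical: CS graduates in Massachusetts per year
pct_strong_data_skills = 0.30  # hypothetical: share with strong database/analysis skills
pct_geeks_for_good = 0.10      # hypothetical: share eager to use their geek skills for good

applicant_pool = grads_per_year * pct_strong_data_skills * pct_geeks_for_good
print(f"Estimated applicant pool: {applicant_pool:,.0f} graduates per year")  # prints 300
```

Even with deliberately modest percentages, this hypothetical pipeline yields a few hundred candidates a year.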

Now, let’s figure out the absolute minimum of additional knowledge that these computer science graduates would need in order to be the kind of data analysts who could bridge between the outcomes / impact assessment / evaluation / research team and the information systems team at a nonprofit:

  • Outcomes measurement
  • Outcomes management
  • Impact assessment
  • Evaluation
  • Social research methods
  • Knowledge management
  • Organizational cultures of nonprofits
  • Nonprofit operations
  • Organizational cultures of philanthropic foundations

That’s our basic curriculum.

If we want to expand the curriculum beyond the basics, we can add these elective subjects:

  • Nonprofit budgeting
  • Group dynamics
  • Ethics
  • Etiquette
  • Negotiation
  • Project management
  • Appreciative inquiry
  • Meeting facilitation

All of these electives would pave the way for other degree programs, in which they would also be extremely useful:

  • Bachelor of Nonprofit Systems Engineering
  • Bachelor of Nonprofit Web Development
  • Bachelor of Nonprofit Help Desk Support
  • Bachelor of Nonprofit Hands On Tech Support
  • Bachelor of Nonprofit Social Media

I already have my eye on some great local colleagues who could be the faculty for the Bachelor of Nonprofit Data program.  In addition to Susan, Julia, and Laura, I’d want to recruit these folks:

Please note that three members of the TNB team top the list of potential faculty members.  Why?  Because I work there, and because TNB has set a Big Hairy Audacious Goal of developing the careers of 1,000 technology professionals. This undertaking would be very congruent with its vision!

However, setting up the Massachusetts Institute of Nonprofit Technology must be a collaborative effort.  It will take a strong network of colleagues and friends to make this happen.

Do you think that this is needed?  Do you think my plan needs a lot of work?  Do you have any ideas or resources that you’d like to suggest?  Please feel free to use the comments section here to share your thoughts.

NPtech Labor Market Alert: The Big Job Title of 2015 Will Be “Data Analyst”

Disclaimer: This illustration is for entertainment purposes only. I am not a professional data analyst.

My training, such as it is, is heavily skewed toward qualitative methods; at the same time, I have a lot of respect for quantitative analysis.  However, my favorite form of research consists of staring off into space and letting ideas float into my head.  Sometimes I validate my findings by engaging in conversations in which I talk louder and louder until everyone agrees that I’m right.  It seems to work.

Lately, I’ve had a little time to stare off into space and let ideas float into my head; by this, I mean that I traveled to Austin, Texas for the Nonprofit Technology Conference (also known as #15ntc) and had some down time on the plane.  By the time I arrived in Austin, I had become convinced that “Data Analyst” would be this year’s standout job title in the field of nptech.  At the conference, I was able to confirm this – by which I mean that I didn’t meet anyone there who talks more loudly than I do.

What are the takeaways?  It depends on who you are:

  • For data analysts who are now working in the field of nonprofit technology:  prepare to be appreciated.
  • For data analysts now working in other sectors: think about whether this is a good moment to make a career shift in which you use your geek powers for good. But make sure you know what you’re getting into.
  • For nonprofit executives: don’t kid yourselves. Brilliant data analysts who want to work in the nonprofit sector aren’t going to be attracted by job announcements that indicate that the successful candidate will also be responsible for network administration, hands-on tech support, social media, and web development.
  • For workforce development professionals:  this is your cue. It’s time to put together a program for training computer science graduates to be nonprofit data geeks.
  • For donors, grantmakers, and other funders:  if you want reports from nonprofits that are based on reliable and valid methods of analysis, then you will need to underwrite data analysts at nonprofits.  That means money for training, for salaries, and for appropriate technology.

If you don’t agree with my findings, please take a moment to share yours in the comments section.

“Accidental Evaluator” is the new “Accidental Techie.” I’m just saying.

Laura Beals

Laura Beals, who is director of evaluation at Jewish Family and Children’s Services of Boston, published a great article on the NTEN blog earlier this month, called “Are You an ‘Accidental Evaluator?’ “

I think that this is a great question to ask, because many nonprofit professionals currently managing program evaluation within small nonprofits are indeed coming to the task with less preparation than they would like.  Perhaps they are program directors, or grant writers, or chief financial officers, or database administrators.  And now the pressure is on them to come up with numbers showing that their organizations are actually creating the positive change in the world that they have promised to deliver.

In fact, many of today’s accidental evaluators at nonprofits are in the same position that accidental techies were ten or fifteen years ago.

I respectfully disagree with those of my esteemed colleagues who want to help nonprofit professionals by reassuring them that they don’t have to meet the standards of academic peer-reviewed journals when they use data to tell their stories.  While it’s true that the level of rigor required for nonprofit programmatic evaluation is much lower, it’s not enough to point this out and encourage nonprofit professionals to relax.  Those nonprofit professionals are running organizations with a special legal status that makes them answerable to the public and responsible for contributing to the common good.  This is a serious ethical obligation.

From my point of view, those of us who understand the importance of evaluation in the nonprofit sector should be working to deliver appropriate forms of professional development to “accidental evaluators,” just as NTEN has labored mightily to deliver professional development to “accidental techies.”

In fact, NTEN itself is in a very good position to assist “accidental evaluators,” because many technology topics are intimately tied up with nonprofit evaluation, such as database development, data integration, and data visualization. Indeed, if you look at some of the companion articles on the NTEN blog, you’ll see that this effort is already underway.

I’m pleased to say that here in Boston we’re actively addressing this.  For example, Laura and her wonderful JF&CS colleague Noah Schectman recently led a meeting of local nonprofit professionals who are seeking to improve their skills in bridging between evaluation and technology.  A pivotal moment at this session came when the executive director of a tiny nonprofit raised her hand and asked Noah, “Will you be my best friend?”  Noah’s face lit up, and he told her that he would.  That’s the kind of reassurance that we should be offering nonprofit professionals who feel overwhelmed; we should be telling them that support and training are on the way.


“Power corrupts. PowerPoint corrupts absolutely.” (Redux)

A slide from the PowerPoint version of Abraham Lincoln's Gettysburg Address

This is another article, salvaged with help from the Wayback Machine, from my now-defunct first blog. I think that the points I made then are as valid in 2013 as they were in 2005.  What do you think?

Mon 14 Feb 2005 06:41 AM EST

Most days of the week, I tend to think of information technology as morally neutral.  It isn’t inherently good or evil; the applications of a technology are good or evil.

But I do find some forms of information technology irritating or counter-productive – especially as they are often used in the nonprofit/philanthropic sector.

PowerPoint happens to be in that category.

I came to this conclusion through my favorite research method.  (I.e., staring off into space for about half an hour.)  During this strenuous research, I asked myself two questions:

  1. When have I enjoyed giving a presentation based on PowerPoint?
  2. When have I enjoyed or learned a lot from someone else’s PowerPoint presentation?

Although I try to avoid giving PowerPoint presentations these days, I had no trouble answering Question #1 on the basis of previous experience.  I almost always liked it.  It’s great to have my talking points, my graphic displays, and my annotations packaged in one document.  Assuming that there’s no equipment failure on the part of the projector, the screen, the computer, or the storage medium that holds the PowerPoint document, it’s very convenient – although it’s not very safe to assume that none of these will fail.

In short, PowerPoint is designed to make presenters reasonably happy.  (Except in cases of equipment failure.)

The answer to Question #2 is a little more difficult.  I can be an exacting judge of how information is presented, and of whether the presenter is sensitive to the convenience and learning styles of the audience.

Perhaps the presenter put too many points on each slide, or too few.  Perhaps I was bored, looking at round after round of bulleted text, when graphic displays would have told the story more effectively.  Perhaps I wondered why the presenter expected me to copy the main points down in my notebook, when he/she knew all along what they were going to be.  Perhaps the repeated words, “next slide, please,” spoken by the presenter to his/her assistant seemed to take on more weight through sheer repetition than the content under consideration.  Perhaps there were too many slides for the time allotted, or they were not arranged in a sequence that made it easy to re-visit specific points during the question and answer period.

In short, PowerPoint as a medium of presentation does not tend to win friends and influence people.  (Of course, the best designed PowerPoint presentations succeed spectacularly, but the likelihood of creating or viewing one is fairly low.)

However, all is not lost.  If you have struggled to attain some high-level PowerPoint skills, and your role in a nonprofit/philanthropic organization calls for you to make frequent presentations, I can offer you advice in the form of the following three-point plan:

  1. Knock yourself out.  Create the PowerPoint presentation of your dreams.  Include all the bells and whistles.  Be sure to write up full annotations for each slide.
  2. Print out this incredible PowerPoint presentation in “handout” format, and give a paper copy to each person at the beginning of your talk.  As a bonus, you can also tell your audience where they can view or download it on the web.
  3. Cull out all but five or six slides for each hour of your planned presentation.  These should include only graphics that must be seen to be believed, and text that is more effective when read silently than when spoken.  This severely pared-down version will be the PowerPoint document that you will actually use during your presentation.

I realize that this will probably not be welcome advice, but the interests of your organization will undoubtedly dictate that you deploy a PowerPoint strategy that will, at the very least, not alienate the audiences at your presentations.

If you have any lingering hopes that PowerPoint is the best tool for engaging stakeholders in your mission, my final advice to you is to review the PowerPoint version of Abraham Lincoln’s Gettysburg Address.

A note on the title of this article:

I wish I had invented this aphorism, but I didn’t.

In 1887, John Dalberg-Acton (1st Baron Acton) wrote, “Power tends to corrupt and absolute power corrupts absolutely.”

In 2003, Edward Tufte wrote, “Power corrupts.  PowerPoint corrupts absolutely.”

“We count our successes in lives”

Brent James

Brent James is one of my new heroes.  He’s a physician, a researcher, and the chief quality officer of Intermountain Healthcare’s Institute for Health Care Delivery Research.

We had a very inspiring telephone conversation this afternoon, about whether the lessons learned from evidence-based medicine could be applied to nonprofits that are seeking to manage their outcomes.  We also swapped some stories and jokes about the ongoing struggle to document a causal relationship between what a health care organization (or a social service agency, or an arts group, or an environmental coalition, for that matter) does and the outcomes its stated aims promise.  In fact, documenting that an organization is doing more good than harm, and less harm than doing nothing at all, continues to be a perplexing problem.  The truth may be less than obvious – in fact, it may be completely counter-intuitive.

In this phone conversation, we also waded into deep epistemological waters, reflecting on how we know we have succeeded, and also on the disturbing gap between efficacy and effectiveness.

It’s not merely a philosophical challenge, but a political one, to understand where the power lies to define success and to set the standards of proof.

I doubt that this is what William James (no relation to Brent, as far as I know) had in mind when he referred to success as “the bitch-goddess,” but there’s no doubt that defining, measuring, and reporting on one’s programmatic success is a bitch for any nonprofit professional with intellectual and professional integrity.  It’s both difficult and urgent.

What particularly struck me during my conversation with Brent was his remark about Intermountain Healthcare:

“We count our successes in lives.”

On the surface, that approach to counting successes seems simple and dramatic.  The lives of patients are on the line.  They either live or die, with the help of Intermountain Healthcare.  But it’s really a very intricate question, once we start asking whether Intermountain’s contribution is a positive one, enabling the patients to live the lives and die the deaths that are congruent with their wishes and values.

These questions are very poignant for me, and not just because I’m a cancer patient myself, and not just because yesterday I attended the funeral of a revered colleague and friend who died very unexpectedly.  These questions hit me where I live professionally as well, because earlier this week, I met with the staff of a fantastic nonprofit that is striving to do programmatic outcomes measurement, and is faced with questions about how to define success in a way that can be empirically confirmed or disconfirmed.  Their mission states that they will help their clients excel in a specific industry and in their personal lives.  They have a coherent theory of change, and virtually all of their criteria of professional and personal success are quantifiable.  Their goals are bold but not vague. (This is a dream organization for anyone interested in outcomes management, not to mention that the staff members are smart and charming.)  However, it’s not entirely clear yet whether the goals that add up to success for each client are determined solely by the staff or by the client or some combination thereof.  I see it as a huge issue, not just on an operational level, but on a philosophical one; it’s the difference between self-determination and paternalism.  I applaud this organization’s staff for their willingness to explore the question.

When Brent talked about counting successes in terms of lives, I thought about this nonprofit organization, which defines its mission in terms of professional and personal success for its clients.  The staff members of that organization, like so many nonprofit professionals, are ultimately counting their successes in lives, though perhaps not as obviously as health care providers do.  Surgeons receive high pay and prestige for keeping cancer patients alive and well – for the most part, they fully deserve it.  But let’s also count the successes of the organization that helps a substantial number of people win jobs that offer a living wage and health insurance, along with other benefits such as G.E.D.s, citizenship, proficiency in English, home ownership, paid vacations, and college educations for the workers’ children. Nonprofit professionals who can deliver that are also my heroes, right up there with Brent James.  While we’re holding them to high standards of proof of success, I hope that we can find a way to offer them the high pay and prestige that we already grant to the medical profession.

How grantmakers and nonprofits use information about outcomes

State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector, a report by the Innovation Network

I’m sitting here, reflecting on the Innovation Network’s “State of Evaluation 2012” report.

I encourage you to download it and read it for yourself; start with pages 14 and 15. These two pages display infographics that summarize what funders (also known as “grantors,” or if you’re Bob Penna, as “investors”) and nonprofits (also known as “grantees”) are reporting about why they do evaluation and what they are evaluating.

Regardless of whether you call it evaluation, impact assessment, outcomes management, performance measurement, or research – it’s really, really difficult to ascertain whether a mission-based organization is delivering the specific, positive, and sustainable change that it promises to its stakeholders. Many organizations do an excellent job of tracking outputs, but falter when it comes to managing outcomes. (Counting the meals a food pantry serves is an output; a lasting drop in clients’ food insecurity is an outcome.) That’s in part because proving a causal relationship between what the nonprofit does and the specific goals that it promises to achieve is very costly in time, effort, expertise, and money.

But assuming that a mission-based organization is doing a rigorous evaluation, we still need to ask:  what is done with the findings, once the analysis is complete?

What the aforementioned infographics from the “State of Evaluation 2012” tell me is that both grantors and grantees typically say that the most important thing they can do with their outcome findings is to report them to their respective boards of directors.  Considering the depth of the moral and legal responsibility that is vested in board members, this is a pretty decent priority.  But it’s unclear to me what those boards actually do with the information.  Do they use it to guide the policies and operations of their respective organizations?  If so, does anything change for the better?

If you have an answer, based on firsthand experience, to the question of how boards use this information, then please feel free to post a comment here.

What I learned about outcomes management from Robert Penna

Robert Penna

Yesterday, along with a number of colleagues and friends from Community TechKnowledge, I had the privilege of attending a training by Robert Penna, the author of The Nonprofit Outcomes Toolbox.

As you probably know, I’ve been on a tear about outcomes measurement for a few months now; the current level of obsession began when I attended NTEN’s Nonprofit Data Summit in Boston in September.  I thought that the presenters at the NTEN summit did a great job addressing some difficult issues – such as how to overcome internal resistance to collecting organizational data, and how to reframe Excel spreadsheets moldering away in file servers as archival data.  However, I worked myself into a tizzy, worrying about the lack, in that day’s presentations, of any reference to the history and literature of quantitative analysis and social research.  I could not see how nonprofit professionals would be able to find the time and resources to get up to speed on those topics.

Thanks to Bob Penna, I feel a lot better now.  In yesterday’s training, he showed me and the CTK team just how far you can go by stripping away what is superfluous and focusing on what it really takes to use the best outcomes tools for the job.  Never mind about graduate-level statistics! Managing outcomes may be very, very difficult because it requires major changes in organizational culture – let’s not kid ourselves about that.  However, it’s not going to take years out of each nonprofit professional’s life to develop the skill set.

Here are some other insights and highlights of the day:

  • Mia Erichson, CTK’s brilliant new marketing manager, pointed out that at least one of the outcomes tools that Bob showed us could be easily mapped to a “marketing funnel” model.  This opens possibilities for aligning a nonprofit’s programmatic strategy with its marcomm strategy.
  • The way to go is prospective outcomes tracking, with real-time updates allowing for course correction.  Purely retrospective outcomes assessment is not going to cut it.
  • There are several very strong outcomes tools, but they should be treated the way we treat a software suite that comprises some applications that are gems and some that are junk.  We need to use the best of breed to meet each need.
  • If we want to live in Bob Penna’s universe, we’re going to have to change our vocabulary.  It’s not “outcomes measurement” – it’s “outcomes management.”  The terms “funder” and “grantmaker” are out – “investor” is in.

Even with these lessons learned, it’s not a Utopia out there waiting for nonprofits that become adept at outcomes management.  Not only is it difficult to shift to an organizational culture that fosters it, but we have to face continuing questions about how exactly the funders (oops! I should have said “investors”) use the data that they demand from nonprofit organizations.  (“Data” is of course a broad term, with connotations well beyond outcomes management.  But it’s somewhat fashionable these days for them to take an interest in data about programmatic outcomes.)

We should be asking ourselves, first of all, whether the sole or primary motivation for outcomes management in nonprofits should be the demands of investors.  Secondly, we should be revisiting the Gilbert Center’s report, Does Evidence Matter to Grantmakers? Data, Logic, and the Lack thereof in the Largest U.S. Foundations.  We need to know this.  Thirdly, we should be going in search of other motivations for introducing outcomes management.  I realize that most nonprofits go forward with it when they reach a point of pain (translation: they won’t get money if they don’t report outcomes).

During a break in Bob’s training, some of my CTK colleagues were discussing the likelihood that many nonprofit executives simply hate the concept of outcomes management.  Who wants to spend resources on it, if it subtracts from resources available for programmatic activities?  Who wants to risk finding out (or to risk having external stakeholders find out) that an organization’s programs are approximately as effective as doing nothing at all?  Very few – thus the need to find new motivations, such as the power to review progress and make corrections as we go.  I jokingly told my CTK colleagues, “the truth will make you free, but first it will make you miserable.”  Perhaps that’s more than a joke.
