I invite you to participate in this survey, bearing in mind that it is for recreational purposes and has no scientific value.
There are many reasons that this survey is of dubious value, for example:
- As the survey’s principal (and only) investigator, I’m hopelessly biased in favor of certain responses and against others; no effort has been made to create a neutral instrument.
- No pilot testing has been done to ensure that the choices offered are both exhaustive and mutually exclusive.
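To make that second point concrete, here is a small Python sketch with a hypothetical (and deliberately flawed) set of age-range choices; the brackets and ages are invented for illustration only:

```python
# Hypothetical survey age brackets -- deliberately flawed.
# They are NOT mutually exclusive (age 25 falls in two brackets)
# and NOT exhaustive (no bracket covers anyone under 18).
choices = [(18, 25), (25, 40), (41, 65), (66, 120)]

def covering_choices(age, choices):
    """Return every bracket that contains the given age."""
    return [c for c in choices if c[0] <= age <= c[1]]

print(covering_choices(25, choices))  # two brackets claim age 25
print(covering_choices(16, choices))  # no bracket covers age 16
```

A respondent aged 25 could honestly tick either of two boxes, and a 16-year-old has no box at all; pilot testing is exactly the step that catches problems like these before the survey goes out.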
The list could go on, but I’ll leave it at that. Although most of my training is in qualitative social research, I have taken undergraduate- and graduate-level courses on quantitative research, and the points above about what’s wrong with my survey are what I could pull from memory without consulting a standard statistics text.
In other words, when it comes to quantitative analysis, I know just enough to be dangerous.
Meanwhile, I worry about nonprofit organizations that are under pressure to collect, analyze, and report data on the outcomes of their programs. There are a lot of fantastic executive directors, program managers, and database administrators out there – but it’s very rare for a nonprofit professional in any of those three roles to also have solid skills in quantitative analysis and social research methods. Nevertheless, I know of plenty of nonprofit organizations where programmatic outcomes measurement is done by an executive director, program manager, or database administrator whose skill set is very different from what the task demands. In many cases, even if they produce a report, the staff members may not realize that what they have done is present a lot of data without actually showing any causal relationship between the organization’s activities and the social good it is in business to deliver.
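To illustrate the gap between reporting data and demonstrating causality, here is a small Python sketch with entirely invented numbers: two measures track each other perfectly, yet the correlation is an artifact of a third factor that neither report would show.

```python
# Hypothetical illustration: reported data can correlate with outcomes
# without any causal link. Here both series are constructed from a
# third factor (say, neighborhood income), so they move in lockstep.
neighborhood_income = [20, 35, 50, 65, 80]                    # invented units
program_attendance  = [i * 2 + 1 for i in neighborhood_income]
graduation_rate     = [i * 0.5 + 40 for i in neighborhood_income]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfect correlation here says nothing about whether attendance
# *caused* graduation -- both were driven by income alone.
print(round(pearson(program_attendance, graduation_rate), 3))  # → 1.0
```

A report full of tables showing attendance and graduation rising together would look impressive, but without a design that rules out confounders it demonstrates nothing about the program’s effect.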
Let’s not be too hasty in deprecating the efforts of these nonprofit professionals. They are under a lot of pressure, especially from grantmaking foundations, to report on programmatic outcomes. In many cases, they do the best they can to respond, even if they have neither the internal capacity to meet the task nor the money to hire a professional evaluator.
By the way, I was delighted to attend a gathering this fall at which I heard a highly regarded philanthropic professional ask a room full of foundation officers, “Are you requiring $50,000 worth of outcomes measurement for a $10,000 grant?” It’s not the only question we need to ask, but it’s an extremely cogent one!
I’d love to see nonprofit professionals, philanthropists, and experts in quantitative analysis work together to address this challenge.
We should also be learning lessons from the online tools that have already been developed to match skilled individuals with nonprofit professionals who need help and advice from experts. Examples of such tools include the “Research Matchmaker,” and NPO Connect.
We can do better. It’s going to take time, effort, money, creativity, and collaboration – but we can do better.