

Getting to Know the Catalogue Review Team: Part 3

Today marks the third and final installment of our “Getting to Know the Reviewers” blog series. We’re excited to introduce our readers to Hedrick Belin, President of the Potomac Conservancy (a Catalogue charity) and four-time Catalogue reviewer. Approximately one-quarter of our review team is made up of members from peer nonprofits. Together with reviewers from foundations, corporate giving programs, other partner organizations and philanthropists, this group gives the Catalogue team confidence in determining which small charities are truly “the very best” in Greater Washington.

What do you enjoy most about reviewing nonprofits for the Catalogue?

Hedrick Belin, Potomac Conservancy: It lets me see what other innovative conservation groups are undertaking to clean our air, safeguard our drinking water supply and protect wildlife. There are many amazing groups making a real difference in our overall quality of life with very few financial resources. Every spring, I come away re-energized after spending a weekend reading through a dozen applications.

What is one piece of advice you would give to future Catalogue applicants?

Hedrick: Specifics matter. Assume the reviewer does not know anything about your organization. What can you put in your application that shows the concrete impacts and on-the-ground differences that your organization is making? Quantify by including lots of metrics to demonstrate you are not just a nice organization doing nice things, but are really filling a community need and making a difference when it comes to changing lives or improving the community. Make it easy for me to recognize immediately that you are one of the best.

What is one piece of advice you would give to new/future Catalogue reviewers?

Hedrick: Come with an open mind, but a critical eye. The Catalogue is supposed to represent the best small nonprofits in Washington, DC, not every small nonprofit. You should be open to innovative approaches to solving some of the chronic problems in the region, but also read the applications carefully to see which groups have the best return on investment.

How has being a reviewer had an impact on your views of philanthropy in Greater Washington?

Hedrick: The region is blessed with an incredibly strong nonprofit sector that is getting stronger every year. I’m constantly impressed with the passionate individuals fighting every day on the front lines to build a more just and sustainable world, and I have seen the power of this sector to change lives and save lives. The larger nonprofits that have been around for decades tend to get the press, but the nonprofits that the Catalogue selects deserve to be recognized as well for the tremendous difference they are making in our local communities.

What do you feel your unique background brings to the Catalogue review process?

Hedrick: As an Executive Director, I quickly assess organizational alignment. Do the organization’s mission, vision and strategies tie together in a concise, compelling way? Is there a clear theory of change that the organization is employing to drive every decision?

As a former consultant who worked with over 100 social-purpose organizations across the nonprofit spectrum, I’ve developed an ability to evaluate an entity’s efficacy and impact by looking at a few key answers in the application. For example, I look at the size and composition of the board. I also look at revenue streams, both in terms of diversity of sources and whether the organization has the revenue engine to drive the short- and long-term goals listed in the application.

To learn more about Potomac Conservancy and find out how you can donate or volunteer, head over to their Catalogue web page, or connect with them on their website, Facebook and Twitter pages.

Rethinking the “Impact Question”: Evaluating the (Nonprofit) Evaluators, Part IV

The following post, written by Catalogue for Philanthropy President and Editor Barbara Harman, was published in the Huffington Post on Monday, February 24th. It is the final post in a series on the “evaluation problem.”

In my previous post, I argued that metrics measure something, but not everything. Let’s take a look at what a basic, metrics-based “logic model” looks like (though note that Charity Navigator’s new model is much more extensive and challenging than the more streamlined model I am suggesting here):

Inputs (what you bring to the table as resources: staff, funds, expertise)
Activities (programs and services; what you do)
Outputs (things that can be measured — numbers of people you serve, units of housing you build, meals you provide, numbers of classes you conduct)
Outcomes (results — impact you have in the short, medium and long term)

Good, but not good enough. For the model to be complete, it needs to begin with a description and analysis of the community in which you work and the specific challenges you face. If you want to know, at the end of the process, what impact really means, you first have to know, and state, what the conditions are in which your work takes place and out of which it emerges. Describing these is a complex task — sometimes even a moving target — that doesn’t easily lend itself to metrics.
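
To make this extension concrete, here is a minimal sketch, in Python, of a logic model that begins with community context rather than with inputs. Every class name, field name, and example value below is an illustrative assumption, not part of any actual Catalogue or Charity Navigator tool, and the example organization is hypothetical:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: these names are assumptions, not drawn from
# any real evaluation tool or rating entity's methodology.

@dataclass
class CommunityContext:
    """The starting point argued for above: the conditions the work emerges from."""
    description: str       # who the community is, where, and why it matters
    challenges: List[str]  # the specific challenges faced; these can shift over time

@dataclass
class LogicModel:
    context: CommunityContext  # placed first, before the usual four components
    inputs: List[str]          # resources: staff, funds, expertise
    activities: List[str]      # programs and services; what you do
    outputs: List[str]         # countable results: people served, meals provided
    outcomes: List[str]        # impact in the short, medium and long term

# A hypothetical job-training nonprofit, for illustration
model = LogicModel(
    context=CommunityContext(
        description="A low-income urban neighborhood with very low average family income",
        challenges=["chronic unemployment", "limited access to training"],
    ),
    inputs=["3 staff", "modest annual budget", "volunteer mentors"],
    activities=["weekly job-readiness classes", "one-on-one coaching"],
    outputs=["120 adults trained per year"],
    outcomes=["higher employment rate among graduates"],
)
```

The design point is structural: with context placed first, every downstream component, and especially the outcomes, can be read against the conditions the work is trying to change.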

In addition, how you assess your results will depend on what you value. If, at the end of the line, you are measuring something intangible like the resiliency or grit of vulnerable children who have grown up in poverty, you will have a greater challenge before you than will an organization seeking, say, to measure an increase in the rate of employment for job-seeking adults, where numbers are their friends. (This is not to say that the work is harder, only that the task of assessing the work is.) You have to make sure that you have identified grit and resiliency, and any other critical life skills, as core values, and you have to explain why they are.

As citizens and donors, we should do what we can to make sure that organizations working to build more creative communities, and devising programs to deal with extremely challenging (if not, thus far, intractable) social problems, are not excluded because their outcomes are not as easy to measure as others’. If I am visiting a community center in Washington, DC’s Ward 8, where the average family income is $9,100 a year, I should not be looking at outcomes the same way I would if I were visiting a community where somewhat better-off youngsters need a smaller boost in order to be successful. The hill is steeper in some places than it is in others, and we have to take that into account.

At the Catalogue for Philanthropy: Greater Washington, we have approached these questions in what is, given the direction that evaluation appears to be taking, a rather unusual way. We have gathered the community of professionals in the field — from foundations, corporate giving programs, peer nonprofits, government agencies and the philanthropic advisory community — and asked them to evaluate applicant nonprofits. Our review process has three stages: programmatic review (the conditions you address, the programs you have created, the impact you have); financial review (reasonable projections of income and expenses; diversified funding; transparency); and site visits (reviewers are asked to share their experience of previous visits, not to visit anew).

Some 120 individuals participate annually, sharing their expertise and direct knowledge. Communities have this knowledge, but it is rarely aggregated or shared with the public at large. We share it in our annual print catalogues and, of course, online, and we are able to do what the rating entities cannot do: actually evaluate need, program quality, and impact — without overburdening community-based nonprofits that, by and large, lack the resources to perform extensive evaluations themselves.

Creating communities of knowledge — actually pooling the know-how of people who have expertise in the field — seems like an obvious thing to do in the service of philanthropy, especially in an era in which knowledge-sharing has become so much easier. It means, too, that we can ask questions that don’t lend themselves to easy answers, because we can use the brainpower of the community to identify the nonprofits that are doing the best work. There is no reason why this model could not be shared, and no reason why there could not be a Catalogue for Philanthropy in every region of the country — something we hope to make happen in the not-too-distant future. (A note: the Catalogue focuses on community-based nonprofits with budgets below $3 million. These are not, by and large, the ones reviewed by Charity Navigator, though this is the category into which the great majority of all nonprofits falls.)

For the moment, though, nonprofits need to remember that — unless they are primarily reliant on the U.S. Government, in which case they had better pay attention to its model — most individual donors are not themselves professional givers. Many are driven more by their desire to give back, their personal passions, and their wish to make a difference than they are by evidence-based impact assessments.

This does not mean that data and measurement do not matter or that a reasonable approach to evaluating impact should not be part of what foundations are funding and even teaching. But charities also need to find a way to assess their work in a manner that does justice to its complexity, and then translate what they learn into an account that will have meaning and power for individual donors whose contributions make up nearly three quarters of all donations. We should keep in mind that it is not just the good work we do that matters, but also the speaking and writing about it — the sharing of it — that counts. We need to train ourselves and teach others how to be agents of the imagination, ready and willing and equipped to tell compelling stories about the differences for the better that philanthropic work makes.

The task is a challenging, but essential, one. It needs more attention than it has received, and I intend to address it in future posts.