Reading through the mainstream press and non-profit sector literature, one finds that lists have become the fashion. Top 10 best or worst [you fill in the blank], most highly paid, most effective, most underrated… most… least… Perhaps this is in part due to the culture of immediacy resulting from a Twitter/social media/internet giving culture. Perhaps we are just increasingly impatient, or dare I say lazy, as a sector. Either way, in most cases lists do not serve the interests of donors, organizations, or the beneficiaries of non-profit services. Please note that this blog is not intended as an attack on individuals or organizations generating lists, so specific examples are not cited. Rather, it questions the legitimacy and ethics of list-making. The overriding plea is to recognize and mitigate the lack of context that lists involve. The overriding concern is unintended consequences. What will list readers conclude from an organization’s position on, or omission from, a list? What will be the effect of such conclusions, particularly if the “lister” has high credibility?

  1.  No context. By definition, lists do not provide context. Take the example of CEO compensation (whether a ranking or a listing of specific compensation numbers). We do not know what the person’s skill set is (or the organization’s need for that unique skill set), whether there were unusual challenges or, conversely, negative performance issues, the size of the organization, the individual’s workload, whether the Board engaged in a proper compensation evaluation (e.g. benchmarking to industry and proper performance evaluation), trends in the particular sector of the organization, whether part of the compensation supplements transport, a relocation, or some other one-off matter, or whether this compensation reflects one year or a longer-term commitment. And these examples hardly begin to convey the complexity of context, particularly the mix of people, regulations, organizations, programs, and the external environment. Indeed, one reason I chose not to critique specific rating agencies or other “listers” is that such critiques require context beyond the scope of this blog. This applies in both the for-profit and non-profit worlds.
  2. Undue conclusions about donor trust or a particular item. Once we know that the CEO is overpaid, or at least tops the highest-paid list, it is hard to have a balanced view of the organization or even the board’s performance. The list certainly doesn’t provide one. Once we know that the organization was on a list of organizations with a particular problem (e.g. late Form 990 filers in the US), we wonder if there are more general management issues. Conversely, once we see that an organization makes the best financial management list, we may (correctly or erroneously) assume that governance, accountability, and ethics are also best in class. We, and the listers, also often incorrectly assume that position on a list is sufficient to draw conclusions regarding donor trust. Charity Navigator notes with respect to a list compiling “large, complex organizations” with budgets exceeding $100 million “work[ing] throughout [the US] the world”: “They became household names in part because of their exceptional financial management, no easy feat considering the scope and size of their operations. Charitable givers should feel confident that these national institutions put their donations to good use [emphasis added].”[i] The point is, we have no legitimate basis for extrapolating from lists, but we all do.
  3. Apples and oranges. Lists often aggregate organizations of different sectors, sizes, and forms. Lists ignore a range of matters such as the differences between operating foundations and other non-profit forms, regulation in different sectors that can trigger costs, and the performance of each organization relative to competitors in its specific sector. Drawing attention to this point through disclaimers does not make the reader any better able to take these differences into account when interpreting the list. The reader simply does not have the necessary information.
  4. Omissions. The omission of an organization from a top 10 or other list can unwittingly raise questions about an otherwise high-performing, high-quality charity. Is an organization omitted because it doesn’t qualify or because the lister didn’t consider it? Omissions can also skew competition. I am not aware of listers that carefully cover all of the major participants in a sector. There are more than 1.1 million organizations with charitable status in the US under Section 501(c)(3).
  5. Definition of criteria. Most lists are, as one would expect, succinct. This means that they fail to define adequately the basis for the list. What does “financial performance” mean? Cost of fund-raising? Some measure of program output relative to income? Some unidentified reliance on ratios of administrative expenses to program costs (which is itself a questionable basis for comparison)?[ii] Investment or endowment management savvy? The structure of a list almost by definition means that readers invest time reading lists as an alternative to investing time in research, even on the same web site.
  6. Missing the mission: excessive focus on the organization and not the beneficiaries. List makers focus on everything from CEO compensation to financial performance to the filing of the Form 990. When did we lose focus on the purpose of the non-profit sector: the execution of a mission that serves beneficiaries in need? I have not found lists that highlight the beneficiaries, but I welcome examples or comments. Perhaps this is not listable? If so, what does that say about the value of lists?
  7. Suggestion: improve the transparency of lists. I would strongly urge list-makers to ensure the following:
    a.    Define the target population clearly for the reader, including how organizations were selected and which organizations within a given category were left out.
    b.    Define the basis of the list in detail. Clarify the limitations of the category presented. For example, financial performance does not necessarily convey high standards of accountability, the suitability of the use of donor funds, or such hidden issues as duplication of services at a higher cost than competitors despite apparently excellent internal financial management.
    c.    State clearly that lists are not an appropriate means of determining the quality of the donor experience or even the overall efficacy of the use of donor funds, the quality of the services delivered to beneficiaries, or the ethical foundation of the organization.
    d.   Step back and consider the potential damage done by omissions: not including organizations in your database, and therefore excluding them from eligibility for your lists, both positive and negative.
    e.    Pick up the phone. At the very least, call the organizations listed. Double-check accuracy and verify that there are no unusual circumstances before your listing potentially triggers negative (or even positive) responses from donors, regulators, or the public.
    f.     Check how these lists are being used and whether that use, as the practice evolves, presents any unintended ethical or other consequences for the organizations, the donors, or the beneficiaries.
  8. Useful lists. Finally, some of these lists are useful. The most useful do not target organizations but rather behaviours or the collection and organization of data. The most useful advisory list I can think of is Independent Sector’s Checklist for Accountability (available at http://www.independentsector.org/checklist_for_accountability).[iii] Even this list requires considerable assessment of context to determine crucial missing and/or irrelevant items for a particular organization. Like any general advisory list, the key is application to a particular organizational context. This blog will try to generate this sort of list on occasion (see, e.g., Risks, posted April 2011). Other potentially useful lists compile and share data, such as GuideStar’s state organization lists, which collect contact details for the organizations with 501(c)(3) filings in a particular state. This appears to be more data management and objective collection of data than judgmental triage, although I could be missing something.[iv]

Copyright 2011 Susan Liautaud. All rights reserved

[i] Charity Navigator, http://www.charitynavigator.org/index.cfm?bay=topten.detail&listid=18. Downloaded April 23, 2011.
[ii] For example, the Internal Revenue Service declined to require fund-raising ratios in the new Form 990 because of their potential to mislead readers.
[iii] Downloaded April 23, 2011.
[iv] E.g., http://www2.guidestar.org/rxg/products/guidestar-data-sets.aspx. Downloaded April 23, 2011.