Indicator Collection Project Methods

The primary goal of the indicator database, which populates the Indicator Explorer tool, was to document the indicators that community indicator projects are using to measure the well-being of their communities. While we wanted to achieve a broad representation of the community indicators field, there is no exhaustive sampling frame. To address this challenge, we employed two approaches to collecting projects for the database.

First, we began by speaking with community indicator professionals to compile a list of the largest and best-known projects operating in the field. This list included large-scale projects, such as Sustainable Tools for Assessing and Rating Communities (STAR) and the Annie E. Casey Foundation’s Kids Count projects, as well as early players in the community indicator movement, such as Sustainable Seattle and the Jacksonville Council on Citizen Involvement’s Quality of Life indicators (which released its 30th quality of life report in 2015). Thirty-three of the 108 projects in this study were obtained through this approach.

Once these well-known projects were included in the database, we set out to ensure broad coverage of the field. For this we chose the Community Indicator Consortium (CIC) Project List as a sampling frame, with the goal of ensuring that projects with a broad range of resources and time in the field were included. The member list was downloaded on July 1, 2015, and data collection occurred from July through December 2015. From the list of roughly 300 member organizations, every fourth project was included, yielding a sample of 71 projects. Some of the well-known projects described above also appeared on the CIC list but were excluded from the CIC sampling process.
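The sampling step described above amounts to systematic sampling from an ordered list. A minimal sketch of that step, assuming a simple list of member names (the function name and placeholder organizations are illustrative, not part of the original study):

```python
# Illustrative sketch of systematic sampling: take every fourth entry
# from an ordered member list. The helper and names are hypothetical.
def every_fourth(members, step=4, start=0):
    """Return every `step`-th entry of `members`, beginning at index `start`."""
    return members[start::step]

# With roughly 300 member organizations, every fourth entry yields about
# 75 candidates; removing those already captured as well-known projects
# would bring the final CIC sample down to the reported 71.
member_list = [f"Organization {i}" for i in range(300)]
candidates = every_fourth(member_list)
print(len(candidates))  # 75
```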

To be included in the database, each chosen project needed to meet the following criteria:

  1. The project must have had a valid website. If the link on the CIC member list was broken, we performed a Google search to locate the project. Although the website did not have to be currently maintained, it needed to provide an accessible record of the project.

  2. The project must have been in English. In addition, it needed to be from a high-income country reasonably comparable to the United States.

  3. The project had to measure at the level of a locality, such as a state, county, metropolitan statistical area, city, or neighborhood. This eliminated projects measuring at the national level.

  4. Projects needed to have a plan for measuring their locality of focus. In other words, they needed to have identified indicators they were tracking for their community. This eliminated data warehouses and projects that claimed to be in the process of deciding what to measure. Data warehouses are an important part of the community indicator world, but they represent a data repository rather than an intentional plan to assess the health and well-being of a locality. Emerging projects were also excluded because they did not have a complete plan for measuring their communities. However, in future iterations of this study, they would be eligible for inclusion if they have a clear plan for measurement.

  5. Projects must have been based on indicators rather than rankings. The database catalogs the indicators projects used to assess the health and well-being of their communities, so ranking projects were included only if they provided a list of the indicators used to create the rankings. The only exception was the Robert Wood Johnson Foundation’s County Health Rankings, which was included because indicator projects use these rankings as indicators, tracking their performance from year to year.

If a project did not meet all five criteria, we moved to the next project on the list until we identified one that did. The version of each project included in the database was the most current iteration available on its website at the time of entry.
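The replacement rule just described can be sketched as a screening loop: for each sampled position, advance down the list until a project passes every check. The sketch below is illustrative only; the criterion function and toy data are placeholders, not the study’s actual tooling.

```python
# Hypothetical sketch of the replacement rule: when a sampled project fails
# any criterion, try the next project on the list until one passes all checks.
def screen(projects, sampled_indices, criteria):
    """For each sampled index, select the first subsequent project that
    satisfies every criterion and has not already been selected."""
    selected = []
    taken = set()
    for start in sampled_indices:
        i = start
        while i < len(projects):
            if i not in taken and all(check(projects[i]) for check in criteria):
                taken.add(i)
                selected.append(projects[i])
                break
            i += 1  # project failed a criterion; move to the next one
    return selected

# Toy example with a single criterion (project has a website): the project
# at index 4 fails, so its slot is filled by the next eligible entry.
projects = [{"name": n, "website": n != "E"} for n in "ABCDEFGH"]
has_site = lambda p: p["website"]
picked = screen(projects, [0, 4], [has_site])
print([p["name"] for p in picked])  # ['A', 'F']
```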

This sample yielded projects from around the world: primarily from the United States, with a handful from Canada, the United Kingdom, and Australia. It also included communities of various sizes, ranging from New York City, Los Angeles, and Houston to Kewaunee County, Wisconsin (population approximately 21,000) and Vernonia, Oregon (population approximately 2,200).

The included projects covered a variety of topics including community health, environmental sustainability, children’s well-being, and general community well-being, among others. All told, the database contains over 3,300 distinct indicators from 108 projects. Of the indicators collected, 72 percent appear in only one project. Just 3 percent of indicators appear in 10 or more projects.

For questions about the methods used, contact us.

For more information about the contents of the database, see the Indicator Explorer.