
Methodology

Scientific evidence, expert opinion, and statistical methodologies were employed to select, weight, and combine the elements used in the AFI data report.

Why Choose MSAs Over Cities?

Defining a “city” by its city limits overlooks the interaction between the city core and the surrounding suburban areas. Residents outside the city limits have access to fitness-related resources in their suburban area as well as in the city core; likewise, residents within the city limits may use resources in the surrounding areas. Thus, the metropolitan area, including both the city core and the surrounding suburbs, acts as a unit to support the wellness efforts of its residents. Consequently, MSA data were used wherever possible in constructing the AFI. It is understood that various parts of the central city and the surrounding suburban area may have very different demographic and health behavior characteristics, as well as different access to community-level resources that support physical activity. However, the national data needed to measure these characteristics and resources do not currently allow comparisons at the smaller geographical levels within the MSAs. Communities within an MSA could, however, collect local data using the measurements and strategy outlined in this report to identify opportunities and to monitor improvements resulting from their initiatives.

How Were the Indicators Selected for the Data Index?

Elements must have met the following criteria to be included in the data index:

  • Be related to the level of health status and/or physical activity for a community;
  • Have recently been measured and reported by a well-respected agency or organization at the metropolitan area level;
  • Be available to the public;
  • Be measured routinely and provided in a timely fashion; and
  • Be modifiable through community effort (for example, “smoking rate” is included, but “climate” is not).

What Data Sources Were Used to Create the Data Index?

Publicly available data sources from federal reports and past studies provided the information used in this version of the data index. The largest single data source for the personal health indicators was the Selected Metropolitan/Micropolitan Area Risk Trends of the Behavioral Risk Factor Surveillance System (SMART BRFSS). Through an annual survey conducted by its Center for City Park Excellence, the Trust for Public Land provided many of the community/environmental indicators, and the U.S. Census American Community Survey was the source for most of the MSA descriptions. The U.S. Department of Agriculture; State Report Cards (from the CDC’s School Health Policies and Programs Study); the Health Resources and Services Administration’s (HRSA) Area Resource File; and the Federal Bureau of Investigation’s (FBI) Uniform Crime Reporting Program also provided data used in the MSA description and index. In all cases, the most recently available data (typically 2007) were used. The data index elements and their data sources are shown in Appendix A.

How Was the Data Index Built?

Potential elements for the AFI data index were scored for relevance by a panel of 26 health and physical activity experts (listed in Appendix B). Two Delphi Method–type rounds of scoring were used to reach consensus on whether each item should be in the data index and, if so, the weight it should carry.

The Delphi Method began with the development of a draft list of elements or measures to include in the index. A questionnaire was mailed to the expert panel members for their input. Each participant was asked to independently score the elements on a scale from 0 to 3 (0 = not appropriate for the index; 1 = should be in the index, but of minor importance; 2 = should be in the index and is of moderate importance; 3 = should be in the index and is of high importance) and return the scoring sheet for analysis and preparation for the second round. Panel members also were asked to add any measures they thought should be in the index. The responses from the first round were summarized into a feedback version of the same list. Consensus was reached for some elements during the first round, so the panelists were not asked to rate those again during the second round.

The list of measures used for the second round showed the panelists’ scores from the first round. The panelists were asked to score the elements on the same scale again, after seeing how their colleagues had scored each element in the first round, and to return the list for analysis. After the second round, consensus was obtained for all of the elements. A final summary report was provided to the expert panel members for their feedback.
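
The report does not state the exact consensus rule used between rounds, but the round-one summarization can be illustrated with a short Python sketch. The element names, scores, and the 75% agreement threshold below are assumptions for demonstration only.

import statistics

# Hypothetical round-one scores (0-3 scale) from a subset of panelists;
# the real panel had 26 members (Appendix B).
round_one = {
    "smoking_rate": [3, 3, 2, 3, 3, 2, 3, 3],
    "farmers_markets": [1, 2, 0, 3, 1, 2, 2, 1],
}

for element, scores in round_one.items():
    mode = statistics.mode(scores)                 # most common rating
    agreement = scores.count(mode) / len(scores)   # share agreeing with it
    consensus = agreement >= 0.75                  # assumed threshold
    status = "consensus reached" if consensus else "rescore in round two"
    print(f"{element}: median={statistics.median(scores)}, "
          f"mode={mode}, agreement={agreement:.0%} -> {status}")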

A weight of 0.5 was assigned to elements considered of minor importance; 1.0 to those considered of moderate importance; and 1.5 to those considered of high importance. From this process, 30 currently available indicators were identified and weighted for the index, and 17 description variables were selected. The description elements were not included in the data index calculation but are shown for cities to use for comparison. Each indicator was first ranked (worst value = 1) and then multiplied by the weight assigned by consensus of the expert panel. The weighted ranks were then summed by indicator group to create scores for the personal health indicators and the community/environment indicators. Finally, the MSA scores were standardized to a scale with an upper limit of 100 by dividing each MSA score by the maximum possible value and multiplying by 100. (In the pilot study, released in May 2008, this last step was not performed.)

The following formula summarizes the scoring process:
\[
\text{MSA Score}_k = \frac{\sum_{i=1}^{n} r_i \, w_i}{\text{MSA Score}_{\max}} \times 100
\]

where
r = MSA rank on indicator (worst value = 1)
w = weight assigned to indicator
k = indicator group
n = 11 for the personal health indicators, 15 for the community/environmental indicators, and 1 for the health care providers indicator
MSA Score_max = hypothetical score if an MSA ranked best on each of the elements
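
As a concrete illustration, the rank–weight–sum–standardize steps can be sketched in a few lines of Python. The MSA names, indicator names, values, and weights below are hypothetical placeholders; only the logic follows the process described above.

import pandas as pd

# Hypothetical indicator values for three MSAs (higher = better for both
# indicators shown). Real indicators, values, and weights come from
# Appendix A and the expert panel consensus, not from this sketch.
values = pd.DataFrame(
    {"pct_active": [55.0, 61.0, 48.0],
     "parks_per_capita": [1.2, 0.9, 1.5]},
    index=["MSA_A", "MSA_B", "MSA_C"],
)
weights = pd.Series({"pct_active": 1.5, "parks_per_capita": 1.0})  # 0.5/1.0/1.5

# Rank each indicator so the worst value = 1. For an indicator where a lower
# value is better (e.g., smoking rate), ascending=False would be used so the
# worst value still receives rank 1.
ranks = values.rank(ascending=True)

# Multiply each rank by its panel-assigned weight and sum within the group.
score = (ranks * weights).sum(axis=1)

# Standardize to an upper limit of 100: divide by the hypothetical score of
# an MSA that ranked best (rank = number of MSAs) on every element.
score_max = (weights * len(values)).sum()
standardized = score / score_max * 100
print(standardized.round(1))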

The individual weighted ranks from both indicator groups also were averaged to create the total score. Both the indicator group scores and the total scores for the 50 cities were then ranked (best = 1), as shown on the Metropolitan Area Snapshots. A large amount of data was missing for five MSAs that did not provide environmental/community data to the Trust for Public Land; consequently, only the personal health indicators were scored and ranked for those five MSAs.
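
A minimal sketch of the total-score and ranking step follows, assuming the total score is the average of the two indicator-group scores (the report’s wording is terse on this point). The group scores below are hypothetical.

import pandas as pd

# Hypothetical standardized group scores for three MSAs.
group_scores = pd.DataFrame(
    {"personal_health": [66.7, 73.3, 60.0],
     "community_environment": [70.0, 58.0, 64.0]},
    index=["MSA_A", "MSA_B", "MSA_C"],
)

# Assumed combination rule: average the two group scores into a total.
group_scores["total"] = group_scores.mean(axis=1)

# Rank each column so the best (highest) score = 1.
ranks = group_scores.rank(ascending=False).astype(int)
print(ranks)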

How Should the Scores and Ranks Be Interpreted?

It is important to consider both the score and the rank for each city. While the ranking orders the MSAs from the highest score to the lowest, the scores for many cities are very similar, indicating relatively little difference between them. For example, Boston scored 71.4 while San Francisco scored 70.6; although Boston ranked higher, the two MSAs are very similar across all of the indicators, and there is little difference in the community wellness of the two MSAs. Also, while one city carried the highest rank (Washington, DC) and another the lowest (Oklahoma City, OK), this does not necessarily mean that the highest-ranked city has excellent values across all indicators or that the lowest-ranked city has failed them all. The ranking merely shows that, relative to each other, some cities scored better on the indicators than others.

How Were the Strengths/Advantages and Opportunities/Challenges Determined?

Areas of strengths/advantages and opportunities/challenges of the metropolitan areas were listed to help communities identify areas where they might focus their efforts, using approaches adopted by cities that have strengths in the same area. This process compared each MSA’s descriptive and data index elements to the average of the other MSAs. Elements where the MSA values were “better” than the average of the MSAs were considered strengths/advantages; elements that were “worse” were listed as opportunities/challenges.
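
A minimal sketch of this comparison is shown below, with hypothetical MSA values; the higher_is_better directions would come from the indicator definitions in Appendix A.

import pandas as pd

# Hypothetical values for two elements across four MSAs.
data = pd.DataFrame(
    {"pct_active": [55.0, 61.0, 48.0, 52.0],
     "smoking_rate": [18.0, 15.0, 22.0, 20.0]},
    index=["MSA_A", "MSA_B", "MSA_C", "MSA_D"],
)
higher_is_better = {"pct_active": True, "smoking_rate": False}

msa = "MSA_A"
for element in data.columns:
    others_avg = data.drop(index=msa)[element].mean()  # average of the other MSAs
    value = data.loc[msa, element]
    better = value > others_avg if higher_is_better[element] else value < others_avg
    label = "strength/advantage" if better else "opportunity/challenge"
    print(f"{element}: {value} vs. {others_avg:.1f} -> {label}")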

What Limitations of This Project Need to be Considered?

The items used for the personal health indicators were based on self-reported responses to the Behavioral Risk Factor Surveillance System and are subject to the well-known limitations of self-reported data. Because this limitation applies to all metropolitan areas included in this report, the biases should be similar across all areas, so the relative differences should be valid. Per advice provided on the FBI Uniform Crime Reporting Program Web site, violent crime rates were not compared to U.S. values or to averages of all MSAs; as indicated on that site, data on violent crimes may not be comparable across metropolitan areas because of differences in law enforcement policies and practices from area to area. The Trust for Public Land community/environmental indicators include only city-level data, not the complete MSA. Consequently, most of the community/environmental indicators shown on the MSA tables are for the main city in the MSA and do not include resources in the rest of the MSA. Also, data were missing for most of the community/environmental indicators for five metropolitan areas. Consequently, a score for the community/environmental component and a total score were not calculated for those five MSAs, and they were included only in the ranking of the personal health indicator scores.