Chapter 2.0 Conceptualization of the Study and Methodological Approach
In this section, we describe the methodology that was used in this case study and highlight key features of site selection, interviewing, coding and analysis of interview data. A case study protocol was developed laying out criteria for the selection of study sites and specifying data collection and data analysis procedures. Study instrumentation and coding criteria were constructed as part of the study protocol. The study protocol is presented in Appendix A. The interview guides used in data collection are included in Appendix B. The codebook used in analyzing interview data is included as Appendix C.
2.1 Conceptual Framework
Development of a study design and an assessment protocol depended on a clear concept of what was to be accomplished with the research and how this was to be done. The first step in study design was to derive the key research questions that had to be answered to achieve the purpose of the study: to identify those at high risk of syphilis transmission and to explore avenues for reaching them with STD prevention and control programs. These research questions formed a link between the study objectives and the study questions that governed data collection. The three research questions specified for this study were:
- What groups are at high risk of syphilis transmission?
- What community institutions are capable of reaching them?
- What are the barriers to bringing effective prevention to this population?
We designed more specific study questions to elicit the information needed to answer these research questions. The study questions were then incorporated into data collection instruments and formed the framework for the codebook and the data analysis.
2.2 The Role of the Assessment Protocol
The assessment protocol was the methodological standard protecting the uniformity of study procedures across all sites and data sources. The protocol included specification of
- Site selection criteria
- Sampling procedures within sites
- Data collection methods
- Data management and quality control measures, and
- A data analysis plan
Interview guides and abstracting forms for compiling demographic and archival data were appended to the protocol. The protocol was pilot-tested and then revised. Data collection teams were instructed to document all exceptions to the protocol in writing. Once the relatively minor modifications suggested by the pilot test were incorporated, there were no significant deviations from the protocol.
2.3 Pilot Testing
A pilot study was conducted in Hinds and Humphreys Counties, Mississippi, in November 1995. The pilot test used the assessment protocol and the accompanying instrumentation. It included meetings with community contacts on the first and last days of the visit to discuss the experience of participating in this study. A member of DSTD/HIVP's Program Operations Branch accompanied the Battelle team on the pilot test site visit.
Based on the pilot test, we adjusted the protocol and instruments to better conform to the administration of public health services at the district level rather than the county level in southern states. We also discontinued a plan to designate local "Participatory Research Teams" to guide the research. The pilot test made clear that this would create a local expectation for long-term research in the community that could not be sustained in this short-term study.
The study procedures and instruments did not change enough after the pilot test to warrant excluding the pilot-site results from consideration. We therefore analyzed data from the Mississippi pilot test along with those from the other three sites in arriving at the study conclusions.
2.4 Site Selection
Resources permitted us to include four sites in this study. In each site, a metropolitan area was paired with a rural counterpart. The urban-rural pairing of sites allowed us to examine the syphilis problem in areas that were likely to share transmission of the disease.
Sites were chosen on the basis of criteria that would maximize the likelihood that useful information about the research questions would emerge from these cases. The most significant threat to credibility in research based on purposive site selection is inadequate a priori specification of the criteria by which sites are to be selected. Site selection criteria are the equivalent of sampling frames in survey research. For this reason, site selection criteria were defined before any sites were selected.
States eligible for the study were specified by CDC based on epidemiological evidence that they had experienced the 1990 syphilis epidemic. Those states were Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Tennessee. Two states, Georgia and North Carolina, were removed from consideration because of the presence of other public health projects that could conflict with the operation of this study.
In the remaining seven states, large cities were identified as potential urban components. We mapped two concentric circles of counties around each metropolitan county and eliminated counties that U.S. census statistics showed to be non-rural. We then selected a rural county for study based on the presence of a central town with STD prevention and control services. A further consideration in selecting rural sites was to reduce travel costs and time for the research teams.
Once potential urban-rural pairs had been identified, final selection of sites for the study was made using three criteria: (1) syphilis epidemiology, (2) social demography, and (3) geography. These criteria were used in the order given to narrow the choices of urban-rural pairs to be included in our study.
Syphilis epidemiology. We wanted to learn about the syphilis problem in counties with high syphilis case rates, and to find examples of programs or efforts that had been successful in decreasing those rates. We identified such counties from either consistently high syphilis case rates since the mid-1980s, or case rates that were high during that period but decreased noticeably after 1991. CDC provided us with the primary and secondary syphilis rates for all counties in the southern states for the period 1981-1993. We reviewed these data for each urban county and all of the rural counties that fell within two concentric circles around it. Based on these epidemiological data, we made a first cut of sites, deleting any urban site that lacked a rural counterpart with evidence of significant syphilis morbidity during the 1990 epidemic.
Social demography. The demographic criteria considered in site selection were the proportion of the population that is African-American and the proportion of households below the poverty level. These two demographic characteristics have been shown to be linked to high syphilis morbidity (Moran et al. 1989, Potterat 1992, Rolfs and Nakashima 1990). We wanted to look at syphilis prevention and control in populations with high values on both measures. Demographic information was taken from the U.S. Bureau of the Census County and City Data Book for 1994. For urban areas, we were able to obtain specific census data at the city level; for rural areas, we used county-level data. Based on the demographic criteria, we narrowed the eligible sites to six urban-rural pairs.
Geography. We were interested in finding possible urban-rural linkages in either transmission or control of syphilis. Therefore, in finalizing site selection, we considered the location of the urban and rural sites relative to transportation routes that people travel for reasons of commerce, employment, or entertainment. We wanted one pair of sites located along the Mississippi River and sites along a major interstate highway, such as the I-95 corridor in the southeastern states. We also wanted the sites dispersed across several southern states rather than clustered closely together.
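To make the staged narrowing concrete, the following is a minimal sketch, in Python, of how the three criteria could be applied in sequence to a list of candidate urban-rural pairs. The data structure, field names, and numeric cutoffs are illustrative assumptions only; the actual selection was made by reviewing the epidemiological, demographic, and geographic evidence described above rather than by fixed thresholds.

```python
# Illustrative sketch of the staged site-selection filter described above.
# All field names and cutoffs are hypothetical; the study applied these
# criteria by expert review, not by fixed numeric thresholds.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CandidatePair:
    urban_county: str
    rural_county: str
    ps_rates: Dict[int, float]      # P&S syphilis cases per 100,000, by year
    pct_african_american: float     # proportion African-American (census)
    pct_below_poverty: float        # proportion of households below poverty (census)
    on_major_corridor: bool         # e.g., Mississippi River or an interstate highway


def passes_epidemiology(pair: CandidatePair) -> bool:
    """Criterion 1: consistently high rates since the mid-1980s, or rates that
    were high then but decreased noticeably after 1991 (hypothetical cutoffs)."""
    mid_80s = [r for y, r in pair.ps_rates.items() if 1985 <= y <= 1991]
    post_91 = [r for y, r in pair.ps_rates.items() if y > 1991]
    consistently_high = min(mid_80s, default=0.0) >= 20.0
    declined = max(mid_80s, default=0.0) >= 50.0 and post_91 and max(post_91) < 50.0
    return consistently_high or bool(declined)


def passes_demography(pair: CandidatePair) -> bool:
    """Criterion 2: high proportion African-American and high poverty (hypothetical cutoffs)."""
    return pair.pct_african_american >= 0.40 and pair.pct_below_poverty >= 0.20


def select_sites(candidates: List[CandidatePair], n_sites: int = 4) -> List[CandidatePair]:
    """Apply the three criteria in the order given, then keep up to n_sites pairs."""
    stage1 = [p for p in candidates if passes_epidemiology(p)]
    stage2 = [p for p in stage1 if passes_demography(p)]
    stage3 = [p for p in stage2 if p.on_major_corridor]   # Criterion 3: geography
    return stage3[:n_sites]
```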
Applying the geographic criterion, we chose the final set of sites for our study. The four sites were:
- Hinds County (City of Jackson) and Humphreys County, Mississippi
- Montgomery County (City of Montgomery) and Lowndes County, Alabama
- Shelby County (City of Memphis), Tennessee, and Tunica County, Mississippi
- Richland County (City of Columbia) and Orangeburg County, South Carolina.
The choice of Hinds and Humphreys Counties, Mississippi, as the pilot site was somewhat arbitrary, based on the presence of two rural study sites in Mississippi. The values of the site selection indicators for all of these sites are shown in Table 2.1.
Table 2.1 Site Selection Indicators
Indicator | Montgomery County, AL | Lowndes County, AL | Richland County, SC | Orangeburg County, SC | Shelby County, TN | Tunica County, MS | Hinds County, MS | Humphreys County, MS
Population | 214,996 | 12,701 | 294,004 | 86,862 | 844,847 | 8,055 | 254,606 | 11,916 |
% African American | 41% | 76% | 42% | 58% | 43% | 75% | 51% | 69% |
Male/Female Ratio | 89.2 / 100 | 86.3 / 100 | 94.6 / 100 | 87.4 / 100 | 91 / 100 | 87.5 / 100 | 88 / 100 | 87.3 / 100
% Below Poverty Level | 18% | 39% | 14% | 25% | 18% | 57% | 21% | 46% |
% African Americans Below Poverty Level | 35% | 50% | 30% | 36% | 35% | 48% | 39% | 62% |
% Population on Farm | 0.4 | 4.6 | 0.3 | 2.7 | 0.1 | 5.5 | 0.4 | 6.6
Population Per Square Mile | 272 (City of Montgomery 1,423) | 18 | 389 (City of Columbia 844) | 79 | 1,119 (City of Memphis 2,384) | 18 | 293 (City of Jackson 1,800) | 29 |
P/S syphilis case rates (per 100,000): | | | | | | | |
1985 | 29 | 23.9 | 68.5 | 41.8 | 50.3 | 0 | 90.6 | 37.7 |
1986 | 33.3 | 0 | 66.1 | 15.5 | 58 | 45.6 | 85 | 0 |
1987 | 27.3 | 7.8 | 48.8 | 27.2 | 73.2 | 11.6 | 52.4 | 39.3 |
1988 | 28.7 | 39.5 | 28 | 60.6 | 94.6 | 166.1 | 27.2 | 31.9 |
1989 | 53.1 | 79.1 | 19.8 | 55.6 | 147.9 | 108.9 | 36.4 | 40.7 |
1990 | 128.2 | 150.1 | 24.1 | 42.4 | 158.8 | 147 | 71.1 | 24.7 |
1991 | 182.6 | 165.4 | 55.2 | 127.3 | 98.9 | 444.7 | 102.3 | 33.4 |
1992 | 58.8 | 23.5 | 61.2 | 147.4 | 71.4 | 99.1 | 84.2 | 142.4 |
1993 | 53.7 | 23.5 | 49.3 | 130.8 | 64.9 | 24.8 | 90.4 | 134 |
1994 | 42.9 | 23.5 | 52.1 | 31.8 | 63 | 284.9 | 58.5 | 302.1 |
1995 | 35.19 | 16.0 | 50.7 | 16.5 | 56.45 | 78.9 | 48.9 | 207.1 |
2.5 Accessing Respondents Within Sites
Steps for identifying and contacting potential respondents in communities were described in the assessment protocol and followed as closely as possible in all communities.
We used a network approach beginning with CDC Public Health Advisors in each state where the study would be conducted. These individuals referred us to STD experts in the State Health Department who gave us contacts in the District and County Health Departments. The staff in the District and County Health Departments connected us with staff at lower administrative levels in their organizations, as well as with people in the community outside of the public health system.
Public health staff at different administrative levels often had different personal networks of community contacts to whom they referred us. Public health staff in grassroots outreach positions, such as DIS and Minority AIDS Coordinators, referred us to CBOs, community activists, and other community leaders. Those at higher administrative levels tended to refer us to more specialized staff within the Health Department, to other health service providers, or to people in administrative positions in other community institutions.
We also identified people from community-based institutions by direct contact. We began this process by subscribing to the local newspapers and ordering the local telephone books from our study sites. The newspapers were used to gain an understanding of the communities and to see whether there were community leaders we should contact. We looked in the telephone books under headings such as health care, clinics, hospitals, organizations, social service, and substance abuse.
We made direct calls to institutions that we thought relevant to our study. When we contacted these institutions, we asked to speak with staff members most involved in issues of STDs, drug abuse, health education, or infectious diseases. If the institutions did not deal directly with health service delivery, we asked to speak with those individuals most informed about community issues, minority issues, drug use, prostitution, social services, health care, schools, religion, or politics, depending on the institution.
This careful preliminary work in each site helped us to minimize refusals for interviews and to optimize our time in the field. Beginning with a few key people in each community, we could use personal referrals to gain entree to others both before and during our site visits. Following these procedures, we were able to schedule nearly 75 percent of our field time with interviews in advance, leaving the other 25 percent available for additional referrals in the field. Moreover, repeat referrals increased our confidence that we were accessing the "right community."
Once we had completed a list of individuals and organizations with whom we wanted to speak during our site visits, the same Battelle researcher who had made the initial contact called potential respondents back to set up a definite interview appointment. At this time, additional referrals were often made, leading to more contacts and possible interviews. If an individual selected for the study was not available at the time of our site visit, we asked for a referral to someone else in the organization who could provide similar expertise.
2.6 Data Collection
The primary method of data collection was semi-structured, open-ended interviews, conducted with individuals and in group settings. Secondary data were used to augment the interview and observational data. The secondary data included a literature review, public health reports, epidemiological summaries, progress reports of the local and state health departments, and any other documents that helped us understand the local socio-cultural context.
The interview guides for this study were developed based on the study questions laid out in the Assessment Protocol. The interview guides were designed to be as open-ended as possible, in order to allow respondents to elaborate on their ideas and opinions. We used the guides as a way to introduce the points we thought crucial and interesting and to remind us of probes we could use to delve deeper into specific topics. In order to make the interviewing process most efficient, separate interview guides were designed for different categories of respondents:
- State STD program managers,
- STD program staff,
- STD clinic staff and other health care providers, and
- Community leaders.
While conducting the pilot study, we found that we needed a separate interview guide for local researchers to solicit information on the health care situation in the community without asking unnecessarily specific questions about clinic operations.
A core set of topics was linked to the primary study questions and discussed with all respondents. These included:
- Who do you think is at greatest risk for syphilis?
- What institutions do you think are best able to reach these people?
- What are the best ways to get out messages to these people?
- What are the barriers to reaching these people with prevention efforts or messages?
- If you were able to design the ideal syphilis prevention and control program, what would it be?
In addition to the core issues, each interview guide addressed topics within the particular area of expertise of specific respondents. The matching of open-ended interview guides to respondent types is shown in Table 2.2.
Table 2.2 Matching Interview Guides to Respondent Categories
Interview Guide | Categories of Respondents |
State Program Manager | State, District, or County public health office head. |
STD Program Staff | People in public health departments from mid- or lower-managerial level to Disease Intervention Specialists (DIS) to outreach workers. |
STD Clinic Staff and Health Care Providers | People at all levels in the public health clinics and in other public and private health care. |
Local Researcher | People in research positions either in academia, in health care settings, or in private consultation. |
Community Leaders | People in the community who did not fall within a health care or research setting, such as teachers, politicians, community activists, social service workers, and others. |
The types and numbers of interviews completed are summarized in Table 2.3, and the number of interviews is broken down by site in Table 2.4. Respondents interviewed for this study were health department administrators and providers, other public and private health care providers, representatives of local social service agencies, community leaders, and representatives of community-based organizations (CBOs) whose reach includes those thought to be at high risk for syphilis. Although one of the main purposes of our research was to gain an understanding of how people in key community organizations characterize individuals with a high probability of transmitting syphilis, it was not feasible to conduct a survey of core transmitters because of funding and time limitations.
Table 2.3 Number of Interviews per Respondent Category
Category of Respondent | Number of Interviews |
STD Program Administrators | 27 |
Representatives of CBOs | 26 |
Disease Intervention Specialists | 24 |
Community Health Center Representatives | 21 |
Community Leaders | 15 |
Nurses | 14 |
STD Program Staff | 13 |
Private Physicians | 13 |
Local Researchers/Experts | 7 |
Representatives of Drug Treatment Facilities | 6 |
Representatives of Corrections Facilities | 5 |
School Representatives | 4 |
Church Representatives | 3 |
Table 2.4 Number of Respondents Per Site
Site | Urban | Rural
Montgomery County, Alabama | 27 |
Lowndes County, Alabama | | 14
Hinds County, Mississippi | 25 |
Humphreys County, Mississippi | | 17
Richland County, South Carolina | 23 |
Orangeburg County, South Carolina | | 22
Shelby County, Tennessee | 34 |
Tunica County, Mississippi | | 16
In each state, we spoke with representatives of the STD/HIV division in the State Health Department. In three sites, the urban component was the state capital, and the State and County Health Departments were located in the same city. In Tennessee, we spoke with staff in the Memphis/Shelby County Health Department, as well as representatives of the State Health Department in Nashville. In the rural sites, we spoke with the District Health Officers, even if they were located in a county other than our study county. In both the urban and rural areas, we interviewed all disease intervention specialists (DIS), either individually or in group interviews. We spoke with public health nurses in the urban STD clinics and the rural multi-service clinics. In each urban site, we spoke with the Minority AIDS Coordinator, HIV/AIDS Educators, and other administrative staff in the State or County Health Department.
We interviewed physicians and nurses in every health care institution in the rural counties. This included community health centers and private physicians' offices. In the urban sites, we spoke with representatives of Federally Qualified Health Centers in three out of the four cities, representatives of local hospitals in three cities, and representatives of substance abuse treatment facilities in three cities. In addition, in the urban areas we spoke with local AIDS activists, church-based health center representatives, corrections facility health care providers, local researchers, heads of local CBOs, and other community leaders. In the rural areas we also spoke with local activists, heads of CBOs, law enforcement officials, and politicians.
We had no refusals or no-shows for scheduled interviews, although there were two replacements in the field, both involving providers at rural sites who were busy with patients at the scheduled time. In both cases, we interviewed another provider from the same clinic. In one of the eight study communities, a community health center could not agree to participate without approval from its Board of Directors. We sent additional materials about the research project for this review. After the Board cleared participation, staff concerns were alleviated and we were able to interview doctors and nurses at that institution.
2.7 Field Visits
The pilot site visit to Hinds and Humphreys Counties, Mississippi, took place in November 1995. The remaining three site visits took place between January and early March 1996. A total of six researchers conducted the field visits: five went to the pilot site, and four went to each of the other three sites. Two researchers went to all four study sites; two went to three sites, one went to two sites, and one went to one site.
At each site, all four team members entered the field at the urban location. For the first half-day, all four researchers met with STD representatives at the State/County Health Department, usually first in a group setting to explain the research and then breaking into individual interviews for the rest of the morning. In Tennessee, two researchers met with the urban County Health Department leaders in Memphis while the other two met with State Health Department STD leaders in Nashville.
The two research teams split up in the afternoon of the first day and conducted interviews in the urban site. On the second day, the rural team traveled to the rural site to conduct interviews for the next three days. On the fifth day, the rural team returned to the urban site to finish interviewing and debrief with the urban team. The schedule of the site visits can be seen in Table 2.5.
Table 2.5 Schedule for Field Activities
 | Day 1 | Day 2 | Day 3 | Day 4 | Day 5
AM | Meet w/ Key Public Health Staff and Community Representatives | Interviewing (urban); Travel to Rural Site | Interviewing | Interviewing | Interviewing
PM | Interviewing (urban) | Interviewing; Meet w/ Key Rural Representatives | Interviewing | Interviewing | Team Debrief on Key Site Issues; Travel
In both the urban and rural sites, most interviews were conducted by two-person teams so that one person could concentrate on asking questions and probing while the other took notes, ran the tape recorder, and probed when necessary. Most interviews lasted between one and two hours, depending on the informant's expertise and the level of detail given in describing public or private health care or other organizational operations.
Because of scheduling overlaps, there were times in both the urban and rural sites when two interviews had to be conducted simultaneously and the research teams split up to conduct interviews alone. Because of the density of possible respondents, this occurred more often in the urban areas than in the rural areas, especially when additional interviews were scheduled in the field, building on the local professional and personal networks of respondents.
At the beginning of each interview, a Battelle researcher explained the objectives of the research, informed the respondent that the interview was confidential and that no names or recognizable identifiers would be attached to their statements, and asked permission to tape-record the interview. Respondents asked not to be taped in fewer than 10 percent of interviews. In no interview was a tape the only record of the data: full written notes were kept for all interviews, and tapes were used only to supplement the written notes.
2.8 Data Management and Data Analysis
Each researcher was responsible for transcribing field notes into electronic data files. The two-person teams reconciled their typed notes and audio tapes into one text file for each interview. All text data files were collected in a central computer directory of completed interview transcripts. Transcripts were then run through a computerized text data analysis system, The Ethnograph.
The research questions, interview questions, and code variables that drove the research process were developed into an analysis plan for the case studies and final report. We used the detailed study questions, and added concepts that emerged during interviewing and later coding to design the analysis codebook. (See Appendix C for the Codebook.) Two Battelle researchers were given the task of coding each data file according to the conceptual variables laid out in the codebook. A third researcher reconciled the coding of the other two researchers, and the final codes were then entered into the analysis files.
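As a concrete illustration of the double-coding step described above, the short sketch below flags interview segments on which two coders' code assignments disagree, so that a third researcher can reconcile them. The data structures are assumptions for illustration only and do not reflect The Ethnograph's file formats.

```python
# Illustrative sketch of flagging coding disagreements for reconciliation.
# Each coder's output is represented as a dict mapping a segment ID to the
# set of codes assigned; this format is hypothetical.

from typing import Dict, List, Set, Tuple


def find_disagreements(coder_a: Dict[str, Set[str]],
                       coder_b: Dict[str, Set[str]]) -> List[Tuple[str, Set[str], Set[str]]]:
    """Return (segment_id, codes_a, codes_b) for every segment coded differently."""
    flagged = []
    for seg_id in sorted(set(coder_a) | set(coder_b)):
        a = coder_a.get(seg_id, set())
        b = coder_b.get(seg_id, set())
        if a != b:
            flagged.append((seg_id, a, b))
    return flagged


# A third researcher would review each flagged segment and record the final code.
```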
Coded data fields were organized into text files by code variables. Major code variables are summarized in Table 2.6. Within each major coding category, sub-codes were used to further refine the content of the field. Definitions of these codes, and the sub-codes developed for them, are presented in the codebook in Appendix C.
Coded data were organized in two ways for analysis: by site and by major code variables. Copies of the coded data were sorted into four site binders (one for each site pair) and eight code binders (one for each major code across all sites). The case studies were written for each site using interview data, supplemented by documentary evidence such as background literature, agency reports, and organizational brochures and pamphlets. The site binders were used to support the case study reports for each of the four study sites. The binders separated by code variable were used to write the cross-site final report.
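The sketch below illustrates, under assumed data structures, the two ways coded interview segments were organized: grouped by site for the individual case studies and grouped by major code variable for the cross-site report. The segment fields and example values are hypothetical.

```python
# Illustrative sketch of organizing coded interview segments two ways:
# by site (the "site binders") and by major code variable (the "code binders").
# Segment structure and example values are hypothetical.

from collections import defaultdict
from typing import Dict, List, NamedTuple


class CodedSegment(NamedTuple):
    site: str           # e.g., "Montgomery/Lowndes"
    respondent_id: str
    code: str           # major code variable from the codebook
    text: str           # excerpt from the interview transcript


def group_by_site(segments: List[CodedSegment]) -> Dict[str, List[CodedSegment]]:
    """Collect all segments for each site pair (used for the case study reports)."""
    binders: Dict[str, List[CodedSegment]] = defaultdict(list)
    for seg in segments:
        binders[seg.site].append(seg)
    return dict(binders)


def group_by_code(segments: List[CodedSegment]) -> Dict[str, List[CodedSegment]]:
    """Collect all segments for each major code across sites (used for the cross-site report)."""
    binders: Dict[str, List[CodedSegment]] = defaultdict(list)
    for seg in segments:
        binders[seg.code].append(seg)
    return dict(binders)
```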
2.9 Quality Control of Data Collection and Data Analysis
In descriptive case studies like this one, we are concerned with reliability (replicability of the results), construct validity (correct operational definition of observations), and external validity (generalizability). Internal validity (the degree to which hypothesized relationships are causal rather than spurious) is not relevant since this case study did not seek to evaluate a priori hypotheses.
The most important threats to reliability come from uneven administration of the study protocol, which is a question of quality control in data collection. While site visits cannot be made identical in a case study such as this one, we did our best to minimize deviations from the protocol in the field. We know of no events in the field that deviated from standard procedures seriously enough to bring our findings into question. We are reasonably certain that replication of this study in other communities would not lead to fundamentally different conclusions.
We also took steps to verify the accuracy of our data throughout data collection and data analysis. There were multiple interviewers for almost all interviews, providing an ongoing check for validity and reliability of data collection. After the "note taker" transcribed notes into an electronic file, the other researcher added any additional information from his or her own field notes. The audio tapes for each interview were used to fill in missing information and to catch recording errors. The case studies for each site were sent to leaders in the state and local STD programs interviewed in that site so that they could fill in any missing information, correct any misunderstandings, or add comments.
Accurate operational definition of the study (construct validity) relates to whether we looked at the right things to answer the research questions. Did we talk to the right people, and did we ask them the right things? What proportion of the "right people" did we talk to? Would the results have been appreciably different if we had talked to different people in these communities, or if we had collected different data from the same people?
We are reasonably confident that we collected the right information to address the study questions and that we were thorough in data collection from all sources. We sought input from CDC and from knowledgeable researchers at other institutions in conceptualizing the study and in accessing existing sources of data on the study topic. Our method of networking to identify informants within communities was exhaustive, yielding a large number of interviews through repeat referrals. We had a low refusal rate among those from whom we requested interviews.
This study was restricted to the question of health services delivery and drew on the judgment of individuals with specialized knowledge of service needs and barriers in populations judged to be at high risk of syphilis infection. We did not interview actual or potential service users on these issues; a survey of users or community members would have been costly and, given clearance delays, time-consuming to implement. However, a client perspective could have validated the perceptions of service providers and might have uncovered differences in perspective with policy implications for service delivery.
Generalizability of this study to other settings depends on the bias introduced by purposive site selection and is linked to the clarity of the site selection design and the rigor with which it was carried out. To protect generalizability as much as possible, we were careful to specify the criteria for site selection before identifying any specific sites for data collection. The units eligible for this study were communities in the southern states identified by CDC on the basis of high syphilis morbidity during the 1990 epidemic. Multiple sites met these criteria, and choices were made to include some rather than others for reasons of convenience. We know of no systematic bias introduced by this procedure.
Two states were eliminated from this study because other research projects were being conducted there. This may introduce a bias if the presence of these other projects either reflects or causes a significant difference in the reach of STD prevention services. We have no evidence that this is the case. However, these results may not generalize to areas where a pre-existing demonstration project has affected the program's capacity to reach high-risk populations.