2009 June Omnibus
Prepared by Princeton Survey Research Associates International
The 2009 June Omnibus Survey obtained telephone interviews with a nationally representative sample of 1,005 adults living in the continental United States. The survey was conducted by Princeton Survey Research Associates International. The interviews were conducted in English by Princeton Data Source, LLC from June 18 to June 21, 2009. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ±3.6%. Details on the design, execution and analysis of the survey are discussed below.
This report compares data from the June 2009 Omnibus Survey to prior Pew Internet Tracking Surveys. Both types of surveys collect data from nationally representative dual-frame (landline and cell phone) samples, employ the same respondent selection process, and identify internet users using identical questions. They are conducted by the same survey research firm, Princeton Survey Research Associates International, at the same field house. However, there are differences between the two types of surveys that should be noted when trending data across them. First, tracking surveys consist of roughly 2,250 interviews completed over the course of three to four weeks. These surveys maintain a very close 2-to-5 ratio of weekend-to-weekday interviews, to minimize the impact of day-of-the-week effects. Omnibus surveys, in contrast, consist of roughly 1,000 interviews completed over the course of four days, usually a Thursday-to-Sunday timeframe. There is no specific control in omnibus surveys for weekend-to-weekday interview ratio. To the extent that day of the week impacts technology use and online behavior, this may introduce variance in the data across the two types of surveys.
Moreover, tracking surveys follow a 7-call design in which sample that has not reached a final disposition at the end of seven days is retired, unless there is an outstanding appointment or callback for that telephone number. The omnibus surveys use a 4-call design over the course of the 4-day field period. One result of these different approaches is that tracking surveys generally achieve higher response rates than omnibus surveys. Again, this difference could introduce variance in the data across the two types of surveys.
Design and Data Collection Procedures
A combination of landline and cellular random digit dial (RDD) samples was used to represent all adults in the continental United States who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications.
Numbers for the landline sample were selected with probabilities in proportion to their share of listed telephone households from active blocks (area code + exchange + two-digit block number) that contained three or more residential directory listings. The cellular sample was not list-assisted, but was drawn through a systematic sampling from dedicated wireless 100-blocks and shared service 100-blocks with no directory-listed landline numbers.
Interviews were conducted from June 18 to June 21, 2009. As many as 5 attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Each household received at least one daytime call in an attempt to find someone at home.
For the landline sample, interviewers asked to speak with the youngest adult male or youngest female currently at home based on a random rotation. If the target adult was not available, interviewers asked to speak with the youngest adult of the other gender. For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was an adult and in a safe place before administering the survey.
Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results. A two-stage weighting procedure was used to weight this dual-frame sample. A first-stage weight was applied to account for the overlapping sample frames. The first stage weight balanced the phone use distribution of the entire sample to match population parameters. The phone use parameter was derived from an analysis of the most recently available National Health Interview Survey (NHIS) data along with data from recent dual-frame surveys. This adjustment ensures that the dual-users are appropriately divided between the landline and cell sample frames.
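The first-stage adjustment described above amounts to a ratio adjustment: each phone-use group is scaled so its weighted share matches the population parameter. The sketch below illustrates the idea with made-up shares; the group labels and figures are hypothetical, not the survey's actual phone-use distribution or NHIS-derived parameter.

```python
# Hypothetical first-stage phone-use adjustment. Each group's weight is the
# ratio of its assumed population share (target) to its share in the
# unweighted sample. All shares here are illustrative.
sample_shares = {"landline_only": 0.15, "dual": 0.70, "cell_only": 0.15}
target_shares = {"landline_only": 0.14, "dual": 0.68, "cell_only": 0.18}

# Weight for every respondent in group g: target share / sample share.
stage1_weight = {g: target_shares[g] / sample_shares[g] for g in sample_shares}

# Applying these weights reproduces the target distribution exactly:
# sample_share * weight = target_share for each group.
```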
The second stage of weighting balanced sample demographics to population parameters. The sample was balanced to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. The basic weighting parameters came from a special analysis of the Census Bureau’s 2008 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States. The population density parameter was derived from Census 2000 data. The telephone usage parameter came from the analysis of NHIS data.
Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.
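The Deming Algorithm referenced above is a form of iterative proportional fitting (raking): the weights are repeatedly scaled so that each demographic margin matches its population target, cycling through the variables until the distributions stabilize. A minimal sketch of that iteration, using two illustrative variables and made-up targets rather than the survey's actual parameters, looks like this:

```python
import numpy as np

def rake(weights, groups, targets, iters=100):
    """Iterative proportional fitting: scale weights until each variable's
    weighted distribution matches its target shares."""
    w = weights.astype(float).copy()
    for _ in range(iters):
        for var, target in targets.items():
            total = w.sum()  # fixed while adjusting this variable's levels
            for level, share in target.items():
                mask = groups[var] == level
                w[mask] *= share * total / w[mask].sum()
    return w

# Toy sample of six respondents; variables and targets are illustrative.
groups = {
    "sex": np.array(["m", "m", "m", "f", "f", "f"]),
    "age": np.array(["young", "old", "old", "young", "old", "old"]),
}
targets = {
    "sex": {"m": 0.49, "f": 0.51},
    "age": {"young": 0.40, "old": 0.60},
}
w = rake(np.ones(6), groups, targets)
# After convergence, the weighted sex and age distributions match the targets.
```

This is only the balancing step; the trimming of extreme weights mentioned above would be applied afterward, typically by capping weights at chosen percentiles.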
Effects of Sample Design on Statistical Inference
Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called "design effect" or deff represents the loss in statistical efficiency that results from unequal weighting, introduced here to compensate for the sample design and systematic non-response. The total sample design effect for this survey is 1.38.
PSRAI calculates the composite design effect for a sample of size n, with each case having a weight, wi, as:

deff = n Σ wi² / (Σ wi)²
In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect (√deff). Thus, the formula for computing the 95% confidence interval around a percentage is:

p̂ ± √deff × 1.96 × √( p̂(1 − p̂) / n )
where pˆ is the sample estimate and n is the unweighted number of sample cases in the group being considered.
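The two formulas above can be checked numerically. The sketch below computes the composite design effect from a set of weights and the design-corrected margin of error; plugging in the report's figures (deff = 1.38, n = 1,005, p = 0.5) reproduces the stated ±3.6% margin of sampling error.

```python
import math

def design_effect(weights):
    # deff = n * sum(w_i^2) / (sum(w_i))^2; equals 1.0 for equal weights.
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

def margin_of_error(deff, n, p=0.5):
    # Half-width of the 95% confidence interval, inflated by sqrt(deff).
    # p = 0.5 gives the widest (worst-case) interval.
    return math.sqrt(deff) * 1.96 * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1.38, 1005)  # roughly 0.036, i.e. ±3.6 points
```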
The survey’s margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample, which is the interval around 50%. For example, the margin of error for the entire sample is ±3.6%. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 3.6 percentage points from their true values in the population. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
Table 2 reports the disposition of all sampled telephone numbers ever dialed from the original telephone number samples. The response rate estimates the fraction of all eligible respondents in the sample that were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:
- Contact rate – the proportion of working numbers where a request for interview was made
- Cooperation rate – the proportion of contacted numbers where a consent for interview was at least initially obtained, versus those refused
- Completion rate – the proportion of initially cooperating and eligible interviews that were completed
Thus the response rate for the landline sample was 15 percent. The response rate for the cellular sample was 18 percent.
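The product-of-rates calculation can be illustrated as follows. The report gives only the final response rates (15 percent landline, 18 percent cellular), so the three component rates below are hypothetical values chosen to reproduce the landline figure, not the survey's actual dispositions.

```python
# Hypothetical component rates; the actual contact, cooperation, and
# completion rates for this survey are reported in Table 2, not here.
contact_rate = 0.77      # share of working numbers where a request was made
cooperation_rate = 0.28  # share of contacts yielding at least initial consent
completion_rate = 0.70   # share of cooperating, eligible cases completed

response_rate = contact_rate * cooperation_rate * completion_rate
# With these assumed inputs the product comes out to about 0.15 (15 percent).
```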