E-Reading Rises as Device Ownership Jumps
January 2014 Omnibus Week 1
Prepared by Princeton Survey Research Associates International
January 2-5, 2014
The PSRAI January 2014 Omnibus Week 1 obtained telephone interviews with a nationally representative sample of 1,005 adults living in the continental United States. Telephone interviews were conducted by landline (500) and cell phone (505, including 268 without a landline phone). The survey was conducted by Princeton Survey Research Associates International (PSRAI). Interviews were done in English and Spanish by Princeton Data Source from January 2 to 5, 2014. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ± 3.5 percentage points.
Design and Data Collection Procedures
A combination of landline and cellular random digit dial (RDD) samples was used to represent all adults in the continental United States who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications.
Numbers for the landline sample were drawn with equal probabilities from active blocks (area code + exchange + two-digit block number) that contained three or more residential directory listings. The cellular sample was not list-assisted, but was drawn through a systematic sampling from dedicated wireless 100-blocks and shared service 100-blocks with no directory-listed landline numbers.
Interviews were conducted from January 2 to 5, 2014. As many as three attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Each phone number received at least one daytime call when necessary.
For the landline sample, interviewers asked to speak with the youngest adult male or female currently at home based on a random rotation. If no male/female was available, interviewers asked to speak with the youngest adult of the other gender. This systematic respondent selection technique has been shown to produce samples that closely mirror the population in terms of age and gender when combined with cell interviewing.
For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was an adult and in a safe place before administering the survey.
Weighting and Analysis
Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results. The sample was weighted to match national adult general population parameters. A two-stage weighting procedure was used to weight this dual-frame sample.
The first stage of weighting corrected for different probabilities of selection associated with the number of adults in each household and each respondent’s telephone usage patterns.[1] This weighting also adjusts for the overlapping landline and cell sample frames and the relative sizes of each frame and each sample.
This first-stage weight for the ith case can be expressed as:
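The expression is commonly written in the following dual-frame form. The notation below is assumed for illustration (S_LL and S_CP for the landline and cell sample sizes, AD_i for the number of adults in respondent i's household, and LL_i, CP_i as 0/1 indicators of landline and cell service) and is a sketch of the general approach, not necessarily PSRAI's exact published formula:

```latex
WT_i = \left[ \frac{S_{LL}}{S_{LL}+S_{CP}} \cdot \frac{1}{AD_i} \cdot LL_i
     + \frac{S_{CP}}{S_{LL}+S_{CP}} \cdot CP_i \right]^{-1}
```

The inverse of a respondent's combined chance of selection across both frames becomes the first-stage weight, so people reachable through both frames (or living in households with more adults sharing one landline) are weighted down accordingly.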
The second stage of weighting balanced sample demographics to population parameters. The sample is balanced to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. The basic weighting parameters came from the U.S. Census Bureau’s 2012 American Community Survey data.[2] The population density parameter was derived from Census 2010 data. The telephone usage parameter came from an analysis of the January-June 2013 National Health Interview Survey.[3]
Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.
Effects of Sample Design on Statistical Inference
Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called “design effect” or deff represents the loss in statistical efficiency that results from unequal weights. The total sample design effect for this survey is 1.31.
PSRAI calculates the composite design effect for a sample of size n, with each case having a weight wᵢ, as:

deff = n Σ wᵢ² / (Σ wᵢ)²
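The formula can be checked numerically. A short sketch (the weight vectors are hypothetical illustrations, not the survey's weights):

```python
# Composite (Kish) design effect: deff = n * sum(w_i^2) / (sum(w_i))^2
def design_effect(weights):
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

print(design_effect([1.0, 1.0, 1.0, 1.0]))  # equal weights -> 1.0 (no efficiency loss)
print(design_effect([0.5, 0.5, 1.5, 1.5]))  # unequal weights -> 1.25
```

Equal weights give deff = 1 (a simple random sample loses nothing); the more the weights vary, the larger deff grows, which is why weight trimming keeps the loss of precision in check.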
In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect (√deff). Thus, the formula for computing the 95% confidence interval around a percentage is:

p̂ ± 1.96 × √deff × √(p̂(1 − p̂)/n)

where p̂ is the sample estimate and n is the unweighted number of sample cases in the group being considered.
The survey’s margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample, namely the interval around 50%. For example, the margin of error for the entire sample is ±3.5 percentage points. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 3.5 percentage points away from their true values in the population. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording, and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
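Plugging the survey's own figures into the confidence-interval formula reproduces the reported margin: with n = 1,005 completed interviews, deff = 1.31, and the worst-case proportion p = 0.5, the half-width of the 95% interval works out to about 3.5 percentage points. A quick check:

```python
import math

n = 1005      # completed interviews (total sample)
deff = 1.31   # reported total design effect
p = 0.5       # worst-case proportion, which maximizes the interval
moe = 1.96 * math.sqrt(deff) * math.sqrt(p * (1 - p) / n)
print(round(100 * moe, 1))  # -> 3.5 percentage points
```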
Table 2 reports the disposition of all sampled telephone numbers ever dialed from the original telephone number samples. The response rate estimates the fraction of all eligible sample that was ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:[4]
- Contact rate – the proportion of working numbers where a request for interview was made[5]
- Cooperation rate – the proportion of contacted numbers where consent for an interview was at least initially obtained, as opposed to refused
- Completion rate – the proportion of initially cooperating and eligible interviews that were completed
Thus the response rate for the landline sample was 7 percent. The response rate for the cellular sample was 9 percent.
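The product-of-rates calculation can be written out directly. The component rates below are hypothetical placeholders chosen only to illustrate how three moderate rates compound into a single-digit response rate; Table 2's actual component values are not reproduced here:

```python
def response_rate(contact, cooperation, completion):
    """Response rate as the product of three component rates."""
    return contact * cooperation * completion

# Hypothetical component rates, for illustration only
print(f"{response_rate(0.50, 0.40, 0.35):.0%}")  # -> 7%
```

Because the three rates multiply, even reasonable-looking components (half of working numbers contacted, 40 percent cooperating, 35 percent completing) yield an overall rate in the single digits.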
[1] i.e., whether respondents have only a landline telephone, only a cell phone, or both kinds of telephone.
[2] ACS analysis was based on all adults excluding those living in institutional group quarters (GQs).
[3] Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, January-June 2013. National Center for Health Statistics. Dec 2013.
[4] PSRAI’s disposition codes and reporting are consistent with the American Association for Public Opinion Research standards.
[5] PSRAI assumes that 75 percent of cases that result in a constant disposition of “No answer” or “Busy” are actually not working numbers.