Teens, Privacy and Online Social Networks

Methodology

The Parents & Teens 2006 Survey, sponsored by the Pew Internet and American Life Project (PIAL), obtained telephone interviews with a nationally representative sample of 935 teens ages 12 to 17 and their parents living in telephone households in the continental United States. The survey was conducted by Princeton Survey Research Associates International (PSRAI). The interviews were done in English by Princeton Data Source, LLC from October 23 to November 19, 2006. Statistical results are weighted to correct for known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ±3.7%.

Prior to fielding the telephone survey, Harris Interactive conducted a series of six in-person focus groups with middle and high school students in two American cities in June 2006. Groups were single gender and organized by grade range: 7th and 8th, 9th and 10th, and 11th and 12th grades. Each group contained 7 to 8 participants. After the six in-person focus groups were completed, Harris Interactive also conducted a seventh, online, mixed-gender focus group with high-school-age participants. Participants were screened to meet several criteria. All participants had internet access at home. Participants in the in-person focus groups spent at least 5 hours per week online, and at least half of them accessed social networking sites at least a few times a month. Participants in the online focus group spent at least 15 hours per week online, and at least 80% of them accessed social networking sites at least a few times a month. At least half of the participants in the in-person and online focus groups had their own cell phone, and at least half of the girls and three-quarters of the boys participated in electronic or online gaming. The participants reflected a range of household income levels and racial and ethnic backgrounds. The 24 participants in the online focus group lived in states across the United States.

These qualitative results from the focus groups are not representative of the U.S. teen population. All participants were paid a modest cash incentive for their participation.

Details on the design, execution and analysis of the telephone survey are discussed below.


DESIGN AND DATA COLLECTION PROCEDURES 

Sample Design

The sample was designed to represent all teens ages 12 to 17 living in continental U.S. telephone households. The sample is also representative of parents living with their teenage children.

The telephone sample was pulled from previous PIAL projects fielded in 2004, 2005, and 2006. Households with a child age 18 or younger were called back and screened to find 12 to 17 year-olds. The original telephone samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications. These samples were drawn using standard list-assisted random digit dialing (RDD) methodology.

Contact Procedures

Interviews were conducted from October 23 to November 19, 2006. As many as 10 attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample.

Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Each household received at least one daytime call in an attempt to find someone at home. In each contacted household, interviewers first determined whether a child age 12 to 17 lived in the household. Households with no children in the target age range were deemed ineligible and screened out. In eligible households, interviewers first conducted a short interview with a parent or guardian and then interviewed the target child.


WEIGHTING AND ANALYSIS

Weighting is generally used in survey analysis to compensate for patterns of nonresponse that might bias results. The interviewed sample was weighted to match national parameters for both parent and child demographics. The parent demographics used for weighting were sex, age, education, race, Hispanic origin, marital status, and region (U.S. Census definitions). The child demographics used for weighting were gender and age. These parameters came from a special analysis of the Census Bureau’s 2005 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States that had a telephone.

Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.
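
For readers who want to see the general mechanics, the sketch below implements raking (iterative proportional fitting), the family of techniques to which the Deming Algorithm belongs. PSRAI's Sample Balancing program is proprietary, so the function, the weighting variables, and the target distributions shown here are illustrative assumptions rather than the survey's actual specification.

```python
# Minimal raking sketch: iteratively scale case weights so that each
# variable's weighted distribution matches its population target, then
# trim extreme weights. Targets and variables below are hypothetical.
import pandas as pd

def rake(df, targets, max_iter=100, tol=1e-6, trim=(0.3, 3.0)):
    w = pd.Series(1.0, index=df.index)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            shares = w.groupby(df[var]).sum() / w.sum()        # current weighted shares
            factor = df[var].map(pd.Series(target) / shares)   # per-case adjustment
            w = w * factor
            max_shift = max(max_shift, (factor - 1).abs().max())
        if max_shift < tol:
            break
    return w.clip(lower=trim[0], upper=trim[1])                # simple weight trimming

# Illustrative use with two made-up weighting variables and target shares.
sample = pd.DataFrame({
    "child_sex": ["M", "F", "F", "M", "F", "M", "M", "F"],
    "child_age": ["12-14", "15-17", "15-17", "12-14", "12-14", "15-17", "15-17", "12-14"],
})
targets = {
    "child_sex": {"M": 0.51, "F": 0.49},          # assumed shares, not census figures
    "child_age": {"12-14": 0.50, "15-17": 0.50},
}
weights = rake(sample, targets)
print(weights.round(3))
```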

Table 1 - Sample Demographics

Effects of Sample Design on Statistical Inference

Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called “design effect” or deff represents the loss in statistical efficiency that results from unequal weighting, introduced here to compensate for systematic non-response. The total sample design effect for this survey is 1.36.

PSRAI calculates the composite design effect for a sample of size $n$, with each case having a weight $w_i$, as:

Formula 1:

$$\mathrm{deff} = \frac{n \sum_{i=1}^{n} w_i^{2}}{\left( \sum_{i=1}^{n} w_i \right)^{2}}$$
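
The snippet below is a direct transcription of Formula 1; the weights are hypothetical, since the survey's case-level weights are not published.

```python
# Formula 1: deff = n * sum(w_i**2) / (sum(w_i))**2
def design_effect(weights):
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

example_weights = [0.8, 1.0, 1.2, 1.5, 0.6, 1.1]   # hypothetical case weights
print(round(design_effect(example_weights), 3))     # a value of 1.0 would mean no efficiency loss
```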

In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect ($\sqrt{\mathrm{deff}}$). Thus, the formula for computing the 95% confidence interval around a percentage is:

Formula 2:

$$\hat{p} \pm 1.96 \sqrt{\mathrm{deff}} \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}$$

where $\hat{p}$ is the sample estimate and $n$ is the unweighted number of sample cases in the group being considered.

The survey’s margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample, namely the interval around an estimate of 50%. For example, the margin of error for the entire sample is ±3.7%. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 3.7 percentage points away from their true values in the population. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording, and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
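
The arithmetic behind the reported margin of error can be checked with a few lines of code. The figures plugged in below come from this section (935 completed interviews, a total design effect of 1.36, and the worst-case proportion of 50%); the function itself is simply Formula 2 restated.

```python
import math

def margin_of_error(p_hat, n, deff, z=1.96):
    """Half-width of the design-adjusted 95% confidence interval (Formula 2)."""
    return z * math.sqrt(deff) * math.sqrt(p_hat * (1 - p_hat) / n)

# n = 935, deff = 1.36, worst-case proportion p_hat = 0.5, as reported above.
print(round(100 * margin_of_error(0.5, 935, 1.36), 1))   # -> 3.7
```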


RESPONSE RATE

Table 2 reports the disposition of all sampled callback telephone numbers ever dialed. The response rate estimates the fraction of all eligible respondents in the sample that were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:[1]

  • Contact rate – the proportion of working numbers where a request for interview was made – of 95 percent[2]
  • Cooperation rate – the proportion of contacted numbers where consent for the interview was at least initially obtained, versus those refused – of 62 percent
  • Completion rate – the proportion of initially cooperating and eligible households that agreed to and completed the child interview – of 79 percent

Thus the response rate for this survey was 46 percent.[3]
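
As a quick arithmetic check, multiplying the three rounded component rates reproduces the reported figure to within rounding; the published 46 percent is presumably computed from the unrounded rates.

```python
# Rounded component rates as reported above.
contact_rate     = 0.95
cooperation_rate = 0.62
completion_rate  = 0.79

response_rate = contact_rate * cooperation_rate * completion_rate
print(f"{response_rate * 100:.1f}%")   # -> 46.5%; the published 46 percent reflects unrounded rates
```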

Table 2 - Sample Disposition

Methodology prepared by Princeton Survey Research Associates International and Harris Interactive

  1. PSRAI’s disposition codes and reporting are consistent with the American Association for Public Opinion Research standards.
  2. PSRAI assumes that 75 percent of cases that result in a constant disposition of “No answer” or “Busy” over 10 or more attempts are actually not working numbers.
  3. The response rates for the original surveys that provided the callback sample averaged approximately 30 percent; thus the 46 percent response rate applies only to households that had already responded to one of those earlier surveys.
