April 16, 2003

The Ever-Shifting Internet Population: A new look at Internet access and the digital divide

Methodology

Telephone Survey

This report is based on the findings of a daily tracking survey on Americans’ use of the Internet. The results in this report are based on data from telephone interviews conducted by Princeton Survey Research Associates from March 1 to March 31 and May 2 to May 19, 2002, among a sample of 3,553 adults, age 18 and older. For results based on the total sample, one can say with 95% confidence that the error attributable to sampling is plus or minus 2 percentage points. For results based on Internet users (n=2,259), the margin of sampling error is plus or minus 2 percentage points. For results based on non-users (n=1,294), the margin of error is plus or minus 3 percentage points. In addition to sampling error, question wording and practical difficulties in conducting telephone surveys may introduce some error or bias into the findings of opinion polls. Other data in the survey are drawn from other Pew Internet Project phone surveys conducted in March, April, and May-June 2000 and in December 2002.
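As a rough check on these figures, the conventional 95% margin of error for a proportion is 1.96 × sqrt(p(1−p)/n), evaluated at the worst case p = 0.5. The sketch below is our illustration, not the survey's own calculation; it yields slightly smaller values than those reported, which is expected since published margins typically also reflect a design effect from weighting that this simple formula omits.

```python
import math

def moe_95(n, p=0.5):
    """Approximate 95% margin of error, in percentage points,
    for a proportion p estimated from a simple random sample of size n."""
    return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

for label, n in [("total sample", 3553), ("Internet users", 2259), ("non-users", 1294)]:
    print(f"{label} (n={n:,}): +/- {moe_95(n):.1f} points")
```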

The sample for this survey is a random digit sample of telephone numbers selected from telephone exchanges in the continental United States. The random digit aspect of the sample is used to avoid “listing” bias and provides representation of both listed and unlisted numbers (including not-yet-listed numbers). The design of the sample achieves this representation by random generation of the last two digits of telephone numbers selected on the basis of their area code, telephone exchange, and bank number.

New sample was released daily and was kept in the field for at least five days. This ensured that complete call procedures were followed for the entire sample. Additionally, the sample was released in replicates to make sure that the telephone numbers called were distributed appropriately across regions of the country. At least 10 attempts were made to complete an interview at every household in the sample. The calls were staggered over times of day and days of the week to maximize the chances of making contact with a potential respondent. Interview refusals were re-contacted at least once in order to try again to complete an interview. All interviews completed on any given day were considered to be the final sample for that day.

Non-response in telephone interviews produces some known biases in survey-derived estimates because participation tends to vary for different subgroups of the population, and these subgroups are likely to vary also on questions of substantive interest. In order to compensate for these known biases, the sample data are weighted in analysis. The demographic weighting parameters are derived from a special analysis of the Census Bureau’s most recently available Current Population Survey (March 2001). This analysis produces population parameters for the demographic characteristics of adults, age 18 or older, living in households that contain a telephone. These parameters are then compared with the sample characteristics to construct sample weights. The weights are derived using an iterative technique that simultaneously balances the distribution of all weighting parameters.
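The iterative technique described here is commonly known as raking, or iterative proportional fitting: the weights are repeatedly rescaled so that the weighted sample matches the population distribution on each demographic dimension in turn. The sketch below is a minimal illustration of the idea only; the categories, targets, and data are invented, not the project's actual weighting parameters.

```python
def rake(rows, targets, n_iter=50):
    """Minimal raking (iterative proportional fitting) sketch.
    rows: list of dicts mapping each dimension name to a category.
    targets: {dimension: {category: population share}}.
    Returns one weight per row."""
    weights = [1.0] * len(rows)
    for _ in range(n_iter):
        for dim, shares in targets.items():
            # Current weighted total of each category on this dimension.
            totals = {}
            for w, r in zip(weights, rows):
                totals[r[dim]] = totals.get(r[dim], 0.0) + w
            grand = sum(totals.values())
            # Rescale weights so each category matches its target share.
            for i, r in enumerate(rows):
                weights[i] *= shares[r[dim]] * grand / totals[r[dim]]
    return weights

# Invented example: a sample with 3 women and 1 man, target 50/50.
rows = [{"sex": "f"}, {"sex": "f"}, {"sex": "f"}, {"sex": "m"}]
targets = {"sex": {"f": 0.5, "m": 0.5}}
weights = rake(rows, targets)
```

With a single weighting dimension the procedure converges in one pass; with several dimensions, the loop cycles through them until all the weighted marginals stabilize at once.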

We calculate a response rate as the product of three individual rates:  the contact rate, the cooperation rate, and the completion rate. Of the residential numbers in the sample, 71.2% were contacted by an interviewer and 46.1% agreed to participate in the survey. Eighty-seven percent were found eligible for the interview. Furthermore, 93.5% of eligible respondents completed the interview. Therefore, the final response rate is 30.7%.
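The arithmetic behind the final figure can be verified directly from the three component rates (variable names are ours); note that the 87% eligibility figure is reported separately and does not enter the product.

```python
# Response rate as the product of the three component rates reported above.
contact_rate = 0.712      # residential numbers contacted by an interviewer
cooperation_rate = 0.461  # contacted numbers that agreed to participate
completion_rate = 0.935   # eligible respondents who completed the interview

response_rate = contact_rate * cooperation_rate * completion_rate
print(f"{response_rate:.1%}")  # -> 30.7%
```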

Group Interviews

We conducted a series of six group interviews with a total of 40 people, as well as three individual interviews. The group interviews were with a mix of new Internet users and non-users. The individual interviews were with non-users. The criterion for new users was that they had to have been online for a year or less and be older than 18. Non-users had to be older than 18. We recruited participants for our group interviews via Community Technology Centers (CTCs) in the greater Washington, DC-Baltimore area. We held groups at the CTCs with new users and non-users who had taken classes or were part of a class, and with those who had not taken a class or even made a prior visit to the center. We endeavored to get a broad mix of new users and non-users, recruiting from urban, suburban, and rural CTCs and from centers with predominantly African-American clients, a mix of Hispanic and African-American clients, or a mix of African-American and white clients. Groups lasted between 35 minutes and an hour and 15 minutes and were conducted in June and July of 2002. One or two interviewers conducted each group interview. Interviews were tape-recorded with the participants’ knowledge and oral consent. A short demographic questionnaire was administered to each participant before the start of the interview. Participants received a meal or snacks as an incentive to participate.