The Blogger Callback Survey, sponsored by the Pew Internet and American Life Project (PIALP), conducted telephone interviews with 233 respondents who had identified themselves as bloggers in previous PIALP surveys. The interviews were conducted in English by Princeton Data Source, LLC, from July 5, 2005, to February 17, 2006. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ±6.7%.
The low number of respondents is a significant limitation to this study.
It is important to note some limitations of this callback survey of bloggers. First, because it is a callback study, it carries some inherent bias: not everyone reached in a random sample is willing to take another survey. In addition, a relatively large number of people who told us in an earlier survey that they kept a blog or online journal said in this survey that they were no longer doing so. As a result, this survey has a response rate of 71% and a relatively low “n,” or number of respondents, which can make it difficult to perform complex analyses of the data with a high degree of certainty. Finally, because of the difficulty of finding bloggers to interview, the survey was conducted over a long period, which means the blogosphere may have changed while we were asking our questions.
In addition, some of the question wording in the survey may have used terms to describe elements of a blog that are different from the terms that some bloggers use. For example, a blogroll is also sometimes called a friends list or a subscription list. The term “hits” used to ask bloggers about their traffic has inconsistent meaning across software packages and thus may not accurately measure traffic to a particular weblog.
Respondents who keep a blog were eligible for the callback survey.
Sample for this survey was collected from several recent PIALP general population surveys.1 All respondents who said they kept their own blogs were eligible for this callback survey. Sample for the original surveys was drawn using standard list-assisted random-digit dialing (RDD) methodology.
Interviews were conducted from July 5, 2005, to February 17, 2006. As many as 10 attempts were made to contact every sampled telephone number. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Each household received at least one daytime call in an attempt to find someone at home.
Weighting was used to approximate the demographic characteristics of the national population.
Weighting is generally used in survey analysis to compensate for patterns of nonresponse that might bias results. The interviewed sample of all bloggers was weighted to match parameters for sex, age, education, race, Hispanic origin, and region. These parameters were defined as the weighted demographics of all self-identified bloggers from the general population surveys from which the callback sample was drawn. Table 1 compares weighted and unweighted sample distributions to population parameters.
Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population.
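To illustrate the kind of balancing the Deming Algorithm performs, the sketch below implements iterative proportional fitting ("raking"), the standard technique of this family: respondent weights are repeatedly rescaled so that the weighted share of each category on each balancing variable matches its target. The respondents, variables, and target shares here are hypothetical illustrations, not the survey's actual parameters, and this sketch omits the weight trimming described above.

```python
# Minimal sketch of iterative proportional fitting ("raking"),
# the family of techniques the Deming Algorithm belongs to.
# All sample values and targets below are hypothetical.

def rake(weights, groups, targets, iters=50):
    """Adjust unit weights so each balancing variable's weighted
    category shares match its target shares.
    weights: list of per-respondent weights
    groups:  dict var -> list of category labels, one per respondent
    targets: dict var -> dict category -> target share (sums to 1)
    """
    w = list(weights)
    total = sum(w)
    for _ in range(iters):
        for var, cats in groups.items():
            # current weighted total of each category on this variable
            share = {}
            for wi, c in zip(w, cats):
                share[c] = share.get(c, 0.0) + wi
            # rescale each respondent's weight toward the target share
            for i, c in enumerate(cats):
                w[i] *= targets[var][c] * total / share[c]
    return w

# Hypothetical sample: four respondents, two balancing variables.
weights = [1.0, 1.0, 1.0, 1.0]
groups = {"sex": ["m", "m", "f", "f"],
          "age": ["young", "old", "young", "old"]}
targets = {"sex": {"m": 0.5, "f": 0.5},
           "age": {"young": 0.6, "old": 0.4}}
w = rake(weights, groups, targets)
```

After raking, the weighted share of "young" respondents equals the 60% target while the sex distribution stays balanced, which is the simultaneous matching the paragraph above describes.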
Additional national telephone surveys were used to capture an up-to-date estimate of the percentage of internet users who are currently blogging.
Random-digit telephone surveys conducted by Princeton Survey Research Associates International in two waves (November 29 to December 31, 2005, and February 15 to April 6, 2006) yielded a sample of 7,012 adults. The demographic information for internet users and bloggers listed in this report is derived from those large-scale surveys. For results based on internet users (n=4,753), the margin of sampling error is plus or minus 3 percentage points. For results based on bloggers (n=308), the margin of sampling error is plus or minus 7 percentage points.
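As a rough check on the reported margins, the sketch below computes the textbook 95% margin of sampling error at the most conservative proportion (p = 0.5). Note that published survey margins such as those above typically also fold in a design effect from weighting, so the simple-random-sampling formula here yields somewhat smaller figures than the reported ones.

```python
# Back-of-the-envelope margin of sampling error, assuming simple
# random sampling (no design effect from weighting).
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, at proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(moe(4753), 1))  # internet users: ~1.4 points before design effect
print(round(moe(308), 1))   # bloggers: ~5.6 points before design effect
```

The gap between these base figures and the reported ±3 and ±7 points reflects the variance added by weighting.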
Further details about survey methodology are available in the questionnaire associated with this report, available at: http://www.pewinternet.org/Reports/2006/Bloggers.aspx
1. The surveys used for callback sample were: February 2004 and 2005 Tracking Surveys; November 2004 Tracking; November Activity Tracking; January 2005 Tracking; September 2005 Tracking; the Exploratorium Survey; Nov/Dec 2005 Tracking Survey; the Spyware Survey; and PSRAI’s Demographic Tracking Survey.