
Effects of Badly Worded Survey Questions  

In questionnaire design, it is important both that the question wording expresses what is being measured and that appropriate response options are provided. However, the latter is sometimes overlooked, and a mismatch occurs between the question wording and the response options. So how detrimental are these mismatches? Below, we discuss the effects of badly worded survey questions, that is, how mismatches between question stems and response options affect a study. We hope this information helps you weigh whether it is more important for response options to match the question wording exactly, or to favor simpler, less wordy options.

Common Sample of Mismatched Questions

[Images: sample mismatched survey questions]

Effect of Mismatched Questions

  • Higher Item Nonresponse

Mismatched questions tend to produce higher item nonresponse because they demand additional cognitive processing. To answer a survey question, respondents must perceive the question, comprehend it, retrieve relevant information, form a judgment, and then report an answer (Jenkins and Dillman 1997; Tourangeau, Rips, and Rasinski 2000). When this extra mental effort becomes burdensome, some respondents will skip items or refuse to answer. In an experiment, Dillman et al. (2014) found that a mismatched version produced greater item nonresponse and different response distributions than a matched version. The study paired forced-choice response options with question stems that were either written to match that format or written as check-all stems (a mismatch).

  • Longer Response Time

Mismatched versions take longer to complete than matched versions because of the added comprehension and mapping burden. Respondents may only notice the mismatch when they try to map their answer onto the response options and find that none fit. At that point, they must spend additional effort choosing an option that approximates their intended answer. Van der Zouwen and Dijkstra (2002) found that mismatches and poorly designed response options (an inadequate range of response alternatives) were strongly associated with deviations from the classic interviewer/respondent question-answer sequence, in which the interviewer reads the question exactly as worded and receives an adequate answer. Mismatches tend to make respondents ask the interviewer for clarification, which lengthens survey completion time.

Confirmation by a Recent University of Nebraska-Lincoln Study: “The Effects of Mismatches Between Survey Question Stems and Response Options on Data Quality and Responses”

The experiment divided respondents into two groups. The first group received a survey version in which all question stems and response options were matched; the second group received a slightly mismatched version. The researchers' hypothesis was that mismatched questions would result in higher item nonresponse and longer response times.

Example Matched and Mismatched Survey Questions from the Study

Results of the Study

The item nonresponse rate was higher in the mismatched version than in the matched version: an item was about 1.6 times as likely to be left unanswered in the mismatched version as in the matched one.
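To see how such a comparison works in practice, here is a minimal sketch in Python; the counts are hypothetical, not the study's data, and simply illustrate how an item nonresponse rate is computed for each version and compared as a ratio.

```python
# Hypothetical counts -- for illustration only, not the study's actual data.
matched = {"answered": 940, "skipped": 60}      # matched stem/response format
mismatched = {"answered": 904, "skipped": 96}   # mismatched format

def nonresponse_rate(counts):
    """Share of items left unanswered."""
    total = counts["answered"] + counts["skipped"]
    return counts["skipped"] / total

rate_matched = nonresponse_rate(matched)        # 0.060
rate_mismatched = nonresponse_rate(mismatched)  # 0.096

# Relative likelihood of an item going unanswered in the mismatched version.
print(f"Matched nonresponse rate:     {rate_matched:.1%}")
print(f"Mismatched nonresponse rate:  {rate_mismatched:.1%}")
print(f"Ratio (mismatched / matched): {rate_mismatched / rate_matched:.1f}x")  # ~1.6x
```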

Response time was longer for the mismatched version than for the matched version. In the telephone interviews, the researchers found that respondents took more time to complete the mismatched version than the correctly matched one.

The effect of the mismatch was not greater for respondents with lower cognitive ability (as indicated by age or education).

Summary

The study confirmed that mismatches weaken data quality in both mail and telephone surveys, although the effect may be less detrimental in mail surveys because they are self-administered and respondents can see both the question stem and the response options. Researchers should design their questionnaires carefully, paying attention to the relationship between each question stem and its response options. Moreover, when analyzing survey results, researchers should review both the question stems and the response options to determine whether a mismatch could have affected the results.

For more information on Survey Design, survey mailing services, data capture services, or quantitative data collection services in general, contact us!


Reasons to Avoid Sending Mail Surveys During the Holidays

Many experts believe the holidays are a bad time to ask people to participate in survey research, yet some insist they have garnered better results during the holiday season. Despite those successes, we at DataForce Survey and Study Management still do not recommend sending mail surveys close to the holidays. Below are two of the main reasons we urge you to think twice.

1. People Are More Stressed During the Holidays


A 2015 Healthline survey that measured holiday stress confirmed that the majority of respondents were under stress. According to the results, 65 percent of Generation X, 61 percent of millennials, and 62 percent of baby boomers feel some stress during the holidays.

When your audience is more stressed, they are less likely to show interest in your mail survey. Their minds are focused on picking the right gifts, finances, holiday schedules, and so on. You may still receive responses, but they will likely be lower in both quantity and quality than during a period when your audience is under less pressure.

2. A Quarter of Americans Plan to Travel During the Holidays 


According to a survey of more than 1,000 adults conducted by Experian, one in four Americans plans to travel during the holidays. Among those leaving home for some R and R, Gen Xers and millennials travel the most: 38% of millennials and 35% of Gen Xers said they typically hit the road during the holiday season, while only 16% of baby boomers and 11% of Gen Z members said they travel at this time. In short, your target audience may well be away from home during the holidays, and your response rate will likely decline by at least a few percentage points as a consequence.

If you send out your packets during the holidays, there is also a good chance your respondents will let their mail pile up and, in an effort to catch up, throw out whatever seems least relevant to getting back to their regular daily routine. The odds that your survey packet ends up in the trash are therefore quite high.

If you cannot avoid sending out that mail survey during the holidays, here are our recommendations:

The delivery and return of holiday mail take longer. And, since potential participants are busy and distracted with the season, they tend to put off answering surveys. If you must send out your surveys during a holiday, allow additional time for returns. Most survey projects see the first set of results around 7 – 10 days after mailing, with the bulk of responses coming in between 14 – 18 days after the mailing date. If you are mailing your project around a holiday though, it is better to plan for an additional 1 – 2 weeks for responses, and expect them to be a little lower. 

Check out our post on 3 Tips to Streamline Your Survey Return Schedule

Sending mail surveys during the holidays is not the best idea or strategy to increase your response rate. However, we understand there are situations when you still have to take this course of action. Our mail survey experts at DataForce can help you plan, prepare, and execute your mail survey confidently and flawlessly during this period. For more information on survey fulfillment or any aspect of mail survey management, contact us today!

 

Check out our forms processing services and incentive fulfillment services.


Five Ways to Avoid Survey Response Fatigue

We are now in an age where sending feedback requests or surveys has never been easier. Technology has made online surveys popular and has streamlined telephone and mail surveys as well. Target audiences are so flooded with survey requests that survey fatigue is inevitable.

Survey fatigue occurs when people become discouraged from responding to surveys because they are overwhelmed by the number of questions on a survey or bombarded with too many surveys. In fact, a 2012 Pew Research report found that its telephone survey response rates had dropped from 36% in 1997 to a mere 9%. With results like these, it is essential to understand how to reduce survey response fatigue and improve overall response rates. Below we’ve listed five ways to avoid survey response fatigue for the benefit of your data collection project.

1. Filter Questions


Create surveys that will give you the most relevant and needed information. Remove unnecessary questions, and use responses to earlier questions to filter out later items that ask for essentially the same thing in different words. Rather than trying to gather as much data as you can in one survey, keep your questions focused on meeting your survey goals.
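As an illustration of filter (skip) logic, here is a minimal sketch in Python; the question names, wording, and structure are hypothetical, and the point is simply that a respondent's earlier answer determines whether a later item is shown at all.

```python
# Minimal skip-logic sketch -- question wording and structure are hypothetical.
survey = {
    "q1_uses_product": {
        "text": "Have you used our product in the past 6 months?",
        "options": ["Yes", "No"],
    },
    "q2_satisfaction": {
        "text": "How satisfied are you with the product?",
        "options": ["Very dissatisfied", "Dissatisfied", "Neutral",
                    "Satisfied", "Very satisfied"],
        # Only shown to respondents who answered "Yes" to q1.
        "show_if": lambda answers: answers.get("q1_uses_product") == "Yes",
    },
}

def questions_to_ask(answers):
    """Return the questions a respondent should actually see, given answers so far."""
    asked = []
    for name, question in survey.items():
        condition = question.get("show_if")
        if condition is None or condition(answers):
            asked.append(name)
    return asked

print(questions_to_ask({"q1_uses_product": "No"}))   # ['q1_uses_product']
print(questions_to_ask({"q1_uses_product": "Yes"}))  # ['q1_uses_product', 'q2_satisfaction']
```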

 

2. Time Your Survey


Fewer questions do not automatically mean a shorter time for respondents to answer the survey. The survey questions’ complexity is a significant factor, so make sure to test the survey and how long it takes to finish. Surveys that take too long can tire participants, which results in lower quality data, non-completion, or total abandonment.

Manage expectations by advising respondents on how long the survey will take before they begin. 

 

3. Keep Your Respondents in Mind

Focus not just on your research goals but on the people who will be providing the data. Think about how they will feel about the questions, survey layout, and response options. Note any challenges or roadblocks respondents may encounter, and check the questions for bias and remove it. Balancing stakeholders’ demands for additional data with a level of empathy for respondents will ultimately give you better data.

4. Communicate the Value 


A study by Vision Critical found that people are more likely to take a survey when they feel their opinion matters: a majority of respondents (87%) said they took a survey because they believed it would help make a difference in a company’s products or services. For example, telling respondents that the survey will help a business create its new menu will motivate them to participate, especially regular customers. It also helps if the request comes from someone with authority, such as the owner of the establishment, to emphasize the survey’s importance.

 

5. Show Appreciation


Value the time and effort of your respondents. Provide incentives such as cash, a promotional offer, or a gift card. Check out our post on 3 Survey Incentives to Explode Your Response Rate. Another way to show appreciation is to let respondents see the survey results: post the results online and show how they are being used to improve the brand.

There you have it: five ways to avoid survey response fatigue, give your respondents a more pleasant survey experience, and earn higher response rates and better quality data.

For more information on survey research services, mail surveys, and data collection in general, contact DataForce!


What is a Likert Scale and How to Create One

Are you looking for a way to measure attitudes that go beyond simple agreement or disagreement? A Likert scale can help you measure attitudes and opinions with a greater degree of nuance than simple binary questions, which offer only two answer options. Please read our blog post to learn what a Likert scale is and how to create one for your next survey.

1. What is a Likert scale? 

The Likert scale is one of the most popular rating scales for measuring attitudes or opinions. Fixed-choice response formats are used to determine how people feel about a topic, product, service, or experience. The scale assumes that the strength or intensity of an attitude is linear, running along a continuum from strong disagreement to strong agreement. Respondents are typically given five to seven (sometimes nine) balanced response choices, often with a neutral midpoint.

 

2. Common Likert Scale Questions

A Likert scale does not have a fixed number of levels. Many researchers use five, but 4-, 7-, 9-, and even 10-point scales are also used. Adding more levels captures finer gradations of opinion, but a 5- or 7-point scale is usually ideal: it provides enough variation without pushing respondents toward extreme options.

Below are some examples of Likert scale questions and answers:

a. Agreement

The employee training provided the knowledge I need to do my work efficiently.

  • Strongly Disagree
  • Disagree
  • Undecided
  • Agree
  • Strongly Agree

b. Satisfaction

How satisfied are you with our customer support?

  • Highly Dissatisfied
  • Dissatisfied
  • Neutral
  • Satisfied
  • Highly Satisfied

c. Frequency

How often do you visit our store?

  • Very Frequently
  • Frequently
  • Occasionally
  • Rarely
  • Never

 

3. When to Use Likert Scales


A Likert scale is useful for measuring the general feeling or opinion about a particular topic, product, service, or experience, and for collecting additional data on the factors that contribute to those feelings or opinions. However, a Likert scale should only be used when the question items are related to each other and can be presented along a graded scale. Because respondents are not limited to a yes/no answer, a Likert scale lets researchers obtain quantitative data that can be easily analyzed.

A Likert scale can, however, be compromised by “social desirability”: the bias people exhibit when they try to present themselves in a positive light. For example, on taboo topics involving sex, illegal drugs, or racism, respondents may overreport “good” behavior or underreport “bad” or undesirable behavior. One way to reduce social desirability bias is to allow anonymity in self-administered surveys. Paulhus (1984) found that when respondents had to put their name, address, and telephone number on a survey, the results showed more positive personality characteristics than in an anonymous survey.

 

4. How to create a Likert Scale

Establish the foundation of your survey questions and response scale by first deciding what you want to measure. A Likert scale works best when several factors influence how your respondents feel about something. For instance, suppose you want to measure patient satisfaction. Many factors affect it, including affordability, the general behavior of doctors, amenities, and administrative procedures. The respondents’ opinions, attitudes, feelings, or experiences must be measurable on a graded scale, and there should be two well-defined extremes for the responses.

For example: [Image: sample Likert scale items]
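To make the scoring side concrete, here is a minimal sketch in Python; the items, labels, and responses are hypothetical (loosely based on the patient satisfaction factors mentioned above) and show one common convention of coding a 5-point agreement scale as 1 through 5 and averaging related items.

```python
# Hypothetical patient satisfaction items scored on a 5-point agreement scale.
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Undecided": 3,
         "Agree": 4, "Strongly Agree": 5}

items = [
    "The care I received was affordable.",
    "The doctors treated me with courtesy and respect.",
    "The facility's amenities met my needs.",
    "Administrative procedures (scheduling, billing) were easy to complete.",
]

# One respondent's answers, keyed by item -- illustrative data only.
responses = {
    items[0]: "Agree",
    items[1]: "Strongly Agree",
    items[2]: "Undecided",
    items[3]: "Disagree",
}

scores = [SCALE[answer] for answer in responses.values()]
composite = sum(scores) / len(scores)  # simple mean across the related items
print(f"Item scores: {scores}")        # [4, 5, 3, 2]
print(f"Composite satisfaction score: {composite:.2f} (out of 5)")
```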

 

Recommendations

  • A Likert scale should have the same number of positive and negative responses.
  • Stay odd: give your respondents a neutral option.
  • Label each response option with an appropriate description. If you use only numbers, respondents may lose track of which end is positive and which is negative.
  • Make sure your survey questions are specific.
  • Use terms your target audience understands.
  • Avoid biased questions.
  • Avoid long and complicated questions.
  • Avoid double-barreled questions.

Check out our post on How to Write Great Survey Questions

You’ve most likely encountered Likert scale questionnaires without even knowing it. Likert scale questions are valuable for assessing people’s opinions on a specific topic when undertaking in-depth research. 

For more information on data collection techniques or any aspect of mail survey management, contact us today! We provide outstanding quantitative data collection services and paper scanning services!

 

Get more information on survey research services, incentive fulfillment services, or survey mailing services.

3 Tips to Streamline Your Survey Return Schedule

Effective mail surveys are typically planned and executed like a well-choreographed dance routine that must have all dancers hitting their mark at the right place and the right time. Every step in the sequence – from printing and mailing to fulfillment and data collection services – must be optimized for a streamlined performance.

One step that requires particular attention is the timing and management of survey returns. Survey return management depends on factors outside your control, including the timing of respondents’ completing the surveys and the postal service delivering the returns. This relative blind spot also creates challenges in staffing. You can imagine the wasted cost in staffing a team to process more than 10,000 returns while facing unexpected survey return delays.

As a leading provider of mail surveys, we help researchers plan for and anticipate potential risks and delays before they happen. Here are the best ways we have found to streamline your survey return schedule:

 

1. Set an Appropriate Response Time Window

Survey response time is driven by several factors:

  • Interest Level – A high-interest topic and a short survey could see returns in as little as a few days, whereas others can take as long as one to two weeks. Being mindful of this will help you set reasonable timeline expectations.
  • In-home Date – Depending on your mail delivery method (standard vs. express), your respondents may receive the survey in-home in as little as a few days or as long as a week or more. Consider in-home receipt as part of your timeline window.
  • Return By Date – A clearly communicated “complete/return by” date should be printed on the survey materials so that respondents have a deadline.
  • Holidays – Holiday mail takes longer to be both delivered and returned. Respondents also tend to put off completing their surveys during holidays as distractions abound. If you must send out your surveys during a holiday, allow extra time for returns. We typically recommend a 3-4 week return window for most surveys, and 5-6 weeks during holidays, to ensure participants have enough time to consider and complete their surveys (a simple timing sketch follows this list).
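As a quick planning aid, here is a minimal sketch in Python of how those factors can be turned into a rough schedule; the mail date, in-home lag, and window lengths are illustrative assumptions within the ranges recommended above, not fixed rules.

```python
from datetime import date, timedelta

def return_schedule(mail_date, in_home_days=5, return_window_weeks=4):
    """Rough return-schedule estimate; the lags are assumptions, not guarantees."""
    in_home = mail_date + timedelta(days=in_home_days)          # survey lands in homes
    return_by = in_home + timedelta(weeks=return_window_weeks)  # deadline printed on materials
    return in_home, return_by

# Standard mailing vs. a mailing that lands near a holiday (wider window).
for label, weeks in [("Standard", 4), ("Holiday", 6)]:
    in_home, return_by = return_schedule(date(2020, 11, 2), return_window_weeks=weeks)
    print(f"{label}: in-home ~{in_home}, ask for returns by {return_by}")
```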

 

[Charts: Survey Return Schedule – responses returned over time (3 mailings, $2 pre-incentive, $40 promised incentive)]

 

2. Don’t Jump the Gun on Subsequent Mailings

During your response window, you will notice that the number of responses starts to taper off. While you may be eager to move on to your next mailing, we recommend waiting for late responses because of the impact they have on subsequent mailings. As you can see in the second chart above, the second mailing was sent once the response rate had tapered to its lowest point. Waiting lets you supply the follow-up mailing with a revised file of respondent names and addresses before it goes into production – which can take a week or more, depending on the level of printing and assembly required. In doing so, you account for replies from the first mailing and avoid sending follow-up mailings to people who just completed the survey!
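A minimal sketch of that de-duplication step might look like the following; the file names and the respondent_id column are hypothetical and would need to match your own mailing and return files.

```python
import csv

# Hypothetical file names and columns -- adjust to your own mailing and return files.
RETURNS_FILE = "returns.csv"         # completed surveys received so far (respondent_id, ...)
MAILING_FILE = "mailing_list.csv"    # original mailing file (respondent_id, name, address, ...)
FOLLOWUP_FILE = "followup_list.csv"  # revised file to send to production

# Collect the IDs of everyone who has already returned a completed survey.
with open(RETURNS_FILE, newline="") as f:
    completed_ids = {row["respondent_id"] for row in csv.DictReader(f)}

# Copy the mailing list, dropping anyone who has already responded.
with open(MAILING_FILE, newline="") as f_in, open(FOLLOWUP_FILE, "w", newline="") as f_out:
    reader = csv.DictReader(f_in)
    writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
    writer.writeheader()
    kept = 0
    for row in reader:
        if row["respondent_id"] not in completed_ids:
            writer.writerow(row)
            kept += 1

print(f"{kept} respondents remain in the follow-up mailing file.")
```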

 

3. Minimize Post Office Delay

There are specific steps you can take before the returns come in that will help you streamline and expedite the process: 

  • Make sure there is enough money in your business reply account – If you overlook this step, you likely won’t hear about it until after the envelopes have piled up at the post office, and someone gets around to contacting you, costing you valuable time and energy.
  • Make sure your dedicated postal worker is not on vacation – It happens more often than you would think. Postal workers are given various assignments, and there is typically an employee dedicated to handling your company’s reply mail. We recommend you call the post office to ensure that a dedicated staff member will indeed be working on your assignment during your response window.

Survey response mail is among the most exciting yet uncertain parts of survey management. By following these tips, you’ll be better able to estimate your timeline, minimize risk, and account for all of your survey responses.

For more information on survey fulfillment services or any aspect of mail survey management, contact us today!

 


8 Common Survey Bias Errors and How to Avoid Them

When done correctly, surveys are the golden key in social science research – helping us uncover the attitudes and behaviors of a target population like no other platform. As with all scientific research, however, they are susceptible to bias. Bias is a sneaky, subtle characteristic that can creep into any part of your research in the form of leading questions, skewed sample selection, respondent social pressures, and more. In fact, bias is so prevalent in our everyday lives, it is often difficult to spot. 

A primary goal of research is to minimize bias – if not entirely eliminate it. In so doing, results can be trusted as an honest reflection of the attitudes and behaviors of the total target population. Thankfully, survey science has been around a long time and the most common biases are well documented. 

As a leading provider of mail and multi-modal surveys, we help research professionals avoid bias every day. Following are the 8 most common survey bias errors we encounter, with helpful tips on how to avoid them.

1. Social Desirability Bias

Respondents have a propensity to answer questions in a way that makes them look good according to social norms. Topics such as taking care of the environment or spending time with one’s children are ripe for socially acceptable responses that don’t quite match up with reality. This applies to group norms as well, such as attending church, exercising, and more.

How to avoid it: Don’t use yes or no questions for these topics. Have respondents select from alternatives or use a ranking or rating scale. 

 

2. Acquiescence Bias

Respondents tend to agree and give a “yes” response to most questions, especially if they haven’t given the topic much thought before. For example, would you want your washing machine to have more preset options? Sure, why not. Next question.

How to avoid it: Don’t use yes or no questions for these topics. Have respondents select from alternatives or use a ranking or rating scale. 

3. Question Order Bias

Question order matters. Among the most common culprits of question order bias is “letting the cat out of the bag” too soon. For example, if you are doing a brand awareness survey and mention your brand name too early, you will inadvertently affect how people rate their familiarity with your brand on subsequent questions. This also holds true for response option order. A respondent might remember a choice that appeared in an earlier question and be more likely to select that response on later questions. 

How to avoid it: Use a logical question sequence that goes from general to specific. Manage response order with randomization.

4. Habituation Bias

When a series of questions is worded similarly or uses a similar structure, respondents tend to answer in a less engaged way. Many spot the pattern and go on autopilot to get through the survey with minimal effort. This adversely affects data quality, as respondents do not give each question the consideration it deserves.

How to avoid it: Vary question-wording and keep it conversational. 

5. Sponsorship Bias

When respondents know who commissioned the survey, it can influence responses. Their existing feelings and opinions about the brand or organization can taint even the most general questions in the survey. This is particularly troublesome in product surveys. 

How to avoid it: Do not use any logos on the invitation, survey form or any other collateral. Declare that the survey is being moderated independently of any brand or organization.

 

6. Confirmation Bias

This bias occurs on the researcher side when the survey itself is conducted to confirm a hypothesis, rather than simply gauge opinion. It is particularly common in political circles. Such researchers will give extra weight to responses that confirm their belief, and dismiss evidence to the contrary. In many cases, they will pose leading questions, such as “Don’t you agree that taxes are too high?” rather than a more neutral “Which of the following describes your view on taxes?”  Confirmation bias is a natural human phenomenon, and is not always easy to spot, even within ourselves. It is simply part of the way we process and evaluate information in our daily lives.

How to avoid it: Do not use leading questions. Continually reevaluate responses and challenge them against your preconceptions.

 

7. Culture Bias

Our own cultural experience influences our thoughts and assumptions about other cultures, which can cause unintended bias in research. This phenomenon, known as ethnocentrism, is defined as “judging another culture solely by the values and standards of one’s own culture.” In some cases, the assumptions made can be downright offensive. 

How to avoid it: Embrace the principle of cultural relativism – that an individual’s beliefs and activities should be understood and evaluated in terms of that individual’s own culture. Maintain unconditional positive regard, and be mindful of your own cultural assumptions, too.

 

8. Halo Effect Bias

People have a tendency to form an overall impression of something based on only one characteristic. This so-called halo effect can introduce bias on both the moderator and respondent side. For example, a moderator may make an assumption about a respondent based on a positive first impression, and a respondent may answer a series of questions about a brand based solely on their feeling about one attribute.

How to avoid it: Choose question order carefully and stick to one topic at a time. Continually remind yourself why each question is being asked and hold off analysis until later.

While bias is an integral part of the human experience, it can be minimized in research to deliver an honest reflection of the attitudes and behaviors of your target population. By looking out for these common survey bias errors, you will be well on your way to survey success.

For more information on survey bias or any aspect of multi-modal data collection, contact us today!


How to Pick the Right Respondents for Your Survey

In our blog post on “Choosing the Right Sample Size,” we provided a formula to ensure your target population is represented accurately. Knowing that number early is important for determining your mail quantity and bidding out your project to vendors. However, it is only half the equation in survey sampling. The other half is making sure you pick the right people.

So how do you choose the participants? 

1. Define Your Target Population

Before you can choose survey participants, you need to define the common binding characteristics or traits of the overall population. For example, “government employees” or “existing customers.” These are often combined with other characteristics: “government employees who use iPhones” or “existing customers who have utilized a particular service.” It is imperative to select the most appropriate target population to satisfy the objectives of the survey. 

 

2. Identify Your List Source

Some survey samples are easier to generate than others. For example, if you are surveying your existing customers, you likely already have everything you need in your company database. But if your target is “Latina women 25-40 who shop online,” you may have some work to do. In this case, you may want to look for available public data or purchase a list from a sample provider. Once you determine the list you need, you are better positioned to choose a sampling method and pick your respondents.

 

3. Choose a Sampling Method

There are many scientific ways to select a sample. They can be divided into two groups: probability and non-probability sampling. Probability sampling is any method that utilizes random selection like drawing straws or randomized computer selection. Everyone in a target group has an equal probability of being chosen. It is the preferred method of researchers because it accounts for bias and sampling error. 

But sometimes probability sampling is not feasible, either due to time constraints or list accessibility. In that case, non-probability sampling is used. People must still meet common binding criteria, but they are chosen in such places as a mall or a busy neighborhood. Such samples are often useful but don’t account as easily for bias and sampling error. 

Depending on the needs of your study, you will typically choose from one of the following common methods:

  1. Random Sampling – The purest form of probability sampling. The most basic example of this technique is the lottery method.
  2. Stratified Sampling – Identifies a subset of the target population, such as fathers, teachers, or females, and selects from it at random.
  3. Systematic Sampling – Uses every Nth name in a target list, where N is a number of your choosing (see the sketch after this list for the first three methods).
  4. Convenience Sampling – A non-probability method used when only a few members of the target population are available.
  5. Quota Sampling – Uses subset criteria like stratified sampling, but does not randomize the selection.
  6. Purposive Sampling – A method that uses predefined criteria with a purpose in mind, for example, gauging the perceptions of Caucasian women between 30 and 40 years old on a new product, without randomizing their selection.
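To illustrate the first three methods under simplified assumptions, here is a short sketch using Python's standard library; the population list and group labels are hypothetical stand-ins for a real target list from your database or list vendor.

```python
import random

# Hypothetical target list -- in practice this comes from your customer database or list source.
population = [{"id": i, "group": ("teacher" if i % 3 == 0 else "parent")} for i in range(1, 101)]

# 1. Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, k=10)

# 2. Stratified sampling: identify a subset (stratum), then select from it at random.
teachers = [p for p in population if p["group"] == "teacher"]
stratified = random.sample(teachers, k=5)

# 3. Systematic sampling: take every Nth name from the list.
N = 10
systematic = population[::N]

print([p["id"] for p in simple_random])
print([p["id"] for p in stratified])
print([p["id"] for p in systematic])
```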

Survey sampling is a critical part of data collection. Your survey provider can help you weigh these options for your survey to ensure you get the quality data you need. For more information on survey sampling or any aspect of mail survey management, contact us today!
