MaxDiff Experiments

2020-07-28

Check out our updated, more detailed description of MaxDiff analysis here


Our customers are constantly making choices, some conscious but most unconscious. Regular survey questions with traditional rating scales are unable to get to the core of why a consumer makes a particular decision. Most of these scales do not simulate real-life conditions, which always involve some form of trade-off.


To simulate real-life consumer decision making, the most precise approach is either a Conjoint Analysis or a MaxDiff Analysis. Gradient has designed countless experiments across a wide range of markets and sectors, each with its own application.


How it works

MaxDiff is a type of survey question in which respondents are presented with lists of items and asked to indicate which item in each list they like the most and which they like the least.


Typically, respondents are asked to complete a series of tasks, where the options shown in each task vary according to an experimental design. Because the format forces respondents to make trade-offs, it reveals their true preferences.
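To make this concrete, here is a minimal sketch in Python of how a single MaxDiff task and its response might be recorded (the item names and field names are hypothetical, not taken from any particular survey platform):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MaxDiffTask:
        """One MaxDiff question: a subset of items shown to a respondent."""
        items_shown: list[str]          # subset of the full item list shown in this task
        best: Optional[str] = None      # item marked "most preferred"
        worst: Optional[str] = None     # item marked "least preferred"

    # Hypothetical example: a respondent sees four of the study's items
    # and marks "Free shipping" best and "Loyalty points" worst.
    task = MaxDiffTask(["Free shipping", "Low price", "Fast delivery", "Loyalty points"])
    task.best, task.worst = "Free shipping", "Loyalty points"

    # Each completed task implies pairwise preferences: the best item beats
    # everything else shown, and everything else shown beats the worst item.
    implied = [(task.best, o) for o in task.items_shown if o != task.best]
    implied += [(o, task.worst) for o in task.items_shown
                if o not in (task.best, task.worst)]
    print(implied)

This is why the format is so efficient: one best pick and one worst pick in a set of four items yields five implied pairwise comparisons.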


A typical survey question is displayed below.

[Image: example MaxDiff survey question]

When to use a MaxDiff Analysis

  • Preferences between new products
  • Preferences for existing brands
  • Message testing
  • When you want to rank-order attributes in terms of relative importance
  • When you want to force respondents to make trade-offs
  • When you want to place less of a burden on respondents relative to a Conjoint Analysis

Designing MaxDiff Experiments

A MaxDiff analysis is conducted by showing participants subsets of items from a list and asking them to identify the best and worst (or most and least preferred) options in each subset. We take this approach because it is challenging for a respondent to rank-order seven or more items in a survey. MaxDiff instead leverages our ability to reliably identify the extremes (best and worst) in a list, simplifying the task to a more digestible number of items at a time.


A respondent typically sees around 5 to 15 questions in which 3 to 5 items are shown, and is asked to indicate the best and worst item in each list. This yields very accurate data, since it is a far more comprehensible task for survey takers than ranking the entire list at once.
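There are many ways to generate such a design. As a rough illustration, here is a simple Python sketch that randomly assigns items to tasks while keeping appearance counts balanced. This is a toy under stated assumptions, not the balanced incomplete block designs that dedicated tools (e.g., Sawtooth Software) produce, which also balance how often each pair of items appears together:

    import random

    def generate_design(items, n_tasks=10, items_per_task=4, seed=0):
        """Randomly assign items to tasks, keeping appearance counts balanced."""
        rng = random.Random(seed)
        counts = {item: 0 for item in items}
        design = []
        for _ in range(n_tasks):
            # Prefer the least-shown items so every item gets similar exposure.
            pool = sorted(items, key=lambda i: (counts[i], rng.random()))
            task = pool[:items_per_task]
            for item in task:
                counts[item] += 1
            rng.shuffle(task)
            design.append(task)
        return design

    # Hypothetical six-item study
    items = ["Free shipping", "Low price", "Fast delivery",
             "Loyalty points", "Easy returns", "24/7 support"]
    for t, task in enumerate(generate_design(items), 1):
        print(f"Task {t}: {task}")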


The steps for running a MaxDiff analysis are:

  1. Determine the attributes to be tested in the MaxDiff analysis.
  2. Generate the experimental design.
  3. Program the survey that hosts the MaxDiff tasks.
  4. Collect responses.
  5. Analyze the MaxDiff results.
  6. Report the findings.


Each step builds on the previous one, working toward the end goal of understanding the preferences of the customer base.
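As an illustration of step 5, the simplest scoring method is best-minus-worst counting: score each item by how often it was chosen as best minus how often it was chosen as worst, normalized by how often it was shown. Formal MaxDiff analysis fits a choice model instead (e.g., multinomial logit or hierarchical Bayes), but counts are a useful first look. A minimal Python sketch with hypothetical response data:

    from collections import Counter

    def count_scores(responses):
        """Best-minus-worst counts, normalized by how often each item was shown.

        `responses` is a list of (items_shown, best, worst) tuples.
        A first-pass diagnostic only, not a substitute for a choice model.
        """
        best, worst, shown = Counter(), Counter(), Counter()
        for items_shown, b, w in responses:
            best[b] += 1
            worst[w] += 1
            shown.update(items_shown)
        return {item: (best[item] - worst[item]) / shown[item] for item in shown}

    # Hypothetical responses from one respondent
    responses = [
        (["Free shipping", "Low price", "Fast delivery", "Loyalty points"],
         "Free shipping", "Loyalty points"),
        (["Low price", "Easy returns", "24/7 support", "Free shipping"],
         "Low price", "24/7 support"),
    ]
    for item, score in sorted(count_scores(responses).items(),
                              key=lambda kv: -kv[1]):
        print(f"{item:15s} {score:+.2f}")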


Rules of Thumb

  1. Try to limit the number of alternatives in each set to five or fewer. More generally, any set containing more than seven alternatives is highly unlikely to be useful, as respondents find it difficult to compare so many alternatives and tend to take shortcuts (e.g., reading only half of them).
  2. Specify the number of alternatives in each set to be no more than half the number of alternatives in the entire study.
  3. Ensure that each alternative is shown at least three times (unless anchored MaxDiff is used; see below).
  4. Where the focus is only on comparing the alternatives (e.g., identifying the best of a series of product concepts), it is a good idea to create multiple versions of the design to reduce order and context effects. Sawtooth Software suggests that if you use multiple versions, 10 are sufficient to minimize these effects, although there is no good reason not to have a separate design for each respondent. Where the goal of the study is to compare different people, such as in segmentation studies, it is often appropriate to use a single version (if you have multiple designs, this introduces a source of variation between respondents and may influence the segmentation).
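Rules 1 through 3 are straightforward to check programmatically. Below is a rough Python sketch that validates a candidate design against them; the rotating example design and the thresholds are illustrative assumptions, not a real study:

    # Hypothetical six-item study with 10 tasks of 3 items each,
    # assigned by simple rotation for illustration.
    items = ["Free shipping", "Low price", "Fast delivery",
             "Loyalty points", "Easy returns", "24/7 support"]
    design = [[items[(t + k) % len(items)] for k in range(3)] for t in range(10)]

    def check_design(design, total_items, min_appearances=3, max_per_set=5):
        """Check a MaxDiff design against the rules of thumb above."""
        problems = []
        counts = {}
        for t, task in enumerate(design, 1):
            if len(task) > max_per_set:
                problems.append(f"Task {t} shows {len(task)} items "
                                f"(rule 1: max {max_per_set})")
            if len(task) > total_items / 2:
                problems.append(f"Task {t} shows more than half of the "
                                f"{total_items} alternatives (rule 2)")
            for item in task:
                counts[item] = counts.get(item, 0) + 1
        for item, n in counts.items():
            if n < min_appearances:
                problems.append(f"'{item}' appears only {n} times "
                                f"(rule 3: min {min_appearances})")
        return problems or ["Design passes all rule-of-thumb checks."]

    print("\n".join(check_design(design, total_items=len(items))))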