by johnr » Sun Feb 11, 2024 2:50 pm
Hi Yuhan
You might also wish to consult de Bekker-Grob et al. (2015) who extended our earlier work. With that out of the way, there are a few ways to proceed that have been tried and found to work in the past.
1. Use an orthogonal design for a pilot. This may or may not be feasible as there are limited numbers of orthogonal designs, and there may not be one available to match your exact problem. If there is, it turns out that under certain assumptions, an orthogonal design may be optimal. One such condition is if the parameters are all zero, so such designs are (again under a number of assumptions) equivalent to assuming the parameters are all simultaneously zero. Remember that orthogonal designs have been used in choice analysis for over 40 years and have worked, and will continue to work. People assume that our promotion of efficient designs is a rally against orthogonal designs. This is not the case. I once used an orthogonal design for a pilot and, based on 20 respondents, was able to estimate an MNL model with all the parameters significant and of the expected signs. I kept the design and didn't bother updating it.
2. You can assume very small prior values that capture the expected signs of the estimates. For example, you know that price should be negative - so why assume that it will be zero? You might assume it is -0.0001, for example. This may be enough to remove choice tasks that are unlikely to capture much information.
3. Use your judgement as to what the likely parameters will be. Remember, if you knew them exactly in advance, then you wouldn't need to conduct the survey in the first place. You will never get them exactly right anyway. So you can use literature, or your own judgement.
4. Use a Bayesian design approach with uninformative priors. This is what I tend to do in practice. For some attributes, I have an expectation of the sign of the parameter (again, think price), but may not know the exact value it will take. In such cases, I may use a Bayesian uniform prior U(-0.5,0) - where the exact values I use are scaled to account for the magnitude of the attribute (i.e., I don't use U(-0.5,0) blindly). For parameters I don't know the sign of (e.g., categorical variables), I still use an uninformative prior, but one that is not bounded at zero (e.g., U(-1,1)). Again, the exact values I use depend on the magnitudes of the variables the prior is attached to. I typically do this rather than assume zero priors, as I would argue that zero (fixed or local) priors are optimized for a particular value (zero). I hear people say to use zero priors if you are unsure - I hate this argument - if you optimize for zero priors, you are basically saying you are sure of the value - it will be zero. Bayesian priors represent the analyst's uncertainty as to the exact value.
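To make the scaling point in 4. concrete, here is a minimal Python sketch (plain Python, not Ngene syntax; the function name `scaled_prior` and the target contribution of 1.0 are my own illustrative assumptions, not anyone's recommended defaults) of choosing uniform prior bounds so that each attribute's prior implies a comparable average utility contribution:

```python
# Illustrative sketch only: scale an uninformative uniform prior to the
# magnitude of its attribute. The function name and the target average
# contribution of 1.0 are assumptions made for this example.
def scaled_prior(levels, sign_known_negative=True, target=1.0):
    """Return (lower, upper) bounds for a uniform prior, chosen so that
    |prior midpoint| * mean attribute level is roughly `target` when the
    sign is known negative, or symmetric around zero when it is unknown."""
    mean_level = sum(levels) / len(levels)
    b = target / mean_level
    if sign_known_negative:
        return (-2 * b, 0.0)   # midpoint -b, bounded at zero
    return (-b, b)             # sign unknown: symmetric around zero

print(scaled_prior([5, 10, 15]))                            # (-0.2, 0.0)
print(scaled_prior([2, 4, 6], sign_known_negative=False))   # (-0.25, 0.25)
```

The point is only that an attribute measured in tens needs narrower prior bounds than one measured in single units if both are to contribute comparably to utility.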
You might want to check out Bliemer and Collins (2016) who offer great advice on priors.
Now to get more to specifics of what I do (again - not saying I'm right, but the process I typically go through and which tends to work for me).
a) I typically have a pilot of between 20 and 50 respondents (although I have gone up to 100, which may be overkill). This is not just to get priors, but also to test the logistics of the survey, allow respondents to make open-ended text comments, etc.
b) Check the relative magnitudes of the average contributions of each attribute to overall utility...
Say I have U = b1[U(-0.5,0)]*x1[5,10,15] + b2[U(-0.8,0.2)]*x2[2,4,6]
If I look at the first attribute, the average parameter value will be -0.25 and the average attribute level is 10, hence the average contribution to utility is -2.5. For the second attribute, the average parameter value is -0.3 and the average level is 4, so the average contribution to utility is -1.2. Hence, I am assuming that the first attribute will, on average, have a larger impact on utility than the second. I have to ask myself, is this an assumption I wish to make?
c) Check the marginal rates of substitution I am assuming (WTPs for example) given the priors I have assumed. Do these make sense?
Following a), b) and c), I haven't had too many issues in the past.
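The checks in b) and c) can be sketched in a few lines of Python, using the example priors and levels above (treating x1 as the price attribute is purely my illustrative assumption):

```python
# Hypothetical sketch of checks b) and c), using the example priors and
# levels from the utility function above. Plain Python, not Ngene syntax.
priors = {"x1": (-0.5, 0.0), "x2": (-0.8, 0.2)}   # uniform prior bounds
levels = {"x1": [5, 10, 15], "x2": [2, 4, 6]}     # attribute levels

mean_prior = {a: sum(b) / 2 for a, b in priors.items()}
mean_level = {a: sum(v) / len(v) for a, v in levels.items()}

# b) average contribution of each attribute to overall utility
for a in priors:
    contrib = mean_prior[a] * mean_level[a]
    print(f"{a}: {mean_prior[a]:+.2f} * {mean_level[a]:.1f} = {contrib:+.2f}")

# c) implied marginal rate of substitution at the prior midpoints,
# assuming (for illustration only) that x1 is the price attribute
wtp_x2 = mean_prior["x2"] / mean_prior["x1"]
print(f"implied WTP for x2: {wtp_x2:.2f} units of x1 per unit of x2")
```

If the printed contributions or the implied trade-off look implausible, the priors should be revisited before generating the final design.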
John
de Bekker-Grob, E.W., Donkers, B., Jonker, M.F., Stolk, E.A. (2015) Sample size requirements for discrete-choice experiments in healthcare: a practical guide. The Patient - Patient-Centered Outcomes Research, 8, 373-384.
Bliemer, M.C.J., Collins, A.T. (2016) On determining priors for the generation of efficient stated choice experimental designs. Journal of Choice Modelling, 21, 10-14.