Efficient design for small sample size


Efficient design for small sample size

Postby JonaSJ » Sat Dec 09, 2023 7:52 am

Hi Forum users,

I am designing my first DCE and am struggling with a couple of questions I would like to ask.

My setup is the following:
- 4 labelled alternatives (AltChicken, AltBeef, AltTurkey, AltPork) + opt-out
- each with the same 4 attributes (Regional: yes/no; AnimalWelfare: levels 1-3; Magnesium content: 0.5 g to 3.5 g; Price: 0.99 € to 3.49 €)
- I can infer the signs of the priors from the literature and a focus group, but not their exact values, so I aim to use Bayesian priors.
- My sample size will be around 90 people (the maximum I can recruit, as I have no budget). Each respondent will receive 12 choice sets, drawn randomly from the 24 rows rather than using blocking.

Syntax
Code: Select all
Design
;alts = AltChicken, AltBeef, AltTurkey, AltPork, OptOut
;rows = 24
;eff = (mnl,d)
;bdraws = sobol(1000)
;model:
U(AltChicken) = regional.dummy[(u,0.0001,0.5)]*Regional[1,0] + animalwelfare.dummy[0.03|0.02]*Animalwelfare[0,1,2]
 + magnesium[(u,0,0.5)]*Magnesium[0.5,1.5,2.5,3.5] + price[(u,-0.6,-0.0001)]*Price[0.99,1.49,2.19,3.49] /
U(AltBeef) = regional.dummy*Regional + animalwelfare.dummy*Animalwelfare
 + magnesium*Magnesium + price*Price /
U(AltTurkey) = regional.dummy*Regional + animalwelfare.dummy*Animalwelfare
 + magnesium*Magnesium + price*Price /
U(AltPork) = regional.dummy*Regional + animalwelfare.dummy*Animalwelfare
 + magnesium*Magnesium + price*Price
$


The Ngene-generated design resulted in a D-error of 0.177, an A-error of 0.3, a B estimate of 70,857,338, and an S estimate of 21,976,988,215.

My questions are the following:

1) Since I only know the signs, do my chosen priors make sense? For animalwelfare I chose small fixed values close to zero to limit the number of Bayesian priors, since, as I understand it, one should not use too many of them. I also cannot judge the relative weight of each parameter; I only expect price to have a strong influence on utility, which is why its prior range is wider.

2) My biggest concern is that I will not obtain significant estimates because of the small sample size. To reduce complexity, I have therefore made the following "compromises" to my initial design:
- Originally I wanted to make all parameters alternative-specific to obtain the richest information, but that would have meant many more parameters.
- I reduced Animalwelfare from 4 to 3 levels to have one dummy parameter fewer.
- I first wanted to use an alternative-specific parameter for Animalwelfare in AltTurkey, since it would be more accurate for it to have only level 2 and not levels 0 or 1, but that would again add another parameter.
- For magnesium, the levels also usually differ between alternatives (some might only have 0.5 and 1.5, others only the higher levels), but again I did not want to add too much complexity.
As I understand it, my D-error is already quite high even with this simplified setup.
Do these decisions make sense, or might some of the compromises be unnecessary because they hardly change the result?

3) Do you see any obvious shortcomings in my syntax or design in general? Any ideas that could help me avoid them?
JonaSJ
 
Posts: 2
Joined: Thu Dec 07, 2023 7:32 pm

Re: Efficient design for small sample size

Postby Michiel Bliemer » Sat Dec 09, 2023 9:59 am

1) Your priors look okay at first glance.

2) Reducing the number of parameters is a good idea if you have a limited sample size, so I think that your decisions are reasonable. The exception is magnesium: you can use different levels for different alternatives without any problem, since you are still estimating the same coefficient, e.g.

U(AltChicken) = ... + magnesium[..] * MagnesiumChicken[0.5,1.5,2.5] + ... /
U(AltBeef) = ... + magnesium * MagnesiumBeef[1.5,2.5,3.5] + ...
(it is usually preferred that the levels have some overlap, e.g. here levels 1.5 and 2.5 overlap across both alternatives; if there is no overlap, then it may be more difficult to disentangle the alternative-specific constants from the effect of differences in attribute levels)
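
For example, extending this to all four alternatives could look something like the sketch below; the level ranges here are purely illustrative (use whichever levels are realistic for each product, but keep some overlap between alternatives):

Code: Select all
U(AltChicken) = ... + magnesium[(u,0,0.5)] * MagnesiumChicken[0.5,1.5,2.5] + ... /
U(AltBeef) = ... + magnesium * MagnesiumBeef[1.5,2.5,3.5] + ... /
U(AltTurkey) = ... + magnesium * MagnesiumTurkey[0.5,1.5,2.5] + ... /
U(AltPork) = ... + magnesium * MagnesiumPork[1.5,2.5,3.5] + ...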

3) You are missing alternative-specific constants; you need to add them to all utility functions and set their priors relative to the opt-out (they will likely be positive if you believe that most decision makers will not select the opt-out).

Michiel
Michiel Bliemer
 
Posts: 1733
Joined: Tue Mar 31, 2009 4:13 pm

Re: Efficient design for small sample size

Postby JonaSJ » Mon Dec 11, 2023 3:21 am

Thank you so much Michiel!

Regarding point 3), I was under the impression that I could omit the alternative-specific constants in the design as long as I include them in the model during estimation. That is why I removed them, to limit complexity.
Do I understand correctly that adding the constants, and their influence on the design, is important enough to outweigh any concerns about complexity?

If I add them (I do indeed assume positive priors), I would add the following at the beginning of each utility function:
U(AltChicken) = b1(0.0001)+..
U(AltBeef) = b2(0.0001)+..
U(AltTurkey) = b3(0.0001)+..
U(AltPork) = b4(0.0001)+...
U(OptOut) = b5(0)

Another issue I am struggling with is that my errors and estimates are very high. As far as I understand it, an S estimate of 21,976,988,215 tells me I would need over 21 billion respondents to obtain significant parameter estimates? No matter how I vary my design, without losing too much of the logic and intent of my original experiment, the errors (D-error: 0.17) and estimates always end up in the same ballpark. With these estimates, can I therefore expect nothing but insignificant results? I also wonder why these estimates are so high from the start, since I only have 4 attributes.
JonaSJ
 
Posts: 2
Joined: Thu Dec 07, 2023 7:32 pm

Re: Efficient design for small sample size

Postby Michiel Bliemer » Mon Dec 11, 2023 6:05 am

Your ASCs add to the number of parameters you need to estimate and therefore affect the degrees of freedom of your design.
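
Note that Ngene expects priors in square brackets rather than round brackets, so the constants could be specified along the lines of the sketch below (keeping your placeholder priors of 0.0001, and leaving the opt-out without a utility function, as in your original syntax, so that its utility is normalised to zero):

Code: Select all
U(AltChicken) = b1[0.0001] + regional.dummy[(u,0.0001,0.5)]*Regional[1,0] + ... /
U(AltBeef) = b2[0.0001] + regional.dummy*Regional + ... /
U(AltTurkey) = b3[0.0001] + regional.dummy*Regional + ... /
U(AltPork) = b4[0.0001] + regional.dummy*Regional + ...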

Your sample size estimates of course do not make any sense if you use near-zero values for the priors. You should only look at sample size estimates when you have sufficiently reliable priors from a pilot study. Sample size estimates are obviously infinite with a zero prior (which essentially says that the attribute has no impact on choice behaviour).
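
Roughly speaking, the S-estimate for a coefficient behaves like ( 1.96 * se / prior )^2, where se is the standard error predicted for a single respondent completing the design, so as the prior approaches zero the required sample size explodes towards infinity no matter how efficient the design is.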

The D-error should never be very large though. If it is very large (close to infinite, or much larger than 1), then you have overspecified your model.

Michiel
Michiel Bliemer
 
Posts: 1733
Joined: Tue Mar 31, 2009 4:13 pm

