
may constraints be the problem in getting good S-estimates?

PostPosted: Wed Mar 22, 2017 4:19 am
by elsa
Dear NGene team,

I am preparing the experimental design for an experiment on the conservation of traditional pig breeds. I obtained some prior estimates for my attributes from a small pilot (just 20 respondents). Because I will estimate most of my attributes with effects coding, I would like to include this specification in my design. However, this requires providing priors for all nlevels - 1 coefficients of each attribute. Because the pilot is very small, I only managed to estimate an MNL model with continuous coding for my attributes.

I have made a first attempt with the syntax below, using fixed priors based on the MNL estimates and assuming the same prior value for all effect-coded levels of each attribute.
The preference estimates I get with these priors look fine, but the S-estimates are far too large, so something is going wrong.
1. Could the constraints I imposed be the trouble-makers, or is there anything else I am overlooking?
2. I was also considering Bayesian priors. In that case, can I assume the same Bayesian prior for all levels of an effects-coded attribute?
3. Finally, with such a small sample, several of the attribute parameters from the MNL were not significant. Should I then assume a value of 0 for them (in either the fixed or the Bayesian approach)?

Below is the syntax.
Many thanks in advance for your time and advice.
elsa



Design
;alts=A,B,SQ
;rows=24
;block=4
;eff=(mnl,d)
;con

;cond:
if (A.land=2, B.tsp=[2,3]),
if (A.land=1, B.tsp=[1,2]),
if (A.land=0, B.tsp=1),
if (A.exist=0, B.prod=[0,1])

;model:

U(A)= b1.effects[0.0820|0.0820]*exist[2,1,0]+b2.effects[0.011|0.011]*mng[2,1,0]+b3.effects[0.064|0.064]*tsp[3,2,1]+
b4.effects[0.024|0.024]*land[2,1,0]+b5.effects[0.072|0.072]*prod[2,1,0]+b6[-0.0008]*cost[10,20,30,40,50,60]/

U(B)= b1.effects*exist[2,1,0]+b2.effects*mng[2,1,0]+b3.effects*tsp[3,2,1]+
b4.effects*land[2,1,0]+b5.effects*prod[2,1,0]+b6*cost[10,20,30,40,50,60]/

U(sq)= b0[0]+b1.effects*existsq[2,1,0](0,0,24)+b2.effects*mngsq[2,1,0](0,0,24)+b3.effects*tspsq[3,2,1](0,0,24)+
b4.effects*landsq[2,1,0](0,0,24)+b5.effects*prodsq[2,1,0](0,0,24)+b6*costsq[0]$

Re: may constraints be the problem in getting good S-estimates?

PostPosted: Wed May 10, 2017 7:22 pm
by Michiel Bliemer
1. No, it is not the constraints that are problematic. Effects/dummy-coded variables are always quite difficult to estimate, since each coded level only takes a restricted range of values (0/1 for dummy coding, -1/0/1 for effects coding), so you will always need larger sample sizes with such codings. Your cost coefficient is also very small: the contribution to utility is only -0.0008 * 35 on average. So unless you have made a mistake with the units, the pilot study either suggests that cost is not that important, or you simply do not have a large enough sample to make any such claims. In that case the S-estimates do not really tell you anything; they are only valid if your prior values are reasonable. Maybe you need not worry too much.
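To make the point concrete, here is a quick back-of-envelope check (a sketch, using only the cost levels and priors pasted in the design above) comparing the average utility contribution of cost with one of the effects-coded priors:

```python
# Rough comparison of the average utility contributions implied by the pilot
# priors (levels and prior values taken from the design syntax above).
cost_levels = [10, 20, 30, 40, 50, 60]
b_cost = -0.0008          # cost prior from the pilot MNL
b_exist = 0.0820          # one of the effects-coded priors, for comparison

avg_cost = sum(cost_levels) / len(cost_levels)   # 35.0
cost_contribution = b_cost * avg_cost            # -0.028

print(f"average cost contribution: {cost_contribution:.3f}")
print(f"effects-coded prior:       {b_exist:.3f}")
```

The average cost contribution (about -0.028 utility units) is of the same order as a single effects-coded prior, which is consistent with cost carrying very little weight in the pilot estimates.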

2. Bayesian priors follow directly from your pilot study: you can use the standard errors as the standard deviations in a normally distributed Bayesian prior. This will at least account for the uncertainty in setting your priors, so I would definitely recommend using Bayesian priors. You can use a Bayesian prior for each effects-coded parameter.
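As a sketch of what this looks like in Ngene syntax, the fixed priors in U(A) above can be replaced with normally distributed Bayesian priors using the (n,mean,sd) notation. The standard deviations below are placeholders only and should be replaced with the actual standard errors from the pilot MNL; Bayesian designs also require a Bayesian efficiency measure such as ;eff = (mnl,d,mean):

;eff = (mnl,d,mean)

U(A)= b1.effects[(n,0.0820,0.04)|(n,0.0820,0.04)]*exist[2,1,0]
+ b6[(n,-0.0008,0.0004)]*cost[10,20,30,40,50,60] /

The remaining effects-coded parameters (b2 to b5) follow the same pattern, each with its own mean and standard error from the pilot.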

3. There is no standard practice. I would typically still use the value that comes out of estimation, unless I believe it is too large, as too-large priors can be problematic. "Too large" can be defined as yielding a contribution to utility of more than 1 or 2 utility units.