I have some questions regarding dominant alternatives and repetitions in an unlabelled design.
I have just posted a question on another topic (C-error in WTP efficient designs), so the description of the experimental design may be somewhat redundant (sorry for that).
I have obtained priors from a pilot study to generate a Bayesian D-efficient design. Specifically, I have divided the parameter values (both means and standard errors) by two, and set the prior means to 0 for the parameters that were statistically insignificant in the MNL model. All priors are assumed to follow a normal distribution. Here is the syntax:
Code:
Design
;alts = alt1, alt2, nobuy
;rows = 48
;block = 12
;eff=(mnl,d,mean)
;bdraws=gauss(3)
;con
;cond:
if (alt1.INV=[0], alt1.COM=[0]),
if (alt2.INV=[0], alt2.COM=[0]),
if (alt1.COM=[0], alt1.INV=[0]),
if (alt2.COM=[0], alt2.INV=[0])
;model:
U(alt1) = b1[(n,-0.23,0.04)]*PPPRICE[1,2,3,4]
+ b2.dummy[(n,0.70,0.13)|(n,0.36,0.12)|(n,0.33,0.12)]*STANDARDS[3,2,1,0]
+ b3[(n,3.18,0.44)]*FPRICE[0.05,0.1,0.2,0.3]
+ b4[(n,0,1.60)]*INV[0,0.05,0.10]
+ b5.dummy[(n,0.60,0.18)|(n,0.58,0.18)|(n,0,0.18)]*COM[3,2,1,0]
/U(alt2) = b1*PPPRICE
+ b2*STANDARDS
+ b3*FPRICE
+ b4*INV
+ b5*COM/
U(nobuy)= b0[(n,0,0.24)]$
I would like to control for dominant alternatives and repetitions by adding the stars (*) to alt1 and alt2, as sketched below.
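To be explicit about what I mean, and assuming I understand the manual correctly, the stars would be the only change to the syntax above: the alternatives property would become (a minimal sketch; everything else stays as posted)

Code:
;alts = alt1*, alt2*, nobuy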
My first concern relates to using this option when some prior means are equal to 0, i.e. without a sign, while the other priors have a negative or positive value. I am wondering whether this may cause problems in the generation of the design, especially in the allocation of the attribute levels across the choice tasks.
Second, if I keep the number of choice tasks at 48 and add the stars, I do not obtain any design after ten minutes. If I instead specify 24 choice tasks, Ngene successfully generates designs. Given my number of parameters and alternatives (two product alternatives and the nobuy alternative), 24 choice tasks should be sufficient. As expected, the D-error increases considerably, from 0.5497 (48-row design) to 1.106 (24-row design); the latter is slightly higher than twice the D-error of the 48-row design (2 × 0.5497 = 1.0994). I am wondering which approach is more appropriate (or safer): the larger design without checking for repetitions and dominant alternatives, or the smaller design with a higher D-error but with the stars specified on the unlabelled alternatives. Any suggestion is much appreciated.
Thanks again for your kind attention and support.
Claudia