Experimental design for all 2-way interactions
Posted: Thu Sep 12, 2024 8:18 pm
Hello,
I am a fan of the software, but I only need it every few months to create experimental designs. Therefore, I occasionally have to ask for help again.
For a research study, we would like to conduct a DCE (as Best-Worst Scaling Case 3). Our client wants us to keep open the option of analyzing all possible 2-way interaction effects. We have 3 attributes with 8, 6, and 6 levels, and four unlabeled alternatives are to be used. When calculating the minimum required design size, I arrive at the following:
Attr1: 8-1 = 7
Attr2: 6-1 = 5
Attr3: 6-1 = 5
Attr1xAttr2: (8-1)*(6-1) = 35
Attr1xAttr3: (8-1)*(6-1) = 35
Attr2xAttr3: (6-1)*(6-1) = 25
Constant: 3 (to account for left-right bias)
Total: 7 + 5 + 5 + 35 + 35 + 25 + 3 = 115 parameters; 115 * 3 = 345 rows
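The tally above can be reproduced in a short script (Python used purely for illustration; the attribute names match the design):

```python
from itertools import combinations

# Parameter count for effects-coded main effects plus all 2-way
# interactions, for 3 attributes with 8, 6, and 6 levels.
levels = {"region": 8, "risk": 6, "incentive": 6}

main = {a: n - 1 for a, n in levels.items()}        # L - 1 per attribute
inter = {p: main[p[0]] * main[p[1]]                 # (Li - 1) * (Lj - 1)
         for p in combinations(levels, 2)}
constants = 3  # constants to pick up left-right (position) bias

total = sum(main.values()) + sum(inter.values()) + constants
print(total)      # 115 parameters
print(total * 3)  # 345 -> the row target used above
```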
Here is our experimental design:
design
;alts = opt1, opt2, opt3, opt4
;eff = (mnl, d, mean)
;rows = 350
;block = 25
;bdraws = halton(500)
;model:
U(opt1) = b0
+ b1.effects[(n,-0.5,0.1)|(n,-0.4,0.1)|(n,-0.4,0.1)|(n,-0.2,0.1)|(n,0.5,0.1)|(n,0.4,0.1)|(n,0.3,0.1)] * region[0,1,2,3,4,5,6,7]
+ b2.effects[(n,0.4,0.1)|(n,0.3,0.1)|(n,0.2,0.1)|(n,-0.2,0.1)|(n,-0.3,0.1)] * risk[0,1,2,3,4,5]
+ b3.effects[(n,0.4,0.1)|(n,0.3,0.1)|(n,0.2,0.1)|(n,-0.2,0.1)|(n,-0.3,0.1)] * incentive[0,1,2,3,4,5]
/
U(opt2) = b0
+ b1 * region
+ b2 * risk
+ b3 * incentive
/
U(opt3) = b0
+ b1 * region
+ b2 * risk
+ b3 * incentive
/
U(opt4) = b1 * region
+ b2 * risk
+ b3 * incentive
$
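As a quick sanity check on the dimensions above (assuming each respondent completes exactly one block):

```python
# Implications of 350 rows in 25 blocks for a sample of 150 respondents,
# assuming one block per respondent.
rows, blocks, respondents = 350, 25, 150

tasks_per_respondent = rows // blocks
respondents_per_block = respondents // blocks

print(tasks_per_respondent)   # 14 choice tasks per respondent
print(respondents_per_block)  # 6 respondents per block
```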
We have specified 350 rows in 25 blocks and are using effects coding. Because our sample is limited to 150 participants, we did not conduct a pilot study to collect empirical priors; instead, the priors are based on informed guesses. It is possible that the priors do not reflect reality, especially for the attribute "region". At first glance, the results and design properties look quite good.
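For reference, the effects coding used above maps each L-level attribute to L − 1 columns, with the last level coded as a row of −1s so the level coefficients sum to zero. A minimal sketch of the scheme (hypothetical helper, not Ngene output):

```python
def effects_code(level, n_levels):
    # Effects coding: levels 0..n-2 get a unit vector; the last level
    # gets a row of -1s, so coefficients sum to zero across levels.
    if level == n_levels - 1:
        return [-1.0] * (n_levels - 1)
    row = [0.0] * (n_levels - 1)
    row[level] = 1.0
    return row

# A 6-level attribute such as risk yields 5 coded columns:
print(effects_code(0, 6))  # [1.0, 0.0, 0.0, 0.0, 0.0]
print(effects_code(5, 6))  # [-1.0, -1.0, -1.0, -1.0, -1.0]
```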
Our questions:
1) Is the design good (efficient) enough to analyze all main and all possible 2-way interaction effects?
2) How much could the relatively small sample size of 150 affect the estimation accuracy?
3) The design specifies a constant, b0, which is only needed to test for possible left-right bias. Does the constant still need to be included in the utility functions when generating the design, and does including it affect the design's efficiency?
4) Since we did not conduct a pilot study and the priors are based on 'good guesses' (especially for the 'Region' attribute), how sensitive is the design to possible incorrect assumptions about the priors? How can I minimize the risk of incorrect priors?
5) What could be changed or improved in the syntax?
I look forward to any feedback. Thank you very much!
Andrew