Hi ChoiceMetrics team,
I have a few questions regarding experiment design in Ngene:
1. While developing an efficient experimental design, I generated preliminary choice probabilities for each alternative from my priors. Averaged across the 36 choice situations, the probabilities for alt1, alt2, and alt3 are 41%, 35%, and 24%, respectively. However, when I checked which alternative had the highest probability in each choice situation, alt1 was highest in 23 situations, alt2 in 11, and alt3 in only 2. This makes me wonder whether alt3 is too weak and alt1 too dominant.
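For reference, this is how I produced the tally above. The sketch below uses NumPy with made-up probabilities purely for illustration; `probs` stands in for the 36 x 3 matrix of prior choice probabilities that Ngene reports:

```python
import numpy as np

# Hypothetical 36 x 3 matrix of prior choice probabilities
# (rows = choice situations, columns = alt1..alt3); filled with
# random draws here only so the script runs stand-alone.
rng = np.random.default_rng(0)
raw = rng.random((36, 3))
probs = raw / raw.sum(axis=1, keepdims=True)  # each row sums to 1

avg = probs.mean(axis=0)                    # average probability per alternative
winners = probs.argmax(axis=1)              # most likely alternative per situation
counts = np.bincount(winners, minlength=3)  # "wins" per alternative

print("average probabilities:", np.round(avg, 3))
print("highest-probability counts:", counts)  # the counts sum to 36
```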
2. According to the Ngene documentation, B-error serves as an indicator of utility balance across alternatives, with a recommended range of 0.7 to 0.9. In my design, the per-situation B-error values range from 0.049 to 0.998, with an average of 0.794. Should I consider removing choice situations whose B-error falls outside this range, even if doing so reduces the design below the planned 36 situations?
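For context on how I interpret the per-situation values: the utility-balance statistic is commonly defined as B = J^J * prod(P_j) for a situation with J alternatives, which equals 1 when all alternatives are equally likely and approaches 0 as one alternative dominates. I am assuming Ngene's statistic follows this definition; a minimal sketch:

```python
import numpy as np

def utility_balance(p):
    """Utility balance for one choice situation.

    p: choice probabilities for the J alternatives (summing to 1).
    Returns a value in (0, 1]: 1 under perfect balance, near 0
    when one alternative dominates. Assumes the common definition
    B = J**J * prod(p); Ngene's exact statistic may differ.
    """
    p = np.asarray(p, dtype=float)
    J = p.size
    return float(J**J * np.prod(p))

print(utility_balance([1/3, 1/3, 1/3]))    # ~1.0 (perfect balance)
print(utility_balance([0.41, 0.35, 0.24])) # my average probabilities
print(utility_balance([0.98, 0.01, 0.01])) # strongly dominated situation
```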
3. Would using the mfederov algorithm instead of swap help avoid dominance by one alternative? I noticed, however, that mfederov may not support continuous attributes or mimicking attribute levels across alternatives. Is there another approach that could achieve better balance without these limitations?
4. My design involves 31 parameters in total. Are 36 choice situations sufficient to estimate these parameters reliably? Additionally, if I move to a Bayesian efficient design, how many parameters would you recommend specifying as random?
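My own back-of-the-envelope check uses the usual degrees-of-freedom rule of thumb, under which each choice situation with J alternatives contributes J - 1 independent pieces of information, so a design needs at least ceil(K / (J - 1)) rows for K parameters (I realise this is only a necessary minimum, not a guarantee of reliable estimation):

```python
import math

K = 31  # parameters to estimate
J = 3   # alternatives per choice situation

# Each choice situation contributes J - 1 degrees of freedom,
# so the design needs at least ceil(K / (J - 1)) rows.
min_rows = math.ceil(K / (J - 1))
print(min_rows)  # 16 -> my 36 rows satisfy this minimum
```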
Thank you very much for your guidance!