(Your guidance is hugely appreciated!)
The model averaging approach looks like a good solution to the challenges outlined above. (The Bliemer, Scarpa and Rose paper on this very topic gives a comprehensive overview of the methodology and theory.)
In terms of adopting this approach for my research:
- As you have previously advised in the literature (and in this thread), the mixed logit specification carries a significant computational burden, so a design can be generated more practically by applying the model averaging approach. However, how many models would I need to specify to generate an efficient design for rppanel if I also need to incorporate uncertainty over (i) interaction terms and (ii) dummy vs. effects coding? If I specify M1 as MNL and M2 as rppanel, how many additional models should be outlined to cover these other uncertainties? Treating each source of uncertainty as binary would seem to imply a full enumeration of 2 (MNL vs. rppanel) x 2 (with vs. without interactions) x 2 (dummy vs. effects coding) = 8 candidate models; is the full set necessary, or can the design be averaged over a smaller subset? (A rough sketch of what I have in mind is given after this list.)
- Given the nature of the mixed logit framework, we know that assumptions about the population distributions of the parameters can affect the estimates. Greene & Hensher (2002) advise that a uniform distribution can be a useful assumption for dummy variables. However, I aim to specify all of my explanatory variables (3 continuous and 3 categorical with 4 levels each) as normally distributed random parameters, because the uniform specification appears unrealistic and restrictive in my case: is this a logical conclusion?
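To make the first question concrete, below is a rough sketch of how I imagine a two-model averaged design (MNL plus rppanel, with equal weights on the two models' D-errors) might be written, loosely based on my reading of the manual's section on designs optimised for multiple models. The attribute names, levels, priors, weights and the ;rdraws setting are all placeholders of my own, and I may well have some of the tags wrong, so please correct me if so:

```
design
;alts(m1) = alt1, alt2
;alts(m2) = alt1, alt2
;rows = 24
;eff = 1*m1(mnl,d) + 1*m2(rppanel,d)
;rdraws = halton(200)
;model(m1):
U(alt1) = b1[0.4]*x1[0,0.5,1] + b2.dummy[0.2|0.4|0.6]*a1[1,2,3,0] /
U(alt2) = b1*x1 + b2*a1
;model(m2):
U(alt1) = b1[n,0.4,0.1]*x1[0,0.5,1] + b2.dummy[0.2|0.4|0.6]*a1[1,2,3,0] /
U(alt2) = b1*x1 + b2*a1
$
```

Presumably the models covering the interaction terms and the effects-coded specification would be added in the same way (further ;model(...) blocks plus extra terms in ;eff), which is what prompts the question above about how many are really needed.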
Also, does Ngene support distributional specifications for random parameters other than the normal and uniform?
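For reference, this is my current working draft of the rppanel specification, with the three continuous attributes (placeholder names x1-x3 and made-up priors) given normally distributed random parameters. I have left the coefficients of the three four-level categorical attributes (a1-a3, dummy coded with the last listed level as the base) fixed for the moment, because I am unsure how, or whether it is sensible, to make them normally distributed as well; that is essentially the question above:

```
design
;alts = alt1, alt2
;rows = 24
;eff = (rppanel, d)
;rdraws = halton(200)
;model:
U(alt1) = b1[n,0.3,0.1]*x1[0,0.5,1]
        + b2[n,-0.2,0.1]*x2[10,20,30]
        + b3[n,0.4,0.1]*x3[1,2,3]
        + c1.dummy[0.2|0.4|0.6]*a1[1,2,3,0]
        + c2.dummy[0.1|0.3|0.5]*a2[1,2,3,0]
        + c3.dummy[0.2|0.3|0.4]*a3[1,2,3,0] /
U(alt2) = b1*x1 + b2*x2 + b3*x3 + c1*a1 + c2*a2 + c3*a3
$
```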
Cheers for the support!