My colleagues and I conducted a DCE with a minimal D-efficient design. In other words, we used as few choice sets as possible, because our topic is complex and we expected answering the choice sets to be highly cognitively demanding.
Currently we are modelling our data and I have a few questions. Because we included 9 choice sets, from a statistical point of view we can only include 9 parameters in our analysis. In our case, given the number of attribute levels and their coding, this means we can only estimate an MNL model. However, we do have 550 respondents, so we also tried a random parameters (RP) model. Nlogit was able to fit an RP model with proper model fit (a better fit than the MNL model based on the AIC and log-likelihood), and the retrieved parameter estimates make sense and do not differ much from the MNL results.
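To be clear about what we mean by taking the panel structure into account and by the AIC comparison, below is a rough sketch in Python (not our actual Nlogit syntax; all dimensions, attribute data and choices are placeholders) of the simulated log-likelihood behind a panel RP/mixed logit with normally distributed coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, J, K = 550, 9, 3, 8      # respondents, choice sets, alternatives, attributes (placeholders)
R = 200                        # simulation draws per respondent

X = rng.normal(size=(N, T, J, K))      # attribute levels (placeholder data)
y = rng.integers(0, J, size=(N, T))    # observed choices (placeholder data)

def sim_loglik(beta_mean, beta_sd):
    """Simulated panel log-likelihood: average each respondent's
    sequence probability over draws of the random coefficients."""
    ll = 0.0
    for n in range(N):
        draws = beta_mean + beta_sd * rng.normal(size=(R, K))      # coefficient draws for person n
        v = np.einsum('tjk,rk->rtj', X[n], draws)                  # utilities: (R, T, J)
        p = np.exp(v - v.max(axis=2, keepdims=True))
        p /= p.sum(axis=2, keepdims=True)                          # choice probabilities per draw
        chosen = p[:, np.arange(T), y[n]]                          # probability of observed choice: (R, T)
        ll += np.log(chosen.prod(axis=1).mean())                   # product over the 9 sets, mean over draws
    return ll

def aic(loglik, n_params):
    """AIC = 2k - 2*LL, as we used informally to compare MNL and RP."""
    return 2 * n_params - 2 * loglik
```

In Nlogit the estimator handles all of this, of course; the sketch is only meant to make explicit that the RP model pools the 9 choices per respondent into a single likelihood contribution, which is how the panel structure enters.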
- Would you advise presenting the RP model results even though we technically do not have enough degrees of freedom (and then reporting this limitation in the discussion)? Or would you advise sticking with the MNL results (which means that preference heterogeneity and the panel structure of our data cannot be taken into account)?
Although Nlogit fitted a proper RP model with all attributes included as random parameters, the constant term (i.e., modelling either of the two program options against the opt-out) cannot be modelled as random. If this constant is included as a random parameter, the model 'explodes': the constant gets an estimate of about 12 with an SE of about 2 (SD = 10 and SE = 1.5). This is obviously an indication that the model cannot be fitted this way.
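To show what we mean by 'explodes', here is a quick back-of-the-envelope simulation (Python, not Nlogit output; it uses roughly the estimates quoted above and reduces the program-vs-opt-out choice to a binary logit driven by the constant alone, which is a simplification):

```python
import numpy as np

# Illustration only: draw the random constant with roughly the estimated
# mean (~12) and standard deviation (~10) and look at the implied
# probability of choosing a program over the opt-out when the constant
# dominates the utility difference.
rng = np.random.default_rng(1)
asc_draws = rng.normal(loc=12.0, scale=10.0, size=100_000)

p_opt_in = 1.0 / (1.0 + np.exp(-asc_draws))     # logistic transform of the draws

share_extreme = np.mean((p_opt_in < 0.01) | (p_opt_in > 0.99))
print(f"Share of draws implying a near-degenerate opt-in probability: {share_extreme:.2f}")
```

Under these values most simulated respondents end up with essentially deterministic opt-in/opt-out behaviour, which is consistent with the unstable estimates we observe.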
- What could explain this? Does it imply multicollinearity between the constant and the other attribute estimates? Can we 'just' conclude that the constant should be held fixed and estimate an RP model with only the attributes specified as random? Is this due to our minimal design? Or are there other explanations?
I hope you can answer these questions.
Kind regards
Jorien