Hi all,
I have been tasked with analysing responses to a choice experiment with 4 nominal attributes, each with either 2 or 3 levels. Respondents saw 9 choice situations, each consisting of 2 unlabeled alternatives and an opt-out. The odd thing, to me, is that 48 D-efficient design matrices were deployed in an effort to improve level balance. Respondents were randomly assigned to one of the 48 versions and are distributed evenly across them. There are many thousands of respondents.
I have designed and analysed a few choice experiments in the past. With this data set, however, choice models (MNL, MIXL, ...) either do not converge at all or appear to converge but produce singularity warnings or errors that cast doubt on the results. I am familiar with my software (R), have tried various approaches and packages to no avail, and have double-checked my data, so I am confident the problem lies neither in the model-fitting code nor in the data preparation.
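For concreteness, the kind of baseline call that already misbehaves looks roughly like this (mlogit is just one of the packages I tried, and the data-frame and column names are placeholders for my actual variables):

library(mlogit)

## Long format: one row per alternative per choice situation, a logical
## 'choice' column, and the four attributes coded as factors.
cedata <- mlogit.data(ce_long, choice = "choice", shape = "long",
                      alt.var = "alt", chid.var = "chid")

## Main-effects MNL, no alternative-specific constants in this stripped-down
## sketch; this is where the singularity warnings / non-convergence appear.
m1 <- mlogit(choice ~ att1 + att2 + att3 + att4 | 0, data = cedata)
summary(m1)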
My question is: could the decision to pool design matrices be causing my estimation problems? I can imagine, for instance, that the 48 designs meet the requirements of design theory individually but not collectively, so that some attribute-level contrasts become confounded once the versions are stacked. Or maybe I am just unlucky with this data set. I would greatly appreciate all thoughts and comments.
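To make the worry concrete, this is the kind of check I am considering running on the pooled data, again with placeholder names (ce_long holds one row per alternative shown, att1 to att4 are factors, version records the design version):

## Dummy-coded model matrix of the pooled attribute levels.
X <- model.matrix(~ att1 + att2 + att3 + att4, data = ce_long)

## If the 48 designs are collectively deficient, X is rank-deficient.
qr(X)$rank   # should equal ncol(X)
ncol(X)

## Level balance of each attribute across the 48 versions.
with(ce_long, table(att1, version))

I realise that for a conditional logit it is ultimately the within-choice-set variation that identifies the coefficients, so a full-rank pooled matrix would not settle the question by itself, but a rank deficiency here would at least point at the pooling.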
Regards,
Florian