Monadic Exposure vs Experimental Design

This forum is for posts covering broader stated choice experimental design issues.

Moderators: Andrew Collins, Michiel Bliemer, johnr

Monadic Exposure vs Experimental Design

Postby rich_imr » Tue Feb 12, 2019 4:36 am

Greetings Forum,

I was wondering if I could get some direction on using monadic exposure vs experimental designs in discrete choice experiments.

Imagine a simple hypothetical discrete choice experiment where the researcher wants to test two different versions of a currently available product. For example, suppose that for one attribute the product currently has a low-quality component, and the researcher wants to test what would happen if a medium- or high-quality component were incorporated instead.

If one chose to use a monadic cell design, i.e. half of the sample is exposed to the product with the medium-quality component and the other half is exposed to the product with the high-quality component, does it make sense to estimate a CMNL model and simulate preference shares for the medium- and high-quality components across the entire sample?

To me, it seems that simulating shares across the entire sample would be incorrect because, for each half of the sample, preferences were elicited for only one of the possible stimuli. Sample differences across the cells will also affect the estimated preferences for the components. Therefore, stacking the monadic cells into one data set, estimating an alternative-specific parameter for the quality attribute, and simulating shares for the medium/high components doesn't make sense, because you're just applying one cell's preferences to the other cell. The sample composition affects the quality attribute parameter, and the parameter could also have been different if the entire sample had seen each stimulus, as would be the case in a discrete choice experiment where each respondent sees two choice sets, one for each quality component.

So in this hypothetical example, I would simply run two MNL models, one for each quality component, to test the statistical significance of respondent characteristics. Then, to correct for bias in the preference shares, I would balance the sample distributions within each cell to population targets. Is this a conventional way to model monadic cell data?
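To make the weighting step concrete, here is a minimal sketch in Python of what I have in mind: two separately estimated cell models whose quality parameter shifts with a 0/1 respondent characteristic, and simple post-stratification weights that balance each cell to a common population target. All coefficients, the characteristic x, and the target are placeholders, not estimates from any real study.

import numpy as np

def shares(b_price, b_qual, b_qual_x, x, weights, prices=(3.0, 3.5)):
    # Weighted aggregate shares for (current product, upgraded component).
    # x is a 0/1 respondent characteristic; weights are post-stratification weights.
    u0 = b_price * prices[0]                           # current product
    u1 = b_price * prices[1] + b_qual + b_qual_x * x   # upgraded component
    p1 = 1.0 / (1.0 + np.exp(u0 - u1))                 # binary logit P(choose upgrade)
    p1 = np.average(p1, weights=weights)               # cell-level weighted share
    return np.array([1.0 - p1, p1])

rng = np.random.default_rng(1)
x_med = rng.integers(0, 2, 300)     # characteristic in the medium cell
x_high = rng.integers(0, 2, 300)    # characteristic in the high cell

target = 0.4                        # assumed population share with x = 1
def rake(x):                        # simple cell weights hitting the target
    return np.where(x == 1, target / x.mean(), (1 - target) / (1 - x.mean()))

print("medium cell shares:", shares(-0.9, 0.5, 0.3, x_med, rake(x_med)))
print("high cell shares:  ", shares(-0.7, 0.9, 0.3, x_high, rake(x_high)))

The point of the weights is that the aggregate share only moves under reweighting if preferences vary with the respondent characteristic, which is why I would test those interactions first.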

Finally, I see the benefit of monadic cell designs in cases where one wants to ensure there is no bias in responses from exposure to multiple stimuli, but I don't think it is then appropriate to simulate shares across the entire sample using a CMNL model.

Any guidance is greatly appreciated!

RT
rich_imr
 
Posts: 12
Joined: Wed Oct 21, 2015 2:52 am

Re: Monadic Exposure vs Experimental Design

Postby Michiel Bliemer » Tue Feb 12, 2019 9:10 am

I am not sure whether I completely understand the terminology, as I am not familiar with the terms "monadic cell" and "CMNL", but I think that you are referring to a within-subject design versus a between-subject design.

If there are two versions of a product, it is preferable to use a within-subject design in which each respondent observes both versions. When comparing preferences across the two versions, you can then rule out differences in the sample, and hence the statistical comparison test between the versions has more power.

In some cases it is not possible, or not preferable, to let each person in the sample observe both versions, so one uses a between-subject design in which the population is split into sub-samples and each sub-sample observes one version only. You can estimate models separately on the sub-sample data sets, or you can estimate a simultaneous model on the pooled data set. When you estimate separate models, you need to compare parameters across two data sets, which is somewhat problematic because there could be scale differences, and the statistical test would also lose power.

Therefore, in most cases one pools the two data sets and estimates a model that corrects for scale differences, for example a nested logit model or an error component model in which all parameters are estimated simultaneously. Statistical comparison between parameters of different versions in a simultaneous model has more power. Essentially, you avoid biased parameters by explicitly accounting for scale differences and by including interaction effects between the version shown to the respondent and all the attributes (i.e. you can extract preferences for each version separately).
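As a rough illustration of the pooled approach, here is a small Python sketch with synthetic data. It estimates a simple MNL on the stacked between-subject data with an interaction between the version shown and the quality attribute; the scale correction mentioned above is omitted for brevity and would require, for example, a nested logit or error component specification. All names and numbers are illustrative only.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_resp, n_alts = 500, 2
version = rng.integers(0, 2, n_resp)          # 0 = medium cell, 1 = high cell
price = rng.uniform(1, 5, (n_resp, n_alts))   # generic attribute
quality = np.zeros((n_resp, n_alts))
quality[:, 1] = 1.0                           # alternative 2 carries the new component

true_b = np.array([-0.8, 0.6, 0.4])           # price, quality (medium), extra effect (high)
v = true_b[0] * price + (true_b[1] + true_b[2] * version[:, None]) * quality
choice = np.array([rng.choice(n_alts, p=np.exp(u) / np.exp(u).sum()) for u in v])

def negll(b):
    # Utility: b0*price + (b1 + b2*version)*quality -> version-by-attribute interaction
    u = b[0] * price + (b[1] + b[2] * version[:, None]) * quality
    u = u - u.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_resp), choice]).sum()

fit = minimize(negll, np.zeros(3), method="BFGS")
print("estimates:", fit.x)   # b2 tests whether the high cell values quality differently

The estimate of the interaction parameter (b2 here) directly tests whether the two sub-samples value the quality component differently, which is the comparison you are after.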

I hope that this answers your question.

Michiel
Michiel Bliemer
 
Posts: 1705
Joined: Tue Mar 31, 2009 4:13 pm

