
Different results for each block

PostPosted: Mon Jun 10, 2024 5:21 pm
by sam197902
Hi Michiel,

We are currently working on a project evaluating publication practices among researchers. The DCE has 6 attributes: one attribute with 3 levels and the remaining five with 2 levels each. We generated a Bayesian D-efficient design with 24 rows blocked into 3 blocks, collected responses from 504 respondents, and analysed the data using a mixed MNL model. Separate analyses for each of the three blocks revealed distinct differences in the results between blocks (this was an incidental finding rather than a planned analysis). Should we be concerned about these differences? We evaluated the level balance of the overall DCE design, but not of each individual block.

Thanks
Sameera

Re: Different results for each block

PostPosted: Tue Jun 11, 2024 8:15 am
by Michiel Bliemer
Each block only has 8 different choice tasks. How many parameters do you have (including dummies and constants)? And how many alternatives? Using only 8 choice tasks may be too small to get reliable parameter estimates as the model does not have many degrees of freedom with such a limited set of different data points. An experimental design is only complete when combining all blocks. Attribute level balance may also vary across blocks, maybe some blocks have more low levels while others have more high levels. Perfect blocking is only possible with orthogonal designs.
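For example, a design can be perfectly level balanced overall while individual blocks are not. A toy illustration in Python (a made-up design matrix chosen to show the effect, not your actual design):

```python
from collections import Counter

# Made-up design: 24 rows in 3 blocks of 8 tasks.
# Each row: (block, level of a 3-level attribute A, level of a 2-level attribute B).
# This toy assignment deliberately confounds block with attribute A.
design = [(row % 3 + 1, row % 3, row % 2) for row in range(24)]

def level_counts(rows, attr_index):
    """Count how often each level of one attribute appears."""
    return Counter(r[attr_index] for r in rows)

# Overall, attribute A is perfectly balanced (8 appearances per level)...
print("overall A:", level_counts(design, 1))

# ...but each block contains only a single level of A.
for b in (1, 2, 3):
    block_rows = [r for r in design if r[0] == b]
    print(f"block {b} A:", level_counts(block_rows, 1))
```

An efficient design with blocking will rarely be this extreme, but some imbalance within blocks is normal and can shift block-level estimates.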

Are the results also very different using MNL without random coefficients? It is not common to estimate models on small blocks and I am not sure if I would trust the estimates on such limited sets of data points. Do the parameters when estimating on all blocks together make sense?

Michiel

Re: Different results for each block

PostPosted: Tue Jun 11, 2024 1:43 pm
by sam197902
Thanks, Michiel.
Sorry, I should have provided you with more details.

The DCE was an unlabelled DCE with 2 alternatives. The model had 8 parameters (1 ASC + 7 dummy-coded coefficients).
The results of the MNL and mixed MNL models were similar. And yes, when all blocks were analysed together, the results made sense.

Do you think that interacting the ASC with the block number would adjust the results for the overall differences between the blocks? Please see the following code.

U(1) = Rank_H*FIE_1 + Rank_M*FIE_2 + Sty_Min*FORMAT + Speed*DECISION
     + REVIEW*REVIEW + EDITOR*EDITOR + EVIDENCE*EVID
     + ASC_1*BLOCK1 + ASC_2*BLOCK2 + ASC_3*BLOCK3

The results from the specification above were similar (though not identical) to those obtained when all the blocks were analysed together.

Thank you very much for your guidance!

Sameera

Re: Different results for each block

PostPosted: Tue Jun 11, 2024 3:24 pm
by Michiel Bliemer
With 8 parameters and 2 alternatives, the minimum number of choice tasks you need is 8, so each block sits exactly at that minimum with no degrees of freedom to spare. I would say that estimating models on blocks of 8 choice tasks is not a good idea.
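That minimum comes from the common rule of thumb S >= K / (J - 1), where K is the number of parameters and J the number of alternatives per choice task. A quick sketch (plain Python, just to show the arithmetic):

```python
import math

def min_choice_tasks(num_params, num_alts):
    """Rule of thumb for the minimum number of distinct choice tasks:
    S >= K / (J - 1), with K parameters and J alternatives per task."""
    return math.ceil(num_params / (num_alts - 1))

# 8 parameters (1 ASC + 7 dummies), 2 alternatives
print(min_choice_tasks(8, 2))  # -> 8, i.e. every task in a block is needed
```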

Interacting the ASC with the block number would only account for differences in left-to-right bias across blocks, but it does not account for differences in preferences towards the attributes. For that, you would need to interact all attributes with block dummies. But then you end up with the same problem.
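To see why you end up with the same problem, you can count the parameters under a full attribute-by-block interaction (illustrative Python, assuming the 7 dummy-coded attribute coefficients and block-specific ASCs from the specification earlier in this thread):

```python
import math

# Assumed model structure: 7 dummy-coded attribute coefficients,
# each interacted with 3 block dummies, plus one ASC per block.
attributes = 7
blocks = 3
alternatives = 2

total_params = attributes * blocks + blocks  # 21 interactions + 3 ASCs = 24

# Same rule of thumb: minimum distinct choice tasks S >= K / (J - 1)
min_tasks = math.ceil(total_params / (alternatives - 1))

print(total_params, min_tasks)  # 24 parameters would require 24 distinct
                                # tasks, i.e. the entire 24-row design
```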

I am not exactly sure what the purpose of such an analysis would be; it is very unusual to compare results across blocks of data. Since blocks contain only a limited number of choice tasks and do not have enough variety to allow reliable estimation on such a restricted subset of data, I would not recommend it.

Michiel

Re: Different results for each block

PostPosted: Tue Jun 11, 2024 4:19 pm
by sam197902
Great.
Thanks for the advice, Michiel.