by johnr » Sun Feb 18, 2018 9:38 am
I'm not sure whether I'm a fan or not, but some people use best-worst experiments to cull large lists of attributes. They take a two-step approach: study one uses BW, and study two uses a DCE with attributes drawn from the BW study. That is, the attributes themselves are the alternatives in the best-worst experiment (not the levels), and people choose the most important/least important attribute out of each set. This gives you a relative ranking of attribute importance, and the top X attributes are then carried into the follow-up DCE (a rough sketch of the counting analysis is below). I've seen this done successfully on multiple occasions.

I'm a little sceptical for a few reasons, however. Firstly, the data generation processes behind the BW and DCE tasks are probably very different: in one you are eliciting importance rankings between attributes, and in the other you are trading off levels of attributes. For example, price may be unimportant if the product costs 20 cents, but hugely important if it costs $1,000, so importance measured without levels may not carry over.

Secondly, I'd guess that how the BW task is framed also matters to the outcome. I saw this done in a health context once, where the BW task ranked the attributes, but it wasn't clear to me (or to the researcher when I discussed it with her) whether you should include only the top X attributes, or a mix of top and bottom attributes. In her case, the ordering wasn't attribute importance but some sort of ranking of attribute preference: the lower-ranked ones were all the negative attributes, while the top-ranked ones were the positive attributes. It made no sense to me at the time to include only positively viewed attributes in the DCE, as the negative ones could also influence choice in the DCE.
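For what it's worth, here is a minimal sketch of the kind of first-stage counting analysis I'm describing, using the standard best-minus-worst score (times chosen best minus times chosen worst, divided by times shown) to rank attributes. The attribute names, the toy data, and the cutoff X are all hypothetical, and a real study would likely estimate a proper BW model rather than rely on raw counts:

[code]
from collections import Counter

# Each record is one BW task: the subset of attributes shown to the
# respondent, plus the ones picked as most and least important.
tasks = [
    {"shown": ["price", "quality", "brand", "warranty"],
     "best": "quality", "worst": "brand"},
    {"shown": ["price", "delivery", "warranty", "brand"],
     "best": "price", "worst": "delivery"},
    # ... more respondents / tasks ...
]

best, worst, shown = Counter(), Counter(), Counter()
for t in tasks:
    best[t["best"]] += 1
    worst[t["worst"]] += 1
    for a in t["shown"]:
        shown[a] += 1

# Simple BW score: (times best - times worst) / times shown.
# This yields a relative importance ranking of the attributes.
scores = {a: (best[a] - worst[a]) / shown[a] for a in shown}
ranking = sorted(scores, key=scores.get, reverse=True)

X = 3  # hypothetical cutoff for the follow-up DCE
print("ranking:", ranking)
print("attributes carried into the DCE:", ranking[:X])
[/code]

Note that this is exactly where my two concerns bite: the score ranks attributes without reference to their levels, and the cutoff rule (top X only, or a mix of top and bottom) is left entirely to the researcher.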
It's an intriguing idea, and as I said, where I have seen it done it appears to have worked. Would I advocate doing it in practice? I don't know, but it's not too dissimilar to the use of qualitative research to select attributes, I guess, so there might be something there.