Model for DCE design and analysis

This forum is for posts covering broader stated choice experimental design issues.

Moderators: Andrew Collins, Michiel Bliemer, johnr

Model for DCE design and analysis

Postby quynhanhho95 » Fri Mar 08, 2024 1:06 pm

Dear Moderators,

I have some questions regarding the model used in design and the models used for analysis. My apologies if these questions are not appropriate for this forum, but I hope you can help me clarify a few problems.

My DCE design has 6 attributes (com, quiz, screen, time, therapist, cost); time, therapist, and cost have 4 levels, and the others have 2 levels.
I used a D-efficient design for an MNL model, specifying cost as a continuous, linear variable and the other attributes as dummy variables. Cost is assumed to be continuous because I want to estimate WTP. I also included the interaction cost*therapist (prior = 0), as my earlier interviews suggested some correlation between cost and therapist.

Now I am analysing my DCE data in Stata. I plan to use a mixed logit as my final model. I have run into a few problems:

1/ How do I check whether I should include the cost*therapist interaction in my final model?
What I have done so far: I started with an MNL model (main effects only), as usually suggested. Should I then run an MNL model (main effects + the three interactions cost*therapist1, cost*therapist2, cost*therapist3) and test the significance of the interactions, OR should I run an MNL model (main effects + all two-way interactions between attributes) and test the significance of the interactions?

2/ Will the interpretation be affected, and more complicated, if I include cost*therapist in my mixed logit model? My understanding is that the impact of therapist/cost on utility would then be distributed across therapist, cost, and cost*therapist.
Can I ignore the interactions in my final mixlogit model, even though I included them in my design?

3/ I also want to compare the results of the DCE and BWS. In BWS, the MNL model is specified with all attributes dummy-coded. All previous studies I have seen used the same model for both methods and rescaled the estimates from both methods for comparison (refs: Krucien et al. 2017, https://doi.org/10.1002/hec.3459, and Whitty et al. 2014, https://doi.org/10.1177/0272989X14526640). In this case, I am wondering whether it is OK to re-specify my DCE model by changing cost from a continuous variable to dummy variables, which results in a model different from the one I used in my design.

I hope my questions make sense to you.

Thank you very much and look forward to your reply.

Anh
quynhanhho95
 
Posts: 3
Joined: Thu Mar 07, 2024 9:55 pm

Re: Model for DCE design and analysis

Postby Michiel Bliemer » Fri Mar 08, 2024 3:40 pm

1) You would typically use a likelihood ratio test to compare models with and without certain interactions.
General model: MNL with interactions cost*therapist
Restricted model: MNL without interactions cost*therapist

If you have three interaction effects, then you are estimating three additional parameters, and hence the Chi^2 distribution for testing the hypothesis that the General model has a significantly better fit than the Restricted model has 3 degrees of freedom.
You can also look at the statistical significance of the three parameters individually, but a likelihood ratio test is the better way.
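As a rough illustration, the LR statistic can be computed by hand from the two log-likelihoods; a minimal sketch, where the log-likelihood values below are hypothetical:

```python
# Likelihood ratio test: general vs. restricted MNL (hypothetical log-likelihoods)
ll_restricted = -921.9   # MNL, main effects only
ll_general = -898.6      # MNL, main effects + 3 cost*therapist interactions

lr_stat = 2 * (ll_general - ll_restricted)  # ~ Chi^2 with df = 3 under H0
chi2_crit_95 = 7.815                        # Chi^2 critical value, df = 3, alpha = 0.05

print(f"LR statistic = {lr_stat:.2f}")
if lr_stat > chi2_crit_95:
    print("Reject H0: the interactions significantly improve model fit")
```

In Stata the same test is available via `lrtest` after storing both estimation results.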

2) If you include cost*therapist in your utility function, then you would have
U = ... + b1*X + (b2 + b3*therapist1 + b4*therapist2 + b5*therapist3)*cost + ...
In other words, your cost coefficient would be b2+b3 in the case of therapist1, b2+b4 for therapist2, and b2+b5 for therapist3. This means you would obtain four WTP values for each attribute X, namely one for each type of therapist. This is fairly straightforward.
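To make this concrete, here is a small sketch of the WTP calculation under that utility specification; all coefficient values (b1..b5) below are made up for illustration:

```python
# WTP for attribute X under the interacted cost specification (hypothetical estimates)
b1 = 0.50                            # coefficient of attribute X
b2 = -0.020                          # base cost coefficient (reference therapist level)
b3, b4, b5 = -0.010, -0.005, 0.008   # cost*therapist1..3 interaction coefficients

# Effective cost coefficient per therapist type
cost_coef = {
    "reference":  b2,
    "therapist1": b2 + b3,
    "therapist2": b2 + b4,
    "therapist3": b2 + b5,
}

# WTP = -(coefficient of X) / (effective cost coefficient): one value per therapist
wtp = {t: -b1 / c for t, c in cost_coef.items()}
for t, v in wtp.items():
    print(f"WTP for X given {t}: {v:.2f}")
```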

You can certainly exclude interactions from your final model even though you included them in the design; you are free to deviate from the specification you used in the design phase.

3) I am not very familiar with BWS, but to answer your last question: yes, you can dummy code cost without a problem; this simply assumes a nonlinear effect for cost. Note that for WTP calculations a dummy-coded cost does make things more complicated.

Michiel
Michiel Bliemer
 
Posts: 1733
Joined: Tue Mar 31, 2009 4:13 pm

Re: Model for DCE design and analysis

Postby quynhanhho95 » Wed Mar 13, 2024 1:42 pm

Hi Michiel,

Thank you very much for your explanation. I've used the likelihood ratio test to compare three models:
- Model1: MNL main effects and cost is linear
- Model2: MNL main effects and cost is dummy
- Model3: General model: MNL with interactions cost*therapist (same as my design model)

Model3 performs better than model2 (larger likelihood ratio and Prob(>Chi2) < 0.05).
Model 2 performs better than model1.

Model3 appears to be the best-fitting model. However, there are considerable differences in the effects of therapist1, therapist2, and therapist3 in model3 compared to model2:

--------------------------------------------------------
Variable     |    model1     model2     model3
-------------+------------------------------------------
...
therapist1   |  0.625***   0.637***   1.005*
therapist2   |  0.873***   0.643***   3.139***
therapist3   |  0.881***   0.777***   0.032
cost         | -0.013***             -0.001
cost30       |            -0.552**
cost60       |            -1.391***
cost100      |            -1.150***
cost_thera~1 |                       -0.010
cost_thera~2 |                       -0.051***
cost_thera~3 |                        0.008*
-------------+------------------------------------------
ll           | -921.871   -912.941   -898.642
aic          | 1863.743   1849.882   1823.284
bic          | 1924.551   1922.853   1902.335
--------------------------------------------------------
Legend: * p<0.05; ** p<0.01; *** p<0.001


1/ I am planning to write two papers: paper1 is the DCE, and paper2 is a comparison between the DCE and BWS. I guess I had better use the same DCE model specification in both papers. Or can I use model3 for paper1 and model2 for paper2?

2/ Choosing model2 would help me compare the DCE results with the BWS results using the model-based approach. However, given the likelihood ratio test above, what reasons should I state in the paper for choosing model2 instead of model3? One reason I can think of is the lack of empirical evidence suggesting a combined effect of cost and therapist on the uptake/use of the program (even though this interaction was added during the design phase based on my interviews with 20 people). And can I say something like: excluding these interactions helps facilitate the comparison with BWS later on?

Thank you once again, Michiel, and I look forward to your advice.

Anh
quynhanhho95
 
Posts: 3
Joined: Thu Mar 07, 2024 9:55 pm

Re: Model for DCE design and analysis

Postby Michiel Bliemer » Wed Mar 13, 2024 2:29 pm

1) I think using different model specifications across different papers is fine.

2) Note that the likelihood ratio test is only valid for nested models, where the parameters of one model are a subset of the parameters of the other model. When you dummy code, you change the parameters, and technically the models are no longer nested. For non-nested models, you should compare AIC and BIC to assess model fit. Model fit is not the only criterion, though; others are how explainable the model is and how useful its measures are for interpretation. For example, dummy coding the cost attribute, or estimating a nonlinear specification (e.g. log(cost)), may increase model fit, but it means that WTP is no longer a single value, and that may not be preferred for appraisal. So there are different criteria when assessing models, and model fit is only one of them.
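For non-nested comparisons, AIC and BIC follow directly from the log-likelihood, the number of parameters, and the sample size; a minimal sketch, where the fitted values and sample size below are hypothetical:

```python
import math

def aic(ll, k):
    """Akaike information criterion: 2k - 2*ll (lower is better)."""
    return 2 * k - 2 * ll

def bic(ll, k, n):
    """Bayesian information criterion: k*ln(n) - 2*ll (lower is better)."""
    return k * math.log(n) - 2 * ll

# Hypothetical fits: model A (linear cost) vs. model B (dummy-coded cost)
n = 2000                 # number of choice observations
ll_a, k_a = -912.9, 12   # log-likelihood and parameter count, model A
ll_b, k_b = -898.6, 13   # model B has one extra parameter

print(f"AIC: A = {aic(ll_a, k_a):.1f}, B = {aic(ll_b, k_b):.1f}")
print(f"BIC: A = {bic(ll_a, k_a, n):.1f}, B = {bic(ll_b, k_b, n):.1f}")
```

Note that BIC penalizes extra parameters more heavily than AIC once n exceeds about 8 observations, so the two criteria can disagree on which model to prefer.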

Michiel
Michiel Bliemer
 
Posts: 1733
Joined: Tue Mar 31, 2009 4:13 pm

Re: Model for DCE design and analysis

Postby quynhanhho95 » Thu Apr 04, 2024 9:24 am

Thank you very much, Michiel, for your insightful comments. It's really helpful.
Cheers,
Anh
quynhanhho95
 
Posts: 3
Joined: Thu Mar 07, 2024 9:55 pm

