Dear Ngeners,
We are designing a DCE to elicit patient and nurse preferences for a process that occurs commonly in hospitals. We want to use the same design for both groups so that we can directly compare results. We will have double the sample size for patients relative to nurses, and have already collected pilot data from 20 patients and 10 nurses. This is an unlabelled experiment with two generic alternatives and a status quo (SQ) alternative.
I am in the process of updating the original (D-efficient) design to a Bayesian design using priors estimated from the pilot data. I have estimated separate models for patients and nurses, and now plan to use a model averaging design to incorporate the different sets of priors (mnl,d,mean), as well as to evaluate an rppanel design for patients (we don't have enough data to estimate an rppanel model for nurses). I have managed to obtain one design using the following syntax (model1 = patients, model2 = nurses, model3 = rppanel patients):
Design
;alts(model1) = A*, B*, elsewhere
;alts(model2) = A*, B*, elsewhere
;alts(model3) = A*, B*, elsewhere
;rows = 36
;block = 6
;eff = 2*model1(mnl,d,mean) + model2(mnl,d,mean)
;rdraws = gauss(3)
;bdraws = gauss(3)
;rep = 1000
;model(model1):
U(A) = conA[(n,2.2,0.6)] + b1[(n,0.6,0.2)]*invite[1,2] + b2[(n,-0.1,0.2)]*number[1,2] + b3[(n,0.3,0.2)]*family[1,2] + b4.effects[(n,0.5,0.3)|(n,-0.3,0.2)]*involve[1,2,3]
+ b5[(n,0.4,0.2)]*content[1,2] + b6.effects[(n,-0.3,0.2)|(n,0.2,0.3)]*confid[1,2,3] /
U(B) = conB[(n,2.4,0.6)] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2]
;model(model2):
U(A) = conA[0] + b1[(n,0.3,0.2)]*invite[1,2] + b2[(n,-0.6,0.2)]*number[1,2] + b3[(n,0.4,0.2)]*family[1,2] + b4.effects[(n,0.9,0.4)|0]*involve[1,2,3]
+ b5[(n,1.2,0.3)]*content[1,2] + b6.effects[(n,-0.1,0.4)|(n,0.7,0.5)]*confid[1,2,3] /
U(B) = conB[0] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2]
;model(model3):
U(A) = conA[n,2.5,0.9] + b1[n,0.8,0.4]*invite[1,2] + b2[n,-0.3,0.3]*number[1,2] + b3[n,0.5,0.4]*family[1,2] + b4.effects[n,0.7,0.4|n,-0.6,0.4]*involve[1,2,3]
+ b5[n,0.7,0.5]*content[1,2] + b6.effects[n,-0.4,0.3|n,0.3,0.4]*confid[1,2,3] /
U(B) = conB[n,2.7,0.9] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2] $
After one design is found, I get the following message:
ERROR: A random design could not be generated after 2000000 attempts. There were 0 row repetitions, 1772009 alternative repetitions, and 227991 cases of dominance
Finished, at 8:40:51 PM, 1/11/2015
It makes me quite nervous that only one valid design is found. The problem (I think) is that the SQ alternative was chosen so rarely in the pilot data, to the extent that it was never chosen at all by the nurse group. For patients, we think this probably reflects true preferences; for nurses, we suspect they might not be revealing their "true" preferences, as the SQ alternative reflects common practice but goes against official policy (we are hoping that collecting the data under different circumstances might encourage more participants to consider the SQ alternative). [I should note that the alternatives and levels were carefully chosen after extensive qualitative research; we have good reason to believe that the SQ alternative is currently the dominant choice for nurses.]
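For what it's worth, the size of the estimated constants for A and B relative to the SQ utility already implies very small SQ choice shares under an MNL model, which is presumably what drives the dominance rejections. A rough sketch of this (the utility values below are illustrative placeholders in the spirit of the patient priors, not computed from any specific choice scenario in the design):

```python
import numpy as np

def mnl_shares(v):
    """MNL choice probabilities from a vector of deterministic utilities."""
    e = np.exp(v - np.max(v))  # subtract max for numerical stability
    return e / e.sum()

# Illustrative only: A and B receive the large estimated constants plus a
# modest attribute contribution; the SQ ('elsewhere') has no constant.
v = np.array([2.2 + 0.5,   # alt A
              2.4 + 0.5,   # alt B
              1.0])        # SQ
print(mnl_shares(v))  # SQ share comes out well under 10%
```

Even with generous assumptions about the SQ attribute contributions, the constants alone push the predicted SQ share toward zero, which matches what we saw in the pilot choices.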
My questions are: (i) Is the difficulty in identifying valid designs due to the dominance of the designed (SP) alternatives over the SQ, or are there other problems with my syntax that might be causing it?
(ii) If the former, should we consider dropping the SQ alternative? For patients this may be reasonable, but it is not at all desirable for nurses, as we would like to identify which elements of current practice (which are contrary to the new policy) are most influential in this choice.
Any suggestions would be greatly appreciated.
Thanks in advance,
Jean.