Dropping the Status Quo after piloting?

This forum is for posts that specifically focus on Ngene.

Moderators: Andrew Collins, Michiel Bliemer, johnr

Dropping the Status Quo after piloting?

Postby jmspi » Sun Jan 11, 2015 9:12 pm

Dear Ngeners,

We are designing a DCE to elicit patient and nurse preferences for a particular process that occurs commonly in hospitals - we want to use the same design for both groups so that we can directly compare results. We will have double the sample size for patients as for nurses and have already collected pilot data from 20 patients and 10 nurses. This is an unlabelled experiment with two generic alts and an SQ.

I am in the process of updating the original design (d-efficient) to a Bayesian design using the priors estimated from the pilot data. I have estimated separate models for patients and nurses and now plan to use model averaging to incorporate the different priors (mnl,d,mean), as well as evaluate an rppanel design for patients (we don't have enough data to estimate an rppanel model for nurses). I have managed to obtain one design using the following syntax (model1 = patients, model2 = nurses, model3 = rppanel patients):

Design
;alts(model1) = A*, B*, elsewhere
;alts(model2) = A*, B*, elsewhere
;alts(model3) = A*, B*, elsewhere
;rows = 36
;block = 6
;eff = 2*model1(mnl,d,mean) + model2(mnl,d,mean)
;rdraws = gauss(3)
;bdraws = gauss(3)
;rep = 1000
;model(model1):
U(A) = conA[(n,2.2,0.6)] + b1[(n,0.6,0.2)]*invite[1,2] + b2[(n,-0.1,0.2)]*number[1,2] + b3[(n,0.3,0.2)]*family[1,2] + b4.effects[(n,0.5,0.3)|(n,-0.3,0.2)]*involve[1,2,3]
+ b5[(n,0.4,0.2)]*content[1,2] + b6.effects[(n,-0.3,0.2)|(n,0.2,0.3)]*confid[1,2,3] /
U(B) = conB[(n,2.4,0.6)] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2]
;model(model2):
U(A) = conA[0] + b1[(n,0.3,0.2)]*invite[1,2] + b2[(n,-0.6,0.2)]*number[1,2] + b3[(n,0.4,0.2)]*family[1,2] + b4.effects[(n,0.9,0.4)|0]*involve[1,2,3]
+ b5[(n,1.2,0.3)]*content[1,2] + b6.effects[(n,-0.1,0.4)|(n,0.7,0.5)]*confid[1,2,3] /
U(B) = conB[0] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2]
;model(model3):
U(A) = conA[n,2.5,0.9] + b1[n,0.8,0.4]*invite[1,2] + b2[n,-0.3,0.3]*number[1,2] + b3[n,0.5,0.4]*family[1,2] + b4.effects[n,0.7,0.4|n,-0.6,0.4]*involve[1,2,3]
+ b5[n,0.7,0.5]*content[1,2] + b6.effects[n,-0.4,0.3|n,0.3,0.4]*confid[1,2,3] /
U(B) = conB[n,2.7,0.9] + b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = b1*invitesq[2] + b3*familysq[2] $

After one valid design is found, I get the following message:
ERROR: A random design could not be generated after 2000000 attempts. There were 0 row repetitions, 1772009 alternative repetitions, and 227991 cases of dominance
Finished, at 8:40:51 PM, 1/11/2015

It makes me quite nervous that only one valid design is found. The problem (I think) is that the SQ alt was chosen so rarely in the pilot data (to the extent that it was never chosen at all by the nurse group). In the case of patients we think this probably reflects true preferences; in the case of nurses we think they might not be revealing their "true" preferences, as the SQ alt reflects common practice but goes against official policy (we are hoping that collecting the data under different circumstances might encourage more participants to consider the SQ alt). [I should note that alts & levels were carefully chosen after extensive qualitative research - we have good reason to believe that the SQ alt for nurses is the currently dominant alt].

My questions are: (i) Is the problem with identifying valid designs due to the dominance of the SP alts over the SQ, or are there other problems with my syntax that might be causing this?
(ii) If the former, should we consider dropping the SQ alt? In the case of patients this may be reasonable, but it is not at all desirable for nurses, as we would like to identify which elements of current practice (which are contrary to the new policy) are most influential in this choice.

Any suggestions would be greatly appreciated.

Thanks in advance,

Jean.
jmspi
 
Posts: 6
Joined: Thu May 15, 2014 3:31 pm

Re: Dropping the Status Quo after piloting?

Postby Michiel Bliemer » Tue Jan 13, 2015 1:23 pm

Hi Jean,

I do not think there is anything wrong with your syntax, but there are a few issues to consider.

1) There are 20,736 possible choice tasks, of which 15,048 do not contain a dominant alternative. This means that about 27% of the choice tasks contain a dominant alternative. When trying to generate an initial (random) design using the default swapping algorithm, in which you try to find 36 rows WITHOUT ANY dominant alternatives, it is quite unlikely to find one. Decreasing the number of rows to 24 or 18 will make it easier for Ngene to find a design. The modified Federov algorithm may work better, but somehow I could not get it to work; I will look into it.
2) Note that you are generating 3^10 = 59,049 draws for each design. Using a panel model, evaluating such a design takes 59,049,000 draws (since rep = 1000). You may have to reduce the number of Bayesian priors by making some of them fixed.
3) You state that A and B are unlabelled, yet you give them different constants. If the alternatives are really generic, they cannot have different constants. I suggest you move the constant to the 'elsewhere' alternative and remove the constants from A and B. This also saves a coefficient.
4) For the nurse model, there is no reason you cannot estimate a panel mixed logit model. Note that a panel model takes multiple responses from a single respondent into account, something multinomial logit cannot do. Simply make only the constant a random coefficient and leave all other coefficients fixed; then you should be able to estimate a panel mixed logit model, which in this case becomes a panel error component model.
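To put rough numbers on points 1) and 2), here is a small back-of-the-envelope Python sketch (the counts come from above; the swapping algorithm does not draw rows independently, so the probabilities are only indicative):

```python
# Point 1: chance that a single randomly drawn choice task is
# dominance-free, using the counts above (15,048 of 20,736 tasks).
p_ok = 15048 / 20736  # about 0.73

# Approximate chance that ALL rows of a random starting design are
# simultaneously dominance-free, for different design sizes.
for rows in (36, 24, 18):
    print(rows, p_ok ** rows)

# Point 2: draws needed to evaluate one design with 10 Bayesian
# priors at 3 Gaussian abscissae each, and rep = 1000.
bayesian_draws = 3 ** 10       # 59,049
print(bayesian_draws * 1000)   # 59,049,000
```

With 36 rows the all-rows-dominance-free probability is on the order of 1 in 100,000, which is consistent with the 2,000,000 failed attempts in the error message; at 18 rows it is a few in 1,000.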
Michiel Bliemer
 
Posts: 1885
Joined: Tue Mar 31, 2009 4:13 pm

Re: Dropping the Status Quo after piloting?

Postby Andrew Collins » Tue Jan 13, 2015 3:21 pm

Hi Jean

Following on from Michiel's post, the modified Federov algorithm will work, but you will need to set the number of candidates to about 25000.
Code:
;alg=mfederov(candidates=25000)

It will take quite a while for each improved design to be generated, so as Michiel has suggested, you would need to scale back the number of draws dramatically.

Andrew
Andrew Collins
 
Posts: 78
Joined: Sat Mar 28, 2009 4:48 pm

Re: Dropping the Status Quo after piloting?

Postby jmspi » Thu Jan 15, 2015 3:31 pm

Many thanks to you both for your suggestions.

I have incorporated all the different suggestions. (Although Michiel, when I estimate an RPPANEL model for the nurses using the pilot data in Nlogit, the estimate for the status quo constant "blows up", i.e. = 100, I assume because no one chose this option. All other coefficients of the rppanel model are then unchanged compared with an MNL model assuming a forced choice between the two SP alts. For this reason I set the nurse SQ constant prior to 0 for the updated design; I am not sure this is correct, as I suspect the coefficient is very likely negative?)

I have decreased the rows to 12 with 4 blocks (3 choice sets per block), as we can then give only two blocks to the patients (these are very sick patients in hospital) whilst giving 3 blocks to the nurses. I am assuming we would then need (roughly) half the sample size from the s-estimate for the patients and a third for the nurses. Is this reasoning correct?

I was also unsure about which parameters to treat as Bayesian - I assumed fixed priors for those parameters with statistically significant coefficients in the pilot data and Bayesian priors for the rest. Is this the best approach?

This is my updated syntax which is currently running:

Design
;alts(model1) = A*, B*, elsewhere
;alts(model2) = A*, B*, elsewhere
;alts(model3) = A*, B*, elsewhere
;alts(model4) = A*, B*, elsewhere
;rows = 12
;block = 4
;eff = 2*model1(mnl,d,mean) + model2(mnl,d,mean)
;alg=mfederov(candidates=25000)
;rdraws = gauss(3)
;bdraws = gauss(3)
;rep = 1000
;model(model1):
U(A) = b1[0.6]*invite[1,2] + b2[(n,-0.1,0.2)]*number[1,2] + b3[0.3]*family[1,2] + b4.effects[0.5|(n,-0.3,0.2)]*involve[1,2,3]
+ b5[0.4]*content[1,2] + b6.effects[(n,-0.3,0.2)|(n,0.2,0.3)]*confid[1,2,3] /
U(B) = b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = SQ[-2.3] + b1*invitesq[2] + b3*familysq[2]
;model(model2):
U(A) = b1[(n,0.3,0.2)]*invite[1,2] + b2[-0.6]*number[1,2] + b3[0.4]*family[1,2] + b4.effects[0.9|(u,-0.1,0.1)]*involve[1,2,3]
+ b5[1.2]*content[1,2] + b6.effects[(n,-0.1,0.4)|(n,0.7,0.5)]*confid[1,2,3] /
U(B) = b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = SQ[0] + b1*invitesq[2] + b3*familysq[2]
;model(model3):
U(A) = b1[0.6]*invite[1,2] + b2[n,-0.1,0.2]*number[1,2] + b3[0.3]*family[1,2] + b4.effects[0.5|n,-0.3,0.2]*involve[1,2,3]
+ b5[0.4]*content[1,2] + b6.effects[n,-0.3,0.2|n,0.2,0.3]*confid[1,2,3] /
U(B) = b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = SQ[-2.3] + b1*invitesq[2] + b3*familysq[2]
;model(model4):
U(A) = b1[n,0.3,0.2]*invite[1,2] + b2[-0.6]*number[1,2] + b3[0.4]*family[1,2] + b4.effects[0.9|0]*involve[1,2,3]
+ b5[1.2]*content[1,2] + b6.effects[n,-0.1,0.4|n,0.7,0.5]*confid[1,2,3] /
U(B) = b1*invite + b2*number + b3*family + b4*involve + b5*content + b6*confid /
U(elsewhere) = SQ[0] + b1*invitesq[2] + b3*familysq[2] $

I am hoping this is now OK - Ngene has already found 8 valid designs in about 20 minutes, which I figure is a good sign.

Thanks again. This forum is really valuable.

Best regards,

Jean.
jmspi
 
Posts: 6
Joined: Thu May 15, 2014 3:31 pm

Re: Dropping the Status Quo after piloting?

Postby jmspi » Thu Jan 15, 2015 3:44 pm

Sorry, one more question - in early designs there appear to be some obvious signs of attribute imbalance - I expect because of using the modified Federov row-based algorithm.

Is the best solution to wait until Ngene finds a design I am happy with in terms of attribute balance?

Many thanks, Jean.
jmspi
 
Posts: 6
Joined: Thu May 15, 2014 3:31 pm

Re: Dropping the Status Quo after piloting?

Postby Andrew Collins » Thu Jan 15, 2015 4:09 pm

By default, the modified Federov algorithm does not target attribute level balance, and without constraints imposed it is likely to retain whatever degree of imbalance is present as the search continues.

The solution is to place limits on how many times each level can appear. See the "Non-balanced discrete attribute levels" section under the model property in the syntax reference section of the manual (p. 228 in version 1.1.2 of the manual). With, say, 12 rows, you could specify something like
Code:
*involve[1,2,3](3-5,3-5,3-5)

Here each level can appear between 3 and 5 times. Full level balance is unlikely to be achieved; the above settles for near balance. If Ngene struggles to find a design, you may need to relax the constraints, say to 2-6 in the above example.
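If you want to double-check near-balance on a design exported from Ngene, a small hypothetical Python helper (not part of Ngene; the function name and example data are made up for illustration) could count level appearances against the bounds:

```python
from collections import Counter

def within_bounds(column, bounds):
    """Return True if each level's appearance count lies in its allowed range.

    column: list of attribute levels, one entry per design row
    bounds: dict mapping level -> (min_count, max_count)
    """
    counts = Counter(column)
    return all(lo <= counts.get(level, 0) <= hi
               for level, (lo, hi) in bounds.items())

# Example: a 12-row column for a 3-level attribute under the 3-5 constraint.
involve = [1, 2, 3, 1, 2, 3, 1, 2, 3, 2, 3, 1]
print(within_bounds(involve, {1: (3, 5), 2: (3, 5), 3: (3, 5)}))  # True
```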
Andrew Collins
 
Posts: 78
Joined: Sat Mar 28, 2009 4:48 pm

Re: Dropping the Status Quo after piloting?

Postby jmspi » Thu Jan 15, 2015 9:07 pm

Many thanks Andrew. Strangely, after imposing the constraint you suggested on all attributes with a reasonably high degree of balance (for example, 5-7 appearances in 12 rows), this design appears to be performing better than the previous one in terms of d-error, s-estimates and the number of valid designs produced in a short amount of time. Looks promising.

Thanks again,
Jean.
jmspi
 
Posts: 6
Joined: Thu May 15, 2014 3:31 pm

Re: Dropping the Status Quo after piloting?

Postby Michiel Bliemer » Fri Jan 16, 2015 8:12 am

If no one chooses the SQ alternative, then indeed it will not be possible to estimate a constant for that alternative, so the alternative has to be removed from the choice set in estimation. But in your design you clearly need to put in a constant, and setting it to zero while you believe it should be negative (and actually quite a large negative value) may negatively impact your design. So you may consider setting the value not to 0 but to a modest negative number.

Since you are using Gaussian quadrature, I would suggest the following. You can set:

;bdraws = gauss(2,1,3,2,3,3,2,4,3,2)

or something like that. This means that Ngene will take 2 draws for the first Bayesian prior it encounters, 1 draw for the next Bayesian prior (essentially making it fixed), 3 draws for the next, etc. Use 1 or 2 if the standard error is relatively small, and 3 (or even 4) if the standard error is relatively large. The total number of draws will then be 2*1*3*2*3*3*2*4*3*2 = 5,184. By changing some 3's to 2's (or even 1's) you should be able to get the number of draws down further. This way of using Gaussian quadrature was proposed in: Bliemer, Rose and Hess (2008) Approximation of Bayesian efficiency in experimental choice designs. Journal of Choice Modelling, Vol. 1, No. 1, pp. 98-126.
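As a quick arithmetic check (plain Python; the per-prior draw counts are just the illustrative ones from the bdraws example above):

```python
import math

# Per-prior Gaussian quadrature draw counts, in the order Ngene
# encounters the Bayesian priors (illustrative values from above).
draws = (2, 1, 3, 2, 3, 3, 2, 4, 3, 2)

total = math.prod(draws)
print(total)  # 5184 evaluations, versus 3**10 = 59049 with gauss(3) throughout
```

That is roughly an elevenfold reduction in draws per design evaluation.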
Michiel Bliemer
 
Posts: 1885
Joined: Tue Mar 31, 2009 4:13 pm

