Pilot study for unlabeled design


Postby huiyee917 » Mon Jul 11, 2022 12:32 pm

Dear all, I'm trying to generate the syntax for my pilot study. I would really appreciate it if you could check whether I have the syntax correct. When I ran it in Ngene, it reported a very large sample size. What could have gone wrong? Thank you very much.

Two alternatives: Treatment A and Treatment B.
Six attributes:
OS = Survival
WT = Waiting time
QOL = Quality of life (1 = best, 2 = moderate, 3 = worst)
HL = Hair loss
PN = Peripheral neuropathy
OOP = Out-of-pocket cost

Design
;alts = TxA*, TxB*
;rows = 18
;block = 2
;eff = (mnl,d)
;alg = mfederov (candidates = 1000)
;model:
U(TxA) = b0[0] + b1[0.005] * OS[24, 36, 48] + b2[-0.001] * WT[30, 75, 120] + b3.dummy[0.002|0.004] * QOL[2, 1, 3] + b4[-0.001] * HL[25, 60, 95] + b5[-0.001] * PN[25, 45, 65] + b6[-0.002] * OOP[50, 150, 250] /
U(TxB) = b1 * OS + b2 * WT + b3 * QOL + b4 * HL + b5 * PN + b6 * OOP
$

Re: Pilot study for unlabeled design

Postby Michiel Bliemer » Mon Jul 11, 2022 2:22 pm

Your syntax looks fine, but your priors are probably not accurate. For example, your priors of 0.002 and 0.004 for b3 are extremely small; you are essentially indicating that QOL is not important at all to decision-makers, and therefore you would need a very large sample size to pick up this effect. For a pilot study you would generally use (near-)zero priors and ignore the sample size estimates (since they are meaningless), then estimate parameters on the pilot data and use those estimates as priors for your main study. For the pilot study itself, you simply use about 10% of the total sample size that you have budgeted.

For a pilot study I would typically use very small positive/negative priors that indicate only the preference order of the attribute levels (to avoid dominant alternatives). I would not use the modified Fedorov algorithm because it does not guarantee attribute level balance, and for the numerical attributes the middle level would likely not appear much in the design.

Code:
Design
;alts = TxA*, TxB*
;rows = 18
;block = 2
;eff = (mnl,d)
;model:
U(TxA) = b1[0.0001]              * OS[24,36,48]
       + b2[-0.0001]             * WT[30, 75, 120]
       + b3.dummy[0.0001|0.0002] * QOL[2, 1, 3]
       + b4[-0.0001]             * HL[25, 60, 95]
       + b5[-0.001]              * PN[25, 45, 65]
       + b6[-0.002]              * OOP[50, 150, 250]
       /
U(TxB) = b1 * OS + b2 * WT + b3 * QOL + b4 * HL + b5 * PN + b6 * OOP
$


Michiel

Re: Pilot study for unlabeled design

Postby huiyee917 » Tue Jul 12, 2022 12:49 pm

Thank you very much, Michiel, for your prompt reply. I have some follow-up questions:
1. Does the D-error matter when generating a pilot design?
2. I read about dummy-coding all the variables (including the numerical ones) to generate the pilot design. What are your thoughts on this?
3. Below are two designs: Design 1 dummy-codes only the categorical variable, while Design 2 dummy-codes all the variables (categorical + numerical). Could you check whether they are correct?
4. Design 1 has a lower D-error than Design 2. Does that mean Design 1 is the better design?

Thanks again for your help.

Hui Yee

Design 1
Design
;alts = TxA*, TxB*
;rows = 18
;block = 2
;eff = (mnl,d)
;model:
U(TxA) = b0[0] + b1[0.05] * OS[24, 36, 48] + b2[-0.01] * WT[30, 75, 120] + b3.dummy[0.02|0.04] * QOL[2, 1, 3] + b4[-0.01] * HL[25, 60, 95] + b5[-0.01] * PN[25, 45, 65] + b6[-0.02] * OOP[50, 150, 250] /
U(TxB) = b1 * OS + b2 * WT + b3 * QOL + b4 * HL + b5 * PN + b6 * OOP
$


Design 2
Design
;alts = TxA*, TxB*
;rows = 18
;block = 2
;eff = (mnl,d)
;model:
U(TxA) = b0[0] + b1.dummy[0.03|0.05] * OS[36, 48, 24] + b2.dummy[-0.01|-0.02] * WT[75, 120, 30] + b3.dummy[0.02|0.04] * QOL[2, 1, 3] + b4.dummy[-0.01|-0.02] * HL[60, 95, 25] + b5.dummy[-0.01|-0.02] * PN[45, 65, 25] + b6.dummy[-0.01|-0.02] * OOP[150, 250, 50] /
U(TxB) = b1 * OS + b2 * WT + b3 * QOL + b4 * HL + b5 * PN + b6 * OOP
$

Re: Pilot study for unlabeled design

Postby Michiel Bliemer » Tue Jul 12, 2022 5:03 pm

1. The D-error is case-specific, so you cannot say whether a value of, say, 0.1 is good or bad. Within a given case, lower is better, but you cannot compare D-errors across studies.
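For intuition on why a D-error is only meaningful within one model specification, here is a sketch in Python of the standard MNL D-error formula (the determinant of the asymptotic variance-covariance matrix, scaled by the number of parameters). This is my own illustrative implementation, not Ngene's internal code, and the toy design at the end is made up:

```python
import numpy as np

def mnl_d_error(X, beta):
    """D-error of an MNL design under prior parameters beta.

    X    : (S, J, K) array -- S choice tasks, J alternatives, K attributes
    beta : prior parameter vector of length K
    """
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        v = X[s] @ beta                      # utilities of the J alternatives
        p = np.exp(v) / np.exp(v).sum()      # MNL choice probabilities
        xbar = p @ X[s]                      # probability-weighted mean attributes
        dev = X[s] - xbar
        info += (dev * p[:, None]).T @ dev   # Fisher information from task s
    # D-error = det(inverse information)^(1/K); lower is better for a GIVEN K,
    # but the value is not comparable across models with different K
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

# Toy design: 2 tasks, 2 alternatives, 2 attributes, zero priors
toy = np.array([[[1., 0.], [0., 1.]],
                [[1., 1.], [0., 0.]]])
d_err = mnl_d_error(toy, np.zeros(2))
```

The `1/K` exponent normalizes for dimensionality, but because the information matrix itself depends on which parameters the utility function contains, the resulting number is still only comparable between designs that share the same specification.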

2. Dummy coding all attributes in a pilot study with (near-)zero priors is fine; I sometimes do that as well. It creates more parameters and therefore requires a larger design, but it also adds more variation in the data, which can be useful.
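For clarity, dummy-coding a numerical attribute simply means treating each level as its own category, with one indicator column per non-base level. A minimal illustrative helper (my own sketch, using the WT levels from the designs above):

```python
def dummy_code(values, base):
    """Indicator columns for each non-base level (illustrative helper)."""
    levels = sorted(set(values))
    nonbase = [l for l in levels if l != base]
    return [[1 if v == l else 0 for l in nonbase] for v in values]

# WT levels 30/75/120 with 30 as the base level; each row holds
# indicators for levels 75 and 120
coded = dummy_code([30, 75, 120, 75], base=30)
```

Each extra level beyond two therefore adds one parameter to estimate, which is why the fully dummy-coded Design 2 has a much larger parameter count than Design 1.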

3. They look fine, but of course I cannot be responsible for any errors. I do notice that your priors are inconsistent across the two scripts; for example, b1[0.05] * OS[24,36,48] would be consistent with b1.dummy[0.6|1.2] * OS[36,48,24], but your dummy priors are very different.
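The conversion behind that consistency check is simple arithmetic: with a linear prior b, the equivalent dummy prior for each non-base level is b times that level's distance from the base level. A quick illustrative check in Python (the helper name is mine, not Ngene syntax):

```python
def linear_to_dummy_priors(b, levels, base):
    """Dummy priors implied by a linear prior b (levels exclude the base)."""
    return [b * (lvl - base) for lvl in levels]

# b1[0.05] * OS[24,36,48] with base level 24 -> priors for levels 36 and 48,
# i.e. approximately [0.6, 1.2]
priors = linear_to_dummy_priors(0.05, [36, 48], base=24)
```
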

4. No, you cannot compare D-errors when the utility functions differ; in your case, the number of parameters is entirely different across the two models.

Michiel

Re: Pilot study for unlabeled design

Postby huiyee917 » Wed Jul 13, 2022 11:11 am

Thank you very much for your explanation, Michiel. Really appreciate it.

Best,
Hui Yee

