Ngene code using constraints
Posted: Mon Jun 24, 2024 6:23 pm
Dear reader,
For an international research project on colorectal cancer, I am planning a DCE study. My team and I are researching preferences for the use of AI in deciding on treatment options for patients. We have established the following attributes and levels:
AI involvement in selecting the treatment option:
1. Oncologists identify treatment options (No AI involvement)
2. Oncologists identify treatment options based on AI suggestions
3. Only AI identifies treatment options
Time Until Cancer Grows (Progression-free survival):
1. 3 months
2. 9 months
3. 12 months
Certainty of experienced severe side effects:
1. Yes, there is high certainty about the expected severe side effects from the treatment options.
2. No, there is no certainty about the expected severe side effects from the treatment options.
Risk of undertreatment:
(Restriction in design: should not be higher when AI is involved than when it is not)
1. High risk – 20%
2. Moderate risk – 10%
3. Low risk – 1%
Risk of overtreatment:
(Restriction in design: should not be higher when AI is involved than when it is not)
1. High risk – 40%
2. Moderate risk – 20%
3. Low risk – 10%
Time until treatment decision is made:
1. 1 day
2. 1 week
3. 2 weeks
The levels are constructed in Sawtooth in the same order as displayed here. I wrote the code below to implement in Sawtooth. We want both Risk of undertreatment and Risk of overtreatment to be restricted in the design: in the choice tasks, when the AI attribute states that AI is not involved (level 1), the two risk attributes should always take a higher value (e.g. level 1: 20% and 40%) than when AI is involved (levels 2 and 3).
In addition, based on our expectations, I have added negative priors, as we expect a higher risk of under- and overtreatment and certainty of side effects to have negative effects on utility.
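To make the intended restriction explicit, here is a small Python sketch (purely illustrative, not Ngene syntax; the level-to-percentage mapping follows the attribute lists above) of the rule we want every choice task in the design to satisfy:

# Illustrative check of the intended restriction (not Ngene syntax).
# Level-to-risk mapping as in the attribute lists above.
UNDER_RISK = {1: 20, 2: 10, 3: 1}    # Risk of undertreatment, in %
OVER_RISK = {1: 40, 2: 20, 3: 10}    # Risk of overtreatment, in %

def task_is_valid(alt_a, alt_b):
    """Each alternative is a dict of level codes: 'ai', 'under', 'over'.
    An alternative with AI involvement (levels 2 or 3) must never show a
    higher risk than an alternative without AI (level 1) in the same task."""
    for ai_alt, no_ai_alt in ((alt_a, alt_b), (alt_b, alt_a)):
        if ai_alt["ai"] in (2, 3) and no_ai_alt["ai"] == 1:
            if UNDER_RISK[ai_alt["under"]] > UNDER_RISK[no_ai_alt["under"]]:
                return False
            if OVER_RISK[ai_alt["over"]] > OVER_RISK[no_ai_alt["over"]]:
                return False
    return True

For example, task_is_valid({'ai': 2, 'under': 1, 'over': 1}, {'ai': 1, 'under': 2, 'over': 2}) returns False, because the AI-involved alternative carries 20%/40% risks against 10%/20% for the oncologist-only alternative.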
Could you please check whether our Ngene code below is correctly formulated, especially the constraints, so that we can import the design into Sawtooth?
design
; alts = altA*, altB*
; eff = (mnl, d, mean)
; bdraws = halton(1000)
; rows = 36
; block = 3
; alg = mfederov
; cond:
if(altA.AIinvolvement = [2,3], altA.RiskUndertreatment >= altB.RiskUndertreatment),
if(altA.AIinvolvement = [2,3], altA.RiskOvertreatment >= altB.RiskOvertreatment),
if(altB.AIinvolvement = [2,3], altB.RiskUndertreatment >= altA.RiskUndertreatment),
if(altB.AIinvolvement = [2,3], altB.RiskOvertreatment >= altA.RiskOvertreatment)
; model:
U(altA) = b1.dummy[(u, -0.1, -0.05)|(u, -0.15, -0.1)] * timecancergrows[2, 3, 1]
+ b2.dummy[0|0] * AIinvolvement[2, 3, 1]
+ b3.dummy[(u, -0.15, -0.1)] * CertaintySideEffects[1, 2]
+ b4.dummy[(u, -0.2, -0.1)|(u, -0.3, -0.2)] * RiskUndertreatment[2, 3, 1]
+ b5.dummy[(u, -0.1, -0.05)|(u, -0.15, -0.1)] * RiskOvertreatment[2, 3, 1]
+ b6.dummy[0|0] * TimeUntilDecision[1, 2, 3]
/
U(altB) = b1.dummy * timecancergrows
+ b2.dummy * AIinvolvement
+ b3.dummy * CertaintySideEffects
+ b4.dummy * RiskUndertreatment
+ b5.dummy * RiskOvertreatment
+ b6.dummy * TimeUntilDecision
$
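As a sanity check before importing into Sawtooth, a rough Python sketch like the one below could scan the design exported from Ngene and flag any choice tasks that violate the restriction (the file name and column names here are hypothetical and would need to match the actual export):

import csv

# Level-to-risk mapping as in the attribute lists above.
UNDER_RISK = {1: 20, 2: 10, 3: 1}
OVER_RISK = {1: 40, 2: 20, 3: 10}

def violates(row):
    """Return True if an AI-involved alternative shows a higher risk than a
    no-AI alternative within the same choice task (one row of the design)."""
    alts = []
    for alt in ("alta", "altb"):  # hypothetical column name prefixes
        alts.append({
            "ai": int(row[f"{alt}.aiinvolvement"]),
            "under": int(row[f"{alt}.riskundertreatment"]),
            "over": int(row[f"{alt}.riskovertreatment"]),
        })
    for ai_alt, no_ai_alt in ((alts[0], alts[1]), (alts[1], alts[0])):
        if ai_alt["ai"] in (2, 3) and no_ai_alt["ai"] == 1:
            if (UNDER_RISK[ai_alt["under"]] > UNDER_RISK[no_ai_alt["under"]]
                    or OVER_RISK[ai_alt["over"]] > OVER_RISK[no_ai_alt["over"]]):
                return True
    return False

with open("design_export.csv", newline="") as f:  # hypothetical file name
    for i, row in enumerate(csv.DictReader(f), start=1):
        if violates(row):
            print(f"Choice task {i} violates the risk restriction")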
Kind regards,
Suzanne