Problem(?) with Partial Profile Design Search
Posted: Thu Jan 16, 2025 4:06 pm
Hi All
I am generating a partial profile design using a constructed candidate set and the modified Fedorov algorithm, but the design search results seem quite odd – it looks like the search should be improving faster than it is.
Here’s what I get just running the syntax below in terms of design search in the output box:
Evaluation Time MNL D-Error
1 2:12:32 PM, 1/16/2025 0.086856
4382 3:14:14 PM, 1/16/2025 0.086523
8763 3:51:40 PM, 1/16/2025 0.086307
Curiously, it took 4,381 evaluations to find the first design that improved the D-error, and then a further 4,381 before the next improvement. This isn't just a fluke: I've run similar syntax with a slightly different candidate set and got the same evaluation sequencing, i.e. the second improving design appeared at exactly double the number of evaluations of the first.
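For anyone unfamiliar with what an "evaluation" is counting here: Ngene's actual implementation isn't public, but a modified Fedorov search is usually described as swapping each design row against every row of the candidate set and keeping a swap only if it lowers the D-error, with each trial design counting as one evaluation. A rough, hedged sketch (all names here are hypothetical, and `d_error` is passed in as a black box):

```python
import numpy as np

def modified_fedorov(candidates, n_rows, d_error, n_sweeps=2, rng=None):
    """Hypothetical sketch of a modified Fedorov row-swap search.

    candidates : (C, ...) array of candidate choice-set rows
    n_rows     : number of rows in the design
    d_error    : callable mapping a design (n_rows, ...) to its D-error
    """
    rng = np.random.default_rng(rng)
    # Start from a random selection of candidate rows.
    design = candidates[rng.choice(len(candidates), n_rows, replace=False)]
    best = d_error(design)
    for _ in range(n_sweeps):
        for i in range(n_rows):            # try swapping each design row...
            for cand in candidates:        # ...against every candidate row
                trial = design.copy()
                trial[i] = cand
                err = d_error(trial)       # one "evaluation" per trial design
                if err < best:             # keep the swap only if it improves
                    design, best = trial, err
    return design, best
```

Under that reading, one full sweep is (rows × candidates) evaluations, which is why improvement counts can land at suspiciously regular multiples: a swap that helps may only be found once per sweep.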
I then ran a series where I re-ran the syntax but stopped after the first evaluation, so each run presumably just gives another random draw from the candidate set.
These are the D-errors I got from five such runs:
0.086052
0.087535
0.086037
0.086264
0.086264
So 4 of the 5 (presumably random) starting designs had a better D-error than the best design found after 8,763 evaluations of the modified Fedorov algorithm.
This seems really odd – or my idea of what is going on is way off-track (which is quite possible).
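For reference, here is a minimal sketch of what I understand each evaluation to be computing, assuming the standard MNL D-error definition det(AVC)^(1/K), where the AVC matrix is the inverse of the Fisher information evaluated at the priors (this is my own illustration, not Ngene's code):

```python
import numpy as np

def mnl_d_error(X, beta):
    """MNL D-error of a design under prior parameters beta.

    X    : (S, J, K) array - S choice sets, J alternatives, K coded attributes
    beta : (K,) prior parameter vector
    """
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        v = X[s] @ beta                    # utilities (J,)
        p = np.exp(v) / np.exp(v).sum()    # MNL choice probabilities
        xbar = p @ X[s]                    # probability-weighted mean attributes
        Z = X[s] - xbar                    # centred attribute levels
        info += Z.T @ (p[:, None] * Z)     # Fisher information contribution
    cov = np.linalg.inv(info)              # asymptotic variance-covariance (AVC)
    return np.linalg.det(cov) ** (1.0 / K)
```

Comparing D-errors across single-evaluation runs and across the full search should then be apples-to-apples, since every number is this same determinant-based quantity on a different candidate design.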
Here’s the syntax:
Design
;alts = Option1*, Option2*, Option3*
;rows = 200
;block = 20
;eff = (mnl,d)
;alg = mfederov(candidates = Candidate_Student_4.csv)
;model:
U(Option1) = b1.dummy[-0.1 | -0.2 | -0.3 | -0.4] * Price[1,2,3,4,5] +
b2.dummy[0.025 | 0.05 ] * GPVisit[1,2,3] +
b3.dummy[0.01 | 0.04 | 0.07] * NonGPVisit[1,2,3,4] +
b4.dummy[0.01 ] * Ambulance[1,2] +
b5.dummy[0.01 | 0.02 | 0.03 ] * TravelIns[1,2,3,4] +
b6.dummy[0.01 | 0.02 ] * DentalEmerg[1,2,3] +
b7.dummy[0.01 | 0.07 | 0.13 | 0.2] * DentalPrevent[1,2,3,4,5] +
b8.dummy[0.01 ] * Optical[1,2] +
b9.dummy[0.01 | 0.02 | 0.03 ] * Physio[1,2,3,4] +
b10.dummy[0.01 | 0.02 | 0.02 | 0.03 ] * VideoDoctor[1,2,3,4,5] +
b11.dummy[0.01 | 0.02 | 0.07 ] * Vaccine[1,2,3,4] +
b12.dummy[0.01 ] * PharmacyDemand[1,2] +
b13.dummy[0.01 | 0.02 ] * H&W[1,2,3] +
b14.dummy[0.01 | 0.02 ] * PartnerDiscount[1,2,3] +
b15.dummy[0.01 ] * Accomodation[1,2] +
b16.dummy[0.01 | 0.02 ] * Service[1,2,3] +
b17.dummy[0.01 ] * Repatriation[1,2] /
U(Option2) = b1 * Price +
b2 * GPVisit +
b3 * NonGPVisit +
b4 * Ambulance +
b5 * TravelIns +
b6 * DentalEmerg +
b7 * DentalPrevent +
b8 * Optical +
b9 * Physio +
b10 * VideoDoctor +
b11 * Vaccine +
b12 * PharmacyDemand +
b13 * H&W +
b14 * PartnerDiscount +
b15 * Accomodation +
b16 * Service +
b17 * Repatriation /
U(Option3) = b1 * Price +
b2 * GPVisit +
b3 * NonGPVisit +
b4 * Ambulance +
b5 * TravelIns +
b6 * DentalEmerg +
b7 * DentalPrevent +
b8 * Optical +
b9 * Physio +
b10 * VideoDoctor +
b11 * Vaccine +
b12 * PharmacyDemand +
b13 * H&W +
b14 * PartnerDiscount +
b15 * Accomodation +
b16 * Service +
b17 * Repatriation
$
Cheers
Roger