How to compare similar design?

This forum is for posts that specifically focus on Ngene.

Moderators: Andrew Collins, Michiel Bliemer, johnr

How to compare similar design?

Postby Yang Wang » Fri Jan 28, 2022 7:52 pm

Hi, I would like to ask how to compare two similar designs.
Since we had a pilot survey, I used the priors obtained from it to create a Bayesian design with randomly distributed parameters.
The random parameters could be travel time, cost, and the ASC of car, or a combination of two or three of them, so I generated several designs with different combinations of random parameters.
However, the results of those designs seem very similar to me in terms of the D-error values and the choice probabilities of the four alternatives.
So I am wondering whether there are any criteria that can be applied to find the best design?

?Bayesian Design travel time, cost and car asc
;alts = train*, bus*, car*, plane*
;rows = 40
;block = 8
;eff = (mnl,d, mean)
;alg = mfederov(candidates=9000)
;require:
bus.costb <train.costt
;model:
U(train)=b1[0.331067]+a1[(n, -0.171902,0.15)]*timet[3.25,3.75,4,4.25,4.75](7-9,7-9,7-9,7-9,7-9)+a2[(n,-0.005966,0.003)]*costt[30,40,50,60,70](7-9,7-9,7-9,7-9,7-9)+a3[-0.085255]*headwayt[1,2,3,4]+a4[-0.4]*waitt[0.083,0.16,0.25,0.33]/
U(bus)= b2[-0.219142] + a1*timeb[8,8.25,8.5,8.75,9]+a2*costb[15,20,25,30,35]+a3*headwayb[2,4,6,8]+a4*waitt[0.083,0.16,0.25,0.33]/
U(car)= b3[(n,-0.3015453,0.9)]+a1*timec[6,6.5,7,7.5]+a2*costt[30,40,50,60,70](7-9,7-9,7-9,7-9,7-9)/
U(plane)= a1*timep[1,1.25,1.5, 1.75, 2](1-9,1-9,1-9,1-9,1-9)+a2*costp[70,90,110,130,150]+a3*headwayp[4,8,12,24]+a4*waittp[0.5,0.75,1,1.25]$

Best,
Yang
Yang Wang
 
Posts: 3
Joined: Thu Jan 27, 2022 12:55 am

Re: How to compare similar design?

Postby Michiel Bliemer » Fri Jan 28, 2022 8:39 pm

Note that you can only compare the D-errors of designs that use the exact same priors; D-errors computed under different priors are not comparable.
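To make concrete what is being compared: for an MNL model the D-error is the determinant of the asymptotic variance-covariance matrix (the inverse Fisher information), normalised by the number of parameters. A minimal Python sketch with a toy two-alternative design and hypothetical prior values (not output from the design above) shows why the same design gets different D-errors under different priors:

```python
import numpy as np

def mnl_d_error(X, beta):
    """D-error of a design under an MNL model with fixed priors beta.
    X: (S, J, K) array of S choice situations, J alternatives, K attributes."""
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        v = X[s] @ beta                      # utilities of the J alternatives
        p = np.exp(v - v.max())
        p /= p.sum()                         # MNL choice probabilities
        # Fisher information contribution: X' (diag(p) - p p') X
        info += X[s].T @ (np.diag(p) - np.outer(p, p)) @ X[s]
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

# toy design: 4 choice situations, 2 alternatives, 2 attributes (time, cost)
X = np.array([
    [[1.0, 30.0], [2.0, 20.0]],
    [[3.0, 10.0], [1.0, 40.0]],
    [[2.0, 50.0], [3.0, 30.0]],
    [[1.0, 20.0], [3.0, 50.0]],
])

# the same design scores differently under different priors, so D-errors
# are only comparable between designs evaluated with identical priors
d1 = mnl_d_error(X, np.array([-0.17, -0.006]))
d2 = mnl_d_error(X, np.array([-0.30, -0.010]))
print(d1, d2)
```

Ngene computes this (and the Bayesian average of it) internally; the sketch is only meant to show that the D-error is a function of both the design and the assumed priors.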

Your syntax will generate a Bayesian efficient design. Which design would you like to compare it with? You can use the alg = eval command to evaluate an existing design.

A few comments:

1. car does not have the same attributes, so you should not include car in the dominance checks. In the syntax below I removed the asterisk from car and made the syntax a bit more readable.

2. It is highly unusual to use the same generic parameter for travel time in train, bus, car, and plane. Clearly the experience of travel time in different modes is entirely different (in the train you can read a book, in the car you need to drive, the time in the bus is less comfortable than in the train, etc), so it is common to use different coefficients for different modes of transport.

3. A candidate set of 9000 is quite large and therefore the algorithm will be slow. You could consider a smaller candidate set, e.g. 5000.

4. Your attribute level balance constraints (7-9 etc.) are quite narrow; Ngene will likely struggle to find a feasible design that satisfies all of them.

Code: Select all
?Bayesian Design travel time, cost and car asc
design
;alts = train*, bus*, car, plane*
;rows = 40
;block = 8
;eff = (mnl,d, mean)
;bdraws = gauss(3)
;alg = mfederov(candidates=5000)
;require:
bus.costb <train.costt
;model:
U(train) = b1[0.331067]
         + a1[(n, -0.171902,0.15)] * timet[3.25,3.75,4,4.25,4.75](6-10,6-10,6-10,6-10,6-10)
         + a2[(n,-0.005966,0.003)] * costt[30,40,50,60,70](6-10,6-10,6-10,6-10,6-10)
         + a3[-0.085255]           * headwayt[1,2,3,4]
         + a4[-0.4]                * waitt[0.083,0.16,0.25,0.33]
         /
U(bus)   = b2[-0.219142]
         + a1                      * timeb[8,8.25,8.5,8.75,9]
         + a2                      * costb[15,20,25,30,35]
         + a3                      * headwayb[2,4,6,8]
         + a4                      * waitt[0.083,0.16,0.25,0.33]
         /
U(car)   = b3[(n,-0.3015453,0.9)]
         + a1                      * timec[6,6.5,7,7.5]
         + a2                      * costt[30,40,50,60,70](6-10,6-10,6-10,6-10,6-10)
         /
U(plane) = a1                      * timep[1,1.25,1.5,1.75,2](1-9,1-9,1-9,1-9,1-9)
         + a2                      * costp[70,90,110,130,150]
         + a3                      * headwayp[4,8,12,24]
         + a4                      * waittp[0.5,0.75,1,1.25]
$


Michiel
Michiel Bliemer
 
Posts: 1730
Joined: Tue Mar 31, 2009 4:13 pm

Re: How to compare similar design?

Postby Yang Wang » Fri Jan 28, 2022 10:09 pm

Thanks a lot for the correction.

I would like to compare two designs with the same attributes and levels. For example, one design with travel time as a randomly distributed parameter, a1[(n, -0.171902,0.15)], and another one with travel time with the fixed prior a1[-0.171902].

The results of the two designs seem very similar regarding the D-error and probabilities. What else should I check to know which design is better?

Best,
Yang

Re: How to compare similar design?

Postby Michiel Bliemer » Fri Jan 28, 2022 10:26 pm

These designs have a similar D-error since the models are identical (both MNL) and the mean prior value is also identical.

Note that a1[(n, -0.171902,0.15)] is NOT a random parameter, but rather a Bayesian prior.
A random parameter refers to a mixed logit model, whereas a random (Bayesian) prior expresses the analyst's uncertainty about the true value of the parameter.

You can specify a random parameter in Ngene as a1[n,mean,stdev], without any round brackets, but then you need to switch to the rppanel model, and given the huge computation time it takes to optimise for such a model, this is practically not feasible in your case.
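The distinction can be made concrete: with a Bayesian prior, the D-error itself is averaged over draws from the prior distribution. A minimal Python sketch with a toy two-alternative design and hypothetical priors (note that Ngene's bdraws = gauss(3) uses Gaussian quadrature, not the plain Monte Carlo draws used here):

```python
import numpy as np

def mnl_d_error(X, beta):
    """D-error of a design under an MNL model for one fixed parameter vector."""
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        v = X[s] @ beta
        p = np.exp(v - v.max())
        p /= p.sum()
        info += X[s].T @ (np.diag(p) - np.outer(p, p)) @ X[s]
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

def bayesian_d_error(X, mean, sd, n_draws=2000, seed=42):
    """Bayesian (mean) D-error: average D-error over normal draws of the priors."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mean, sd, size=(n_draws, len(mean)))
    return float(np.mean([mnl_d_error(X, b) for b in draws]))

# toy design: 4 choice situations, 2 alternatives, 2 attributes (time, cost)
X = np.array([
    [[1.0, 30.0], [2.0, 20.0]],
    [[3.0, 10.0], [1.0, 40.0]],
    [[2.0, 50.0], [3.0, 30.0]],
    [[1.0, 20.0], [3.0, 50.0]],
])

# fixed prior: a single evaluation; Bayesian prior: an average over uncertainty
d_fixed = mnl_d_error(X, np.array([-0.171902, -0.005966]))
d_bayes = bayesian_d_error(X, mean=[-0.171902, -0.005966], sd=[0.15, 0.003])
print(d_fixed, d_bayes)
```

The optimisation target differs accordingly: a locally efficient design minimises the single evaluation, while a Bayesian efficient design minimises the average.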

Michiel

Re: How to compare similar design?

Postby Yang Wang » Fri Jan 28, 2022 10:55 pm

Thank you for the kind explanation.

Sorry for the confusion. My intention is to compare two designs: one with a Bayesian prior, and another one with a fixed prior.

As you mentioned, since the two designs are both MNL and the mean prior is identical, the D-errors are similar.

So how should I know which design is better? Are there quantitative criteria to compare the two designs?

Best,
Yang

Re: How to compare similar design?

Postby Michiel Bliemer » Sat Jan 29, 2022 7:49 am

A (locally efficient) design optimised with fixed priors can lose a lot of efficiency if the actual parameters deviate from the assumed priors.
A (Bayesian efficient) design optimised with Bayesian priors is more robust against prior misspecification and will lose less efficiency when the actual parameters deviate from the priors.

I would always prefer a Bayesian efficient design over a locally efficient design because it is more robust. Usually the mean D-error of a Bayesian efficient design is slightly larger than that of a locally efficient design, but if they are similar in your case then it is an easy choice: the Bayesian efficient design would be preferable.
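The robustness argument can be checked numerically: evaluate a candidate design's D-error at the assumed priors and again at deviating "true" parameter values, and see how much it degrades. A toy Python sketch with a hypothetical design and made-up parameter values (not results from the design in this thread):

```python
import numpy as np

def mnl_d_error(X, beta):
    """D-error of a design under an MNL model for one parameter vector."""
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        v = X[s] @ beta
        p = np.exp(v - v.max())
        p /= p.sum()
        info += X[s].T @ (np.diag(p) - np.outer(p, p)) @ X[s]
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

# toy design: 4 choice situations, 2 alternatives, 2 attributes (time, cost)
X = np.array([
    [[1.0, 30.0], [2.0, 20.0]],
    [[3.0, 10.0], [1.0, 40.0]],
    [[2.0, 50.0], [3.0, 30.0]],
    [[1.0, 20.0], [3.0, 50.0]],
])

prior = np.array([-0.17, -0.006])   # priors assumed when optimising
true = np.array([-0.34, -0.012])    # hypothetical actual parameters

d_at_prior = mnl_d_error(X, prior)
d_at_true = mnl_d_error(X, true)
print(f"D-error at prior: {d_at_prior:.4f}, at true values: {d_at_true:.4f}")
```

A design whose D-error stays low across a range of plausible parameter values is the robust choice, which is what optimising against Bayesian priors achieves by construction.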

Michiel

