Efficient design with SQ alt. having dummy variables

This forum is for posts that specifically focus on Ngene.

Moderators: Andrew Collins, Michiel Bliemer, johnr

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Mon Jul 25, 2022 7:12 pm

Sure, I understand that orthogonal designs ignore priors. (I do not think I am stating the question clearly, as this puzzles me a bit; I am sorry, I thought my dilemma made sense :| )

I was just unsure whether I can use either design codes for my attribute levels (linear coding for continuous attributes and effects coding for categorical attributes) or the original attribute levels (estimation values) in my orthogonal design (for the pilot), and whether this makes any difference to the design.

The thing is that my pilot survey (orthogonal) has already been prepared and is being put online, and I have a little time left to correct it if using design codes was a mistake. My main study will still be based on an efficient design with priors from the pilot, using the actual attribute-level values (the code I started this thread with).

Thank you!
AnzeJap
 
Posts: 20
Joined: Tue Jun 21, 2022 5:26 pm

Re: Efficient design with SQ alt. having dummy variables

Postby Michiel Bliemer » Mon Jul 25, 2022 8:00 pm

Using an orthogonal array with design coding is fine. The design would not have looked different if you had used estimation values; that is what I was saying. An orthogonal array is an orthogonal array; it knows nothing about your choice model or your priors. Of course, in the survey instrument you replace the design codes 0,1,2,... with something that is meaningful to the respondent. That process is called relabelling. After relabelling, the design is still orthogonal.
Michiel Bliemer
 
Posts: 1705
Joined: Tue Mar 31, 2009 4:13 pm

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Mon Jul 25, 2022 8:04 pm

Dear Michiel,

Clear now. Thank you very much for your prompt reply and kind assistance.

Anže

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Thu Aug 18, 2022 3:04 pm

Dear Michiel,

just to let you know, we have now completed the pilot study (n=56) that I was asking you about (also in this thread). I was wondering whether there is any 'benchmark' value for the t-ratios of the betas (or the mean/se ratio) beyond which you would consider a prior simply too unreliable to use as input for constructing a Bayesian efficient design. We have two parameters with a mean/se ratio close to 0.5, and one of 0.2. The latter worries me more, as it is also the monetary attribute (important for estimating WTA in the final stage, hopefully). The estimate on the monetary attribute is also negative (-0.00006), whereas we expected it to be positive, since it is defined as compensation to forest owners for providing extra ecosystem services. The S-error is relatively large (approx. 800), I guess due to this uncertainty (the B-error seems OK, approx. 50).
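As a quick sketch, the mean/se ratio mentioned above is simply the estimate divided by its standard error; the numbers below are made-up stand-ins, not the actual pilot output:

```python
# Hypothetical illustration: mean/se (t-)ratios from pilot estimates.
# The standard errors here are assumed values, not the pilot results.
betas = {"b2": -0.0132, "b6": -0.000060358}
ses = {"b2": 0.0264, "b6": 0.00030}

ratios = {name: betas[name] / ses[name] for name in betas}
for name, t in ratios.items():
    print(f"{name}: mean/se = {t:.2f}")
```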

I was also reading the post titled "Understanding S-sample output" (http://www.choice-metrics.com/forum/vie ... ?f=2&t=969), where similar issues were discussed, but I did not see any conclusions on t-ratio (or mean/se) values (if there are any, of course).

Just for your information, I have copied the code with the priors from the pilot study below.

Thank you!

Design
;alts=alt1, alt2, sq
;rows=18
;block=2
;eff=(mnl,d)
;alg = mfederov
;require:
sq.atr1 = 7,
sq.atr2 = 0.4,
sq.atr3 = 0,
sq.atr4 = 0.3,
sq.atr5 = 0
;model:
U(alt1) = b1[-0.00385] * atr1[7,20,34]
+ b2[-0.01320] * atr2[0.4,5,15]
+ b3.effects[-0.08044|0.23305] * atr3[1,2,0]
+ b4[0.00538] * atr4[0.3,5,20]
+ b5.effects[0.33685|0.19576] * atr5[1,2,0]
+ b6[-0.000060358] * atr6[150,300,450,600,750,900]
/
U(alt2) = b1 * atr1
+ b2 * atr2
+ b3 * atr3
+ b4 * atr4
+ b5 * atr5
+ b6 * atr6
/
U(sq) = b0[-0.93745]
+ b1 * atr1
+ b2 * atr2
+ b3 * atr3
+ b4 * atr4
+ b5 * atr5
+ b6 * atr6_sq[0]
$

Re: Efficient design with SQ alt. having dummy variables

Postby Michiel Bliemer » Thu Aug 18, 2022 3:50 pm

There is no rule on what is considered a sufficient t-ratio, but if you are using Bayesian prior distributions then you simply use both the parameter as well as its reliability (i.e., the standard error). For example, if beta = 0.5 and se = 0.8, then you could use a normally distributed prior with mean 0.5 and standard deviation 0.8, which is defined in Ngene as b[(n,0.5,0.8)], where you use ;eff = (mnl,d,mean) and set ;bdraws = ....

If the standard error is very large (and hence the t-ratio very small), then this normal distribution becomes very wide. This means that taking draws at the extremes of the distribution becomes more likely, which can create issues in computing the D-error (it may become very large). This can be mitigated by using ;eff = (mnl,d,median).
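The effect of a wide prior on the mean versus the median can be sketched in Python. The d_error function below is a toy stand-in that merely mimics efficiency collapsing as the coefficient approaches zero; it is not Ngene's actual computation:

```python
import numpy as np

rng = np.random.default_rng(42)
draws = rng.normal(loc=0.5, scale=0.8, size=1000)  # prior b ~ N(0.5, 0.8)

def d_error(beta):
    # Toy stand-in: the error explodes as beta -> 0, mimicking
    # near-flat utility differences under an extreme draw.
    return 1.0 / (abs(beta) + 1e-3)

errors = np.array([d_error(b) for b in draws])
print("mean D-error:  ", errors.mean())      # pulled up by extreme draws
print("median D-error:", np.median(errors))  # robust to the tails
```

Because a wide prior puts many draws near zero, the mean is dragged up by a few huge error values while the median stays stable, which is why ;eff = (mnl,d,median) is more robust here.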

In your example you are using local priors with fixed values, not Bayesian priors with random distributions. You may want to capture the unreliability of the priors via Bayesian distributions.

Note that your pilot parameter estimates are still the best guesses you have, despite them not being statistically significant.

Michiel

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Thu Aug 18, 2022 8:42 pm

Thank you very much for replying.

Looking at the different ';bdraws' methods (Halton, Gauss, ...) described in chapter 7.1.5, are there any practical criteria for selecting the simulation method?

Re: Efficient design with SQ alt. having dummy variables

Postby Michiel Bliemer » Thu Aug 18, 2022 10:44 pm

I generally prefer gauss(3), but that only works up to about 8 Bayesian priors (the rest will need to be fixed), since the number of draws is then 3^8.
If you have a large number of parameters, I recommend sobol(1000) or sobol(2000). Also, if you want to use the median, gauss cannot be used, and sobol is then the best alternative.
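The rule of thumb above can be made concrete with a small sketch: Gaussian quadrature with 3 abscissae per Bayesian prior implies 3^k draws for k priors, whereas sobol(N) keeps a fixed N draws regardless of dimension:

```python
# Draw counts implied by gauss(3) versus sobol(N), per the rule of thumb above.
def gauss_draws(n_priors, abscissae=3):
    # Full factorial of quadrature abscissae across all Bayesian priors.
    return abscissae ** n_priors

for k in (3, 8, 9):
    print(f"{k} Bayesian priors: gauss(3) -> {gauss_draws(k)} draws")
print("sobol(1000) -> 1000 draws, independent of the number of priors")
```

With 9 Bayesian priors, gauss(3) would need 19,683 draws, which is why sobol becomes the practical choice.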

You can find more information here:
Bliemer, M.C.J., J.M. Rose, and S. Hess (2008) Approximation of Bayesian efficiency in experimental choice designs. Journal of Choice Modelling, Vol. 1, pp. 98-127.

Michiel

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Fri Aug 19, 2022 2:51 pm

Thank you! I have 9 Bayesian parameters, so I will go with sobol, I guess.

Re: Efficient design with SQ alt. having dummy variables

Postby AnzeJap » Wed Sep 07, 2022 2:40 pm

Dear Michiel,

if I may, I would like to ask you as well about choosing the most appropriate Bayesian efficient design from the list of designs that Ngene creates. This may be a trivial question, but do you simply go for the design with the lowest possible A-, C-, and S-error and a high B-error, or do you need to account for other aspects too? For example, how do you take attribute level balance into account? In my experience, at some point (even while the S-error keeps decreasing as the code runs) the balance starts to get worse for some attributes: some levels (1 out of the 6 levels of the payment attribute in a 36-row design) start to occur very seldom, only once in one alternative in the entire design, or several levels occur only twice, and so on. I guess having so much imbalance is not good, or is it? I was therefore thinking of choosing a design that is not the most efficient according to the D-, A-, B-, and S-error, but one that is better balanced as well. However, I do not know if there are any benchmarks, e.g. a minimum number of occurrences of each level. Chapter 7.1.10 says something about level balance, but only in general terms (and 7.6 covers evaluating efficiency, but only with minimising the D-error as the objective).

Thank you!
Anže

Re: Efficient design with SQ alt. having dummy variables

Postby Michiel Bliemer » Wed Sep 07, 2022 2:52 pm

Some degree of attribute level balance is desirable, yes (although not strictly necessary). While the swapping algorithm automatically aims for a high level of attribute level balance, the modified Federov algorithm does not. That is why it is a good idea to impose restrictions on the minimum and maximum number of times each attribute level may appear in the design; see the script below, where I for example added (4-8,4-8,4-8) to indicate that levels 7, 20, and 34 should each appear between 4 and 8 times within the design (6 would be perfect attribute level balance given that you have 18 rows). Note that such restrictions are typically only required for numerical attributes. For qualitative attributes, which require dummy or effects coding, attribute level balance will automatically be more or less satisfied to allow estimation of all dummy/effects-coded coefficients.

Code:
Design
;alts=alt1, alt2, sq
;rows=18
;block=2
;eff=(mnl,d)
;alg = mfederov
;require:
sq.atr3 = 0,
sq.atr5 = 0
;model:
U(alt1) = b1[-0.00385] * atr1[7,20,34](4-8,4-8,4-8)
+ b2[-0.01320] * atr2[0.4,5,15](4-8,4-8,4-8)
+ b3.effects[-0.08044|0.23305] * atr3[1,2,0]
+ b4[0.00538] * atr4[0.3,5,20](4-8,4-8,4-8)
+ b5.effects[0.33685|0.19576] * atr5[1,2,0]
+ b6[-0.000060358] * atr6[150,300,450,600,750,900](2-4,2-4,2-4,2-4,2-4,2-4)
/
U(alt2) = b1 * atr1
+ b2 * atr2
+ b3 * atr3
+ b4 * atr4
+ b5 * atr5
+ b6 * atr6
/
U(sq) = b0[-0.93745]
+ b1 * atr1_sq[7]
+ b2 * atr2_sq[0.4]
+ b3 * atr3
+ b4 * atr4_sq[0.3]
+ b5 * atr5
+ b6 * atr6_sq[0]
$


After imposing such constraints, you can simply look at the D-error and pick the best design. You typically do not need to look at probability balance (the B-estimate); you can usually ignore that one.
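Checking whether a generated design satisfies such level-count constraints is straightforward; the design column below is a made-up 18-row example, not output from the script above:

```python
# Sketch: count how often each level of a numerical attribute appears in a
# design column, and check it against a (2-4,...) style constraint.
from collections import Counter

# Assumed 18-row column for atr6; purely illustrative data.
atr6_column = [150, 300, 300, 450, 600, 600, 750, 900, 900,
               150, 300, 450, 450, 600, 750, 750, 900, 150]

counts = Counter(atr6_column)
for level in (150, 300, 450, 600, 750, 900):
    n = counts[level]
    ok = 2 <= n <= 4  # mirrors the (2-4,2-4,...) restriction in the script
    print(f"level {level}: appears {n}x, within 2-4: {ok}")
```

With 18 rows and 6 levels, 3 occurrences per level is perfect balance, and the (2-4) bounds allow the algorithm some slack around it.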

Michiel
