Dummy coding
Posted:
Fri Nov 07, 2014 4:49 am
by mdann
Hey,
my question relates to Bayesian dummy coding. Can I use the normal distribution with n, or is it better to use u? Would it be possible to give me an example of a Bayesian dummy syntax?
Let's say the mean is 0.5, the standard error of the mean is 0.2, the SD is 0.3 and the standard error of the SD is 0.15.
Would it be
b1[(n,0.5,0.3)] or better b1[(u,(u,0.2,0.8)]? Or something else?
Thank you in advance!!
Micha
Re: Dummy coding
Posted:
Fri Nov 07, 2014 6:35 am
by Michiel Bliemer
If I understand correctly, you want to use Bayesian priors for randomly distributed parameters in a dummy coded attribute?
b1.dummy[n,(n,0.5,0.2),(u,0.15,0.45)|...] * A[...
So this means that the random parameter is normally distributed, with a normal Bayesian prior on its mean and a uniform Bayesian prior on its standard deviation. I am not sure if this is exactly what you want, as you do not provide the details of what you want to estimate. Please note that using random parameters (mixed logit) in conjunction with Bayesian priors will lead to many draws and long computation times.
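As a minimal sketch, a complete three-level dummy coded attribute could then look like the line below, assuming the last listed level acts as the base level; the attribute name A, its levels and all prior values here are purely illustrative:
b1.dummy[n,(n,0.5,0.2),(u,0.15,0.45)|n,(n,0.3,0.1),(u,0.1,0.3)] * A[0,1,2]
Each non-base level gets its own random parameter, here each with a normal Bayesian prior on the mean and a uniform Bayesian prior on the standard deviation.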
Michiel
Re: Dummy coding
Posted:
Fri Nov 07, 2014 7:34 am
by mdann
Thank you Michiel,
My question is whether it is really correct to use a normal distribution for dummy coded variables, or whether I have to choose something like:
b1.dummy[u,(u,0.5,0.2),(u,0.15,0.45)|...] * A[...
Regards
Micha
Re: Dummy coding
Posted:
Fri Nov 07, 2014 4:55 pm
by Michiel Bliemer
You can choose any distribution. However, a Bayesian prior for a standard deviation must be guaranteed to be positive, hence for those specific cases a Bayesian uniform distribution with positive bounds is suitable.
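For example, the syntax from earlier in this thread already satisfies this: the (u,0.15,0.45) part is the Bayesian prior on the standard deviation, and both of its bounds are strictly positive (the values themselves are illustrative):
b1.dummy[n,(n,0.5,0.2),(u,0.15,0.45)|...] * A[...
The distribution of the random parameter itself (the first n) and the prior on the mean can be chosen freely; it is only the prior on the standard deviation that should stay positive.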
Re: Dummy coding
Posted:
Fri Nov 07, 2014 9:37 pm
by mdann
So would this be a suitable Bayesian syntax?
Design
;alts = alt1,alt2,alt3
;rows = 12
;eff = (rp,d,median)
;rep = 50
;rdraws = halton(50)
;model:
U(alt1) = b1[1.04] + pa[-0.07] * A[10,20,30,40] + rf.dummy[-0.28] * B[0,1] + al.dummy[n,(n,-4.66,1.23),(u,3.1,5.86)] * C[0,1] + lf.dummy[-2.69|n,(n,-4.97,1.71),(u,3.06,6.62)|n,(n,-3.48,1.1),(u,1.98,4.16)] * D[1,2,3,4] + ob.dummy[n,(n,-0.48,0.54),(u,0.87,1.79)] * E[0,1] + bz[n,(n,0.2,0.05),(u,0.06,0.11)] * F[5,10,15,20] /
U(alt2) = b1[1.04] + pa * A + rf.dummy * B + al.dummy * C + lf.dummy * D + ob.dummy * E + bz * F $
Re: Dummy coding
Posted:
Sat Nov 08, 2014 10:25 pm
by mdann
Or is this a correct Bayesian syntax?
Design
;alts = alt1,alt2,alt3
;rows = 12
;eff = (rp,d,median)
;rep = 50
;rdraws = halton(100)
;model:
U(alt1) = b1[1.04] + pa[-0.07] * A[10,20,30,40] + rf.dummy[-0.28] * B[0,1] + al.dummy[(u,-9,0.33)] * C[0,1] + lf.dummy[-2.69|(u,-9.82,-0.08)|(u,-6.55,-0.4)] * D[1,2,3,4] + ob.dummy[(u,-1.81,0.85)] * E[0,1] + bz[n,(n,0.2,0.05),(u,0.06,0.11)] * F[5,10,15,20] /
U(alt2) = b1 + pa * A + rf * B + al * C + lf * D + ob * E + bz * F $
Re: Dummy coding
Posted:
Mon Nov 10, 2014 8:08 am
by Michiel Bliemer
I am not sure which model you would like to estimate. In the first syntax lf is a random dummy coded parameter to estimate, while in the second syntax it is a fixed dummy coded parameter, and so on. So they describe different models. Both use Bayesian priors correctly as far as I can see.
Please note that 100 Halton draws is too few for estimating 5 random coefficients in the first model (think more in the order of 200-500), and that 50 repetitions is too few for getting stable results (think more in the order of 500-1000); but since you are using an RP model, ;rep will be ignored anyway. Also note that you will need to set the number of bdraws: in the first syntax you have 10 Bayesian priors, so you will need a lot of Bayesian draws, many more than 1000.
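As a rough sketch of how the relevant design properties could then look (the draw types and counts below are only placeholders, not a recommendation for your specific design):
;rdraws = halton(300)
;bdraws = halton(2000)
Here ;rdraws controls the draws for the random parameters and ;bdraws controls the draws over the Bayesian priors.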
Re: Dummy coding
Posted:
Mon Nov 10, 2014 4:39 pm
by mdann
Thank you.
My problem is: what is the difference between mnl, rp and rppanel? And which one is best to use?
Re: Dummy coding
Posted:
Mon Nov 10, 2014 4:44 pm
by Michiel Bliemer
MNL is a multinomial logit model, RP is a cross-sectional mixed multinomial logit model, and RPPANEL is a panel mixed multinomial logit model. I will have to refer you to a standard textbook on discrete choice methods. A great source is the book by Kenneth Train, which you can download from:
http://eml.berkeley.edu/books/choice2.html
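In this design syntax, the model type is the first element of the ;eff property; as an illustrative sketch, keeping the median Bayesian D-error used in your syntax:
;eff = (mnl,d,median)       MNL design
;eff = (rp,d,median)        cross-sectional mixed logit design (needs ;rdraws)
;eff = (rppanel,d,median)   panel mixed logit design (needs ;rdraws)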
Re: Dummy coding
Posted:
Mon Nov 10, 2014 4:49 pm
by mdann
Thank you!!!!