From WTP estimates to prior parameters


From WTP estimates to prior parameters

Postby jonashl » Wed May 14, 2014 7:18 pm

I have an experiment where I have some knowledge from previous research on the magnitude of WTP for (most of) the features in my DCE.
So my question is: how do I get from WTP estimates to the parameter values used to specify the priors?

My idea is to do the following:
1. Get min-max estimates for WTP based on previous research
2. Make an educated guess on a min-max parameter value for the price parameter, e.g. -0.01 to -0.0006
3. Use the price parameter to transform the WTP estimates to parameter intervals to construct uniform priors for the other attributes.
I am not sure whether this should be done by a) multiplying the mean price parameter prior by the WTP estimate limits, or b) allowing for wider intervals by multiplying the low price prior limit by the low WTP limit and, correspondingly, the high limits by each other.

What do you think of this approach? Is there a better strategy? And would you go for maximum uncertainty in the parameter priors based on price prior × WTP (i.e. the widest uniform limits under step 3), or multiply by the mean expected price parameter?
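As a rough sketch of the two options, the calculation could look like this (all numbers here are made up for illustration; they are not from any real study):

```python
# Hypothetical sketch of converting WTP bounds to prior limits for one attribute.
# All values are illustrative assumptions.

beta_price_low, beta_price_high = -0.01, -0.0006   # guessed price-parameter range
beta_price_mean = (beta_price_low + beta_price_high) / 2

wtp_low, wtp_high = 5.0, 15.0                      # WTP bounds from prior research

# Option (a): multiply the mean price parameter by the WTP limits.
prior_a = sorted([beta_price_mean * wtp_low, beta_price_mean * wtp_high])

# Option (b): combine the extremes of both ranges for wider uniform limits.
products = [bp * w for bp in (beta_price_low, beta_price_high)
            for w in (wtp_low, wtp_high)]
prior_b = [min(products), max(products)]

print(prior_a)  # roughly [-0.0795, -0.0265], up to floating-point rounding
print(prior_b)  # roughly [-0.15, -0.003]
```

Option (b) necessarily produces the wider interval, since it takes the extremes over all four combinations.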

Looking forward to your response!
jonashl
 
Posts: 9
Joined: Fri Mar 22, 2013 8:37 pm

Re: From WTP estimates to prior parameters

Postby johnr » Thu May 15, 2014 6:35 pm

Hi

Whilst approach b would allow for more uncertainty in your priors, it may cause issues. The wider the priors you use, the harder it is to optimise the design. If the priors are too wide, your choice probabilities will be all over the place across draws, and as the Bayesian D-error is the average D-error over draws, and averages get pulled towards outliers, the Bayesian D-error is likely to be very large. So too the S-errors. Nevertheless, the range of the priors should reflect your uncertainty, so if you are completely uncertain about what priors to use, you should use quite wide priors. Experience suggests, however, trying to avoid overly wide priors as much as possible.
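A minimal illustration of this point, assuming a simple two-alternative logit with a single price attribute and made-up prior ranges (this is not the full Bayesian D-error calculation, just the choice-probability step):

```python
import math
import random

random.seed(1)

def choice_prob(beta, x1, x2):
    """MNL probability of alternative 1 when price is the only attribute."""
    u1, u2 = beta * x1, beta * x2
    return math.exp(u1) / (math.exp(u1) + math.exp(u2))

def prob_spread(beta_low, beta_high, draws=1000):
    """Range of P(alt 1) across uniform draws of the price parameter."""
    probs = [choice_prob(random.uniform(beta_low, beta_high), 20, 100)
             for _ in range(draws)]
    return max(probs) - min(probs)

# Narrow vs wide uniform priors on the price parameter:
print(prob_spread(-0.002, -0.001))   # small spread in probabilities
print(prob_spread(-0.05, -0.001))    # much larger spread
```

With the narrow prior the probabilities barely move across draws, while the wide prior swings them from near 0.5 towards near 1, which is what then destabilises the AVC matrix and the D-error across draws.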

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

Re: From WTP estimates to prior parameters

Postby jonashl » Thu May 15, 2014 8:43 pm

Thank you for the response :)

What do you think of the overall method of going from WTP estimates to prior parameters?
I am particularly concerned with how best to go about setting the price parameter (which will then determine the level of the rest of the parameters). I know that the price parameter will most likely be a small (<0.01 in magnitude) negative number, but I am concerned about potential bias in the estimates if I set the level wrong - e.g. setting the price prior to -0.006 when it is in fact -0.0006.
jonashl
 
Posts: 9
Joined: Fri Mar 22, 2013 8:37 pm

Re: From WTP estimates to prior parameters

Postby jonashl » Thu May 15, 2014 9:30 pm

And one more follow-up:
Can you quantify when priors become problematically wide? Would, for example, the priors in the syntax below be considered "wide"?
U(alt1)=b1[(u,-0.06,-0.03)]*A[-1,0] + b2[(u,0.04,-0.09)]*B[0,1] + b3.dummy[(u,0.09,0.18)|(u,0.06,0.12)]*C[10,20,40] +b4[(u,-0.011,-0.0006)]*price[20,50,100]
jonashl
 
Posts: 9
Joined: Fri Mar 22, 2013 8:37 pm

Re: From WTP estimates to prior parameters

Postby johnr » Fri May 16, 2014 11:45 am

Hi

As you note, the issue is that you need to specify the model in preference space, not WTP space, so you need to convert the WTP estimates into their constituent components, beta_k and beta_c. Given that you have no knowledge of the price parameter, beta_c, your proposal sounds like the most sensible approach to the issue.

In reference to your query, the theory says that (asymptotically) the design should not bias the estimates - it affects the standard errors. Whilst some argue that the design will influence people's decisions, if that were the case then I would suggest we all pack up and go home - it would mean that every published paper using SP is design specific and nothing can be generalised from any published result. Of course, if you do certain things with the design you might induce certain effects - some designs, for example, are known to induce lexicographic behaviour - however these are extreme (but unfortunately common) mistakes. The real issue is that if you get the priors wrong, you get the wrong standard errors. This is why we talk about theoretical minimum sample sizes: the calculation is based on the assumption that you got the priors correct. So the worst-case scenario, according to the theory (and my personal experience, for what it is worth), is that you will need a larger sample size than you thought you might have needed.

With regards to your second question, it is unfortunately a matter of how long is a piece of string. Think of it this way: given that utility = beta*X, if you fix X (say for one design iteration) but have a wide spread of betas in the distribution you are drawing from, then U will also vary greatly. So if you have two utility functions and the betas are jumping all over the place, their utilities will also be all over the place. The problem is that the AVC matrix is a function of the probabilities, so it will fluctuate greatly over the draws of betas. The D-error, based on the AVC matrix, will also vary greatly and will exhibit a number of possible outliers. The Bayesian D-error, being the average of the D-errors over the set of draws, will be pulled towards the outlier values (as averages are). With a narrower set of possible betas, the utilities won't vary as much and, following through, you are less likely to have problems with the optimisation.

How large is too large a range? Difficult to say, but I would start by looking at your syntax and calculating the marginal contribution to utility for each of your attributes (i.e., beta*X) across the range of values assumed at the extreme levels. For example, consider your price attribute.

Beta_lower = -0.0006
Beta_upper = -0.011
X_lower = 20
X_upper = 100

So if the program were to take the beta draw smallest in magnitude (-0.0006) and apply it to two alternatives, one with the lowest price level and one with the highest, the marginal utility contributions would be -0.012 and -0.06. Now consider the program drew the largest-magnitude beta value (-0.011): the marginal contributions to utility would then be -0.22 and -1.1. So in effect, you are allowing price to contribute to overall utility anywhere between -0.012 and -1.1. To see why this might be a problem, do the same exercise for the other attributes. You will see that the other attributes contribute nearly nothing to utility, which may be the case, but it means that in order to identify these effects relative to the price effect, the design (any design) will struggle - and this will come through in the program's ability to locate a good design.
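The four extreme contributions above can be checked directly (a small sketch using the limits from the syntax quoted earlier):

```python
# Extreme marginal utility contributions (beta * X) for the price attribute,
# using the prior limits and price levels from the syntax above.
beta_small, beta_large = -0.0006, -0.011   # prior limits on the price parameter
x_lower, x_upper = 20, 100                 # lowest and highest price levels

contributions = [round(b * x, 4) for b in (beta_small, beta_large)
                 for x in (x_lower, x_upper)]
print(contributions)  # [-0.012, -0.06, -0.22, -1.1]
```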

Hopefully, you can see why I say it is a how-long-is-a-piece-of-string type of problem. It depends not just on one parameter and its X, but on the other parameters and Xs as well. And every problem will have different parameters and different Xs, so there is no single answer to your question other than: it depends on the problem! But you can do the type of pre-design thinking I suggest above (and have suggested in other posts) to minimise these types of issues arising.
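This pre-design check can be run over every attribute at once. A sketch, with the prior limits and levels copied from the utility function quoted earlier (the dummy-coded attribute C enters as 0/1, and the second value of each pair is taken as stated in that syntax):

```python
# Marginal utility ranges (beta * X at the extremes) for each attribute in the
# quoted utility function; prior limits and levels copied from that syntax.
attributes = {
    "A":        ((-0.06, -0.03),     (-1, 0)),
    "B":        ((0.04, -0.09),      (0, 1)),
    "C.dummy1": ((0.09, 0.18),       (0, 1)),   # dummy coding: enters as 0/1
    "C.dummy2": ((0.06, 0.12),       (0, 1)),
    "price":    ((-0.011, -0.0006),  (20, 100)),
}

for name, (betas, levels) in attributes.items():
    contribs = [b * x for b in betas for x in levels]
    print(f"{name:8s} contribution in [{min(contribs):.3f}, {max(contribs):.3f}]")
```

Running this shows the price attribute spanning roughly -1.1 to -0.012 while every other attribute stays within about ±0.2, which is exactly the imbalance described above.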

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

