Impact of Priors on WTP Estimates

Impact of Priors on WTP Estimates

Postby rafael_lionello » Tue Nov 12, 2024 10:42 am

Dear Ngene moderators,

I’m using Ngene to develop an efficient design aimed at estimating the brand equity of various consumer goods brands, employing a mixed logit model in WTP space. I have two questions regarding the potential impact of brand-specific priors on WTP estimates.

First, I’d like to understand if the mean WTP estimates are sensitive to these priors. For example, if a brand has a lower prior (e.g., a lower initial WTP level), it might appear with lower prices in the choice sets. Would this increase its selection frequency, potentially leading to an overestimation of its WTP? Or is this interpretation flawed?

Second, I’m curious if the standard deviation of the WTP distribution might also be influenced by design priors. For example, Heineken—a strong brand in the beer category—exhibits a high WTP and a high standard deviation. If the efficient design frequently presents Heineken at higher price levels (relative to other brands due to its high prior), could this lead more price-sensitive respondents to ascribe lower preference to it, while those less sensitive remain willing to pay the higher prices? Does this reasoning make sense?

Finally, if these issues are possible, might an orthogonal design—lacking dependency on priors—help mitigate them?

Thank you in advance for any insights or experiences you can share.
My code is below; Heineken is brand level 5.

Code:
Design
;alts = Alt1*, Alt2*, Alt3*, Optout
;rows = 80
;block = 10
;eff = (mnl,d,mean)
;bdraws = mlhs(500)
;alg = mfederov(candidates = 5000, stop=total(20000 iterations))
;require:
Alt1.brand <> Alt2.brand,
Alt1.brand <> Alt3.brand,
Alt2.brand <> Alt3.brand
;model:
U(Alt1)=  b_brand.dummy[(u,0.3,0.6)|(u,0.0,0.3)|(u,0.3,0.6)|(u,0.1,0.4)|(u,0.3,0.6)|(u,0.1,0.4)|(u,0.1,0.4)] * brand[1, 2, 3, 4, 5, 6, 7, 8] +
          b_price.dummy[(u,-0.2,-0.1)|(u,-0.3,-0.2)|(u,-0.4,-0.3)|(u,-0.5,-0.4)|(u,-0.6,-0.5)|(u,-0.7,-0.6)|(u,-0.8,-0.7)|(u,-0.9,-0.8)] * price[3.9,4.3,4.8,5.3,5.9,6.5,7.2,8.0,3.5] +
          b_taste.dummy[(u,0,0.3)] * taste[2,1] /

U(Alt2)=  b_brand.dummy*brand + b_price.dummy*price + b_taste.dummy*taste /
U(Alt3)=  b_brand.dummy*brand + b_price.dummy*price + b_taste.dummy*taste $

Re: Impact of Priors on WTP Estimates

Postby Michiel Bliemer » Tue Nov 12, 2024 5:22 pm

The literature suggests that you should not get biased parameter estimates. In theory, with a large enough sample, you should be able to recover the same parameter estimates with any experimental design: random, orthogonal, efficient with uninformative priors, or efficient with informative priors. The main impact of the experimental design is on the standard errors of the parameter estimates, where efficient designs increase precision (i.e., reduce standard errors).

So while the priors influence the experimental design, they should not bias the WTP estimates, and I am also not aware of any impact on the standard deviation of the WTP distribution. Note that it is differences in price levels that matter in discrete choice models, not the absolute price levels that are presented.
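For intuition, here is a minimal Python sketch (not Ngene syntax; the coefficient and price values are made up purely for illustration) of the point that only price differences matter: adding the same constant to every alternative's price leaves the logit choice probabilities unchanged.

Code:
import numpy as np

def logit_probs(prices, b_price=-0.5, asc=np.array([0.2, 0.1, 0.0])):
    # Systematic utilities for a 3-alternative MNL with a linear price term.
    v = asc + b_price * prices
    ev = np.exp(v - v.max())   # subtract the max for numerical stability
    return ev / ev.sum()

prices = np.array([3.9, 5.3, 8.0])
print(logit_probs(prices))         # probabilities at the base prices
print(logit_probs(prices + 2.0))   # shift all prices by +2: same probabilities

The common shift adds the same amount (b_price times the shift) to every utility, which cancels out of the choice probabilities.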

If you are still worried about the effect of priors on the WTP, you could always use uninformative (near-)zero priors. This offers more flexibility than an orthogonal design, because an orthogonal design with your specified dimensions may not exist and would also not be able to avoid dominant alternatives.
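To make the role of the priors concrete, below is a small numpy sketch of the D-error that Ngene minimises for an MNL design. The two-task design and the prior values are toy numbers, not the design in this thread. The priors only change how a candidate design is scored (and hence which design gets selected); they do not enter the estimator applied to the data you later collect.

Code:
import numpy as np

def d_error(design, beta):
    # design: list of (J x K) attribute matrices, one per choice task.
    K = len(beta)
    info = np.zeros((K, K))
    for X in design:
        p = np.exp(X @ beta)
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X   # Fisher information
    avc = np.linalg.inv(info)            # asymptotic variance-covariance matrix
    return np.linalg.det(avc) ** (1.0 / K)

# Two tasks, two alternatives; attributes: a brand dummy and price.
design = [np.array([[1.0, 3.9], [0.0, 8.0]]),
          np.array([[0.0, 4.8], [1.0, 6.5]])]

print(d_error(design, beta=np.array([0.45, -0.55])))  # informative priors
print(d_error(design, beta=np.array([0.0, 0.0])))     # (near-)zero priors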

A few further comments:
* You need to specify a constant, e.g. U(Optout) = b0[..] or add the same constant to Alt1, Alt2 and Alt3. This constant also needs a prior.
* Using 5000 candidates is fine but quite large; it takes a long time to cycle through the entire candidate set. I usually use 1000 or 2000 candidates at most so that each row can be swapped more quickly across the 80 rows. The default in Ngene is 2000.

Michiel

Re: Impact of Priors on WTP Estimates

Postby rafael_lionello » Thu Nov 21, 2024 8:29 am

Dear Michiel,

Thank you very much for the clarifications and suggestions. I’ve updated the design based on your feedback, including the constant and adjustments to the candidate set size. The updated code is as follows:

Code:
Design 
;alts = Alt1*, Alt2*, Alt3*, Optout 
;rows = 80 
;block = 10 
;eff = (mnl,d) 
;alg = mfederov(candidates = 2000, stop=total(20000 iterations)) 
;require: 
Alt1.brand <> Alt2.brand, 
Alt1.brand <> Alt3.brand, 
Alt2.brand <> Alt3.brand 
;model: 
U(Alt1)  =  b_brand.dummy[0|0|0|0|0|0|0] * brand[1, 2, 3, 4, 5, 6, 7, 8] + 
            b_price.dummy[0|0|0|0|0|0|0|0] * price[3.9,4.3,4.8,5.3,5.9,6.5,7.2,8.0,3.5] + 
            b_taste.dummy[0|0] * taste[2,3,1] / 

U(Alt2)  =  b_brand.dummy*brand + b_price.dummy*price + b_taste.dummy*taste /
U(Alt3)  =  b_brand.dummy*brand + b_price.dummy*price + b_taste.dummy*taste /
U(Optout)=  b0[0] 
$


Does this look correct now?

Additionally, compared to using informative priors, could uninformative priors potentially inflate the mean WTP, given that the trade-offs between attributes presented to respondents would be less pronounced?

Best regards,
Rafael Lionello

Re: Impact of Priors on WTP Estimates

Postby Michiel Bliemer » Thu Nov 21, 2024 4:51 pm

Looks good, but I would use 200,000 iterations instead of 20,000.

With uninformative priors you will get comparisons across all pairs of attribute levels, so I do not see how this would impact the WTP. The trade-offs become less informative, so the standard errors in model estimation may be larger and hence the precision of the WTP estimates may be reduced, but the values themselves should not be affected (they remain unbiased).
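To illustrate the unbiased-but-less-precise point, here is a rough Monte Carlo sketch in Python. It uses a simple binary MNL with a constant and a linear price term rather than the mixed logit in WTP space from this thread, and all designs, sample sizes and parameter values are invented for illustration. Both toy designs recover the true price coefficient on average; the design with larger price differences yields a smaller spread across replications.

Code:
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
TRUE_ASC, TRUE_B = 0.5, -0.7   # true constant and price coefficient

def simulate_and_estimate(price_pairs, n_resp=500):
    # Stack the design once per simulated respondent.
    x = np.tile(price_pairs, (n_resp, 1))
    v1 = TRUE_ASC + TRUE_B * x[:, 0]   # utility of alternative 1
    v2 = TRUE_B * x[:, 1]              # utility of alternative 2
    p1 = 1.0 / (1.0 + np.exp(v2 - v1))
    y = (rng.random(len(p1)) < p1).astype(float)   # 1 = chose alternative 1

    def negll(theta):
        asc, b = theta
        u = (asc + b * x[:, 0]) - b * x[:, 1]      # utility difference
        p = np.clip(1.0 / (1.0 + np.exp(-u)), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    return minimize(negll, x0=[0.0, 0.0], method="BFGS").x

# Design A: large price differences between the two alternatives.
design_a = np.array([[3.9, 8.0], [4.3, 7.2], [8.0, 3.9], [7.2, 4.3]])
# Design B: small price differences (less trade-off information).
design_b = np.array([[5.3, 5.9], [5.9, 5.3], [4.8, 5.3], [5.3, 4.8]])

for name, design in (("A", design_a), ("B", design_b)):
    est = np.array([simulate_and_estimate(design) for _ in range(200)])
    print(f"design {name}: mean b_price = {est[:, 1].mean():.3f} "
          f"(truth {TRUE_B}), sd across replications = {est[:, 1].std():.3f}")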

Michiel

Re: Impact of Priors on WTP Estimates

Postby rafael_lionello » Sat Nov 23, 2024 12:54 am

Perfect. Since I have a reasonable sample size (n = 1000-2000), I will assume that the larger standard errors are not a significant issue.

Thank you very much again for your guidance!

Rafael

