optimal probability balance

optimal probability balance

Postby lemon » Sat Jul 05, 2014 12:38 am

Dear Ngene developers,

I am designing a choice experiment for a developing country to determine WTP for water and wastewater service attributes, and I was hoping you could comment on my problem with striking an optimal probability balance. The code below generates a Dz-efficient design, but the resulting designs never strike an optimal probability balance of 70%-30% or 80%-20%; it is always 50%-50%. Could you comment on how exactly probability balance is calculated and on what might be wrong with my code? I am also unsure which price levels to choose (price is currently expressed as a % of a baseline bill, with numbers chosen to make intuitive sense). Is there any particular guidance, and does this choice affect probability balance?

?Attributes:
? A-Quality of water: 3,2,1,0 (clean and potable, clean and potable but with some shortcomings, somewhat dirty, very dirty)
? B-Water pressure: 1,0 (strong, adequate)
? C-Frequency of intermittence: 3,2,1,0 (once in 10 years, few times yearly, once a month, several times weekly)
? D-Duration of intermittence: 12,5,2,1 (12 hours, 5 hours, 2 hours, 0 hours)
? E-Bill: only non-negative, in % of a baseline bill

? Description:
? no SQ, since people cannot opt out
? no obvious deterioration, since connection attributes are fixed and water quality and water pressure are fixed to a baseline
? a zero bill increase applies only to the baseline, i.e. when Water Quality (0) and Water Pressure (0)

Design

;alts=alt1*, alt2*
;rows=24
;block=4
;eff=(mnl,d)

;cond:
? assign a bill increase of 0% to cases where both water quality and water pressure take value 0
if (alt1.a=0 and alt1.b=0, alt1.e=0),
if (alt2.a=0 and alt2.b=0, alt2.e=0),
if (alt1.a<>0 or alt1.b<>0, alt1.e=[0.5,0.7,1.5,2]),
if (alt2.a<>0 or alt2.b<>0, alt2.e=[0.5,0.7,1.5,2])

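? dummy coding: priors map to the levels in the order listed, and the last
? level listed is the base (utility 0); the priors below are small and
? sign-only, to be replaced with estimates from the pilot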
;model:
U(alt1)=b1.dummy[-0.003|-0.002|-0.001]*A[0,1,2,3]+b2.dummy[-0.001]*B[0,1]+b3.dummy[-0.003|-0.002|-0.001]*C[0,1,2,3]+b4.dummy[-0.003|-0.002|-0.001]*D[12,5,2,1]+b5[-0.001]*E[0,0.5,0.7,1.5,2]/
U(alt2)=b1*A+b2*B+b3*C+b4*D+b5*E $


Many thanks in advance; I really appreciate it.

Regards,
Adiya

Re: optimal probability balance

Postby johnr » Mon Jul 07, 2014 10:22 am

Dear Adiya

I presume that when you say 0.50/0.50 you are looking at the choice probabilities themselves, not at the b-error. To understand why this is the case, try the following thought experiment. Multiply a dummy variable (1 or 0) by one of your priors, say for the first attribute b1.dummy[-0.003|-0.002|-0.001]*A[0,1,2,3]. What is the contribution to utility if the first dummy level is selected in the design? It should be -0.003 * 1 = -0.003. What about the other levels, including the base level? Perform a similar thought experiment for the other attributes. You will see that the contribution of any one attribute to overall utility is very small. So what happens when you add all the X * beta terms together to get the overall utility for that alternative? It will be very small (near zero). Because the second alternative uses the same small priors, it too will have a very small utility.

Now think about how the probabilities are calculated, and what happens if both utilities are rather small (not far from zero). The probabilities MUST be close to 0.5/0.5 because of the priors you have selected: they add very little to observed utility. Framed in terms of scale, the overall scale is very low, so the unobserved effects dominate the design and the choices are mostly random (0.5/0.5). Consider the following priors and run the design:

U(alt1)=b1.dummy[-0.3|-0.2|-0.1]*A[0,1,2,3]+b2.dummy[-0.1]*B[0,1]+b3.dummy[-0.3|-0.2|-0.1]*C[0,1,2,3]+b4.dummy[-0.3|-0.2|-0.1]*D[12,5,2,1]+b5[-0.1]*E[0,0.5,0.7,1.5,2]/

You will see greater dispersion in the probabilities (still not that wide, but better).
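To make this concrete, here is a minimal numeric sketch (plain Python, not Ngene output; the choice task and the mnl_probs helper are hypothetical). Suppose alt1 takes the most negative level of every attribute (A=0, B=0, C=0, D=12, E=2) and alt2 takes every base level, so V2 = 0:

import math

def mnl_probs(v1, v2):
    """MNL choice probabilities for two alternatives."""
    e1, e2 = math.exp(v1), math.exp(v2)
    return e1 / (e1 + e2), e2 / (e1 + e2)

# Original priors: V1 = -0.003 - 0.001 - 0.003 - 0.003 + (-0.001 * 2)
v1 = -0.012
print(mnl_probs(v1, 0.0))        # ~ (0.497, 0.503): essentially 50/50
# Priors scaled up by 100, as in the utility function above
print(mnl_probs(v1 * 100, 0.0))  # ~ (0.231, 0.769): visible dispersion

With the original priors the utilities sit so close to zero that the logit probabilities cannot move away from 0.5/0.5, whatever levels the design assigns.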

In answer to your second question ("I am also not sure what price levels to choose ... does this choice affect probability balance?"), consider what it is you are trying to achieve when generating an efficient design: you are trying to minimise the elements of the AVC matrix (or whatever the objective is, depending on the efficiency measure used). This means you want to use the levels for the Xs that you plan on using in the study, because you want to work with the AVC matrix related to what your data will look like (you will estimate models on your data, not on the design). A simple rule of thumb is to optimise for what you plan on doing in the field. If you spot problems, adapt the design then, but in the first instance let the problem rule the design, not the design rule the problem. Hence, you need to decide how you want to show the levels to respondents (perhaps you could pilot different versions and see what respondents think first), and then optimise the design for that problem.
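For reference, the quantity being minimised for an MNL design is the D-error; with K the number of parameters and \Omega_1(X, \beta) the AVC matrix of design X for a single respondent, it is commonly written as

D\text{-error} = \det\big( \Omega_1(X, \beta) \big)^{1/K}

so a small AVC matrix (in the determinant sense) for the levels you will actually show respondents is exactly what the design search is after.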

Hope this helps.

John

Re: optimal probability balance

Postby lemon » Thu Jul 10, 2014 8:35 pm

Dear John,

Many thanks for your clarification; I really appreciate it. I just wanted to follow up on your explanation.

I understand that the probabilities must be close to 50%-50% because of the small priors I assumed in the design. These small priors are there to indicate a sign rather than a value; we do not have any prior values to insert and were planning to obtain them from a pilot. Do you think it is still fine to implement this design, with 50%-50% probabilities (the b-error is almost 100%), in the pilot? How bad is it for efficiency to have the unobserved effects dominating the design?

Once we obtain priors from the pilot, we should be able to strike an 80%-20% or 70%-30% probability balance, shouldn't we?

In terms of pricing, does it make any statistical difference whether I include a price level as, for example, 0.1 (i.e. a 10% increase over the baseline price) or as (100% + 10%) * baseline price, given that only relative utilities matter?

Many thanks in advance.

Regards,
Adiya

Re: optimal probability balance

Postby Michiel Bliemer » Fri Jul 11, 2014 3:28 pm

When choosing priors close to zero (as you have), choice probabilities become meaningless, so efficiency is optimised only on the attribute levels, not on the probabilities. Such designs are called utility-neutral designs in the literature. This is not a problem at all for a pilot study, since the combinations of attribute levels are still optimised.

Once you have obtained priors and optimise the D-error, your choice probabilities will automatically be close to 80%-20% in the case of two alternatives, 70%-30% in the case of three alternatives, and 60%-40% in the case of four alternatives.

If the price appears in each alternative and has the same coefficient, then indeed both will give the same result, as only relative utilities matter in a multinomial logit model.
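A minimal sketch of that point (plain Python; the mnl_probs helper and the utility values are hypothetical): re-expressing the price the same way in every alternative shifts every utility by the same constant, and that constant cancels out of the logit probabilities.

import math

def mnl_probs(v1, v2):
    """MNL choice probabilities for two alternatives."""
    e1, e2 = math.exp(v1), math.exp(v2)
    return e1 / (e1 + e2), e2 / (e1 + e2)

v1, v2, c = -0.4, -0.9, 5.0       # c: common shift from re-expressing the price
print(mnl_probs(v1, v2))          # ~ (0.622, 0.378)
print(mnl_probs(v1 + c, v2 + c))  # identical: the common constant cancels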

