Efficient design and ASC

Efficient design and ASC

Postby jveld » Mon Jul 07, 2014 5:04 pm

Hi,

Currently I’m working on a DCE project where we have used a D-efficient (D-optimal) design to develop our choice sets. I now have (part of) the data, and although we used an unlabeled design, I find a significant difference between the alternative-specific constants. This is strange: why would anyone prefer option A over option B if A and B in themselves do not mean anything?

Then I thought of a left-to-right bias; however, when I look at the number of times people chose option A over option B, I see a roughly normal distribution, ranging from never to always preferring A over B.

When I counted the number of people choosing A and B per choice task, most choice tasks show roughly a 55-45% split (in either direction) between A and B. However, in three choice tasks the split was 70-30: in two of these respondents preferred A over B, and in one case B over A. We had 9 choice tasks to begin with, and thus an unequal number of tasks with a somewhat more dominant preference. When I removed one of these more one-sided choice tasks (one where respondents preferred A over B) from my dataset, I no longer found a significant difference between the ASCs for A and B.

Is the above a consequence of working with an efficient design (with its minimal but unbalanced set of choice tasks)? Can I assume there is no serious bias going on? And can I just use a generic constant for the two choice options in each choice task?

I hope you can help me with this!

kind regards,
Jorien

Re: Efficient design and ASC

Postby johnr » Tue Jul 08, 2014 11:32 pm

Hi Jorien

Thanks for your question. I don't think you need to worry about significant biases from the design. I always estimate ASCs for unlabelled experiments and in the vast majority of cases have found them to be statistically significant, independent of the design used (and I use a combination of orthogonal and efficient designs in my day-to-day work).

Constants in SP experiments are by and large meaningless, particularly when unlabelled experiments are used. They reflect the average unobserved effect related to a set of hypothetical alternatives, where respondents observe multiple tasks when in reality they typically see only one in real markets. They are really only an artefact of the experiment, and although they may in part reflect preference for a labelled alternative, they include a whole bunch of other unknowns. Hence, you would typically calibrate them or, in the case of an unlabelled experiment, ignore them completely. Note that you still need to estimate them, as they are accounting for something.

Given the above, it is not clear to me what you mean by using a generic constant across the two alternatives. Personally, I would use the ASCs in estimation to account for differences in average error, but then ignore them in any post-estimation application of the results.
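A minimal sketch of what estimating such an ASC looks like, assuming a binary logit estimated by maximum likelihood (the data and the attribute names x_a, x_b below are synthetic, purely for illustration):

    # Sketch: binary logit with an ASC for alternative A, synthetic data
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(42)
    n = 500
    x_a = rng.normal(size=n)                      # attribute level, alternative A
    x_b = rng.normal(size=n)                      # attribute level, alternative B
    # True data-generating process: no ASC, common attribute coefficient of 1.0
    p_true = 1 / (1 + np.exp(-(x_a - x_b)))
    y = (rng.uniform(size=n) < p_true).astype(float)   # 1 = respondent chose A

    def neg_ll(theta):
        asc_a, beta = theta
        v = asc_a + beta * (x_a - x_b)            # V_A - V_B
        p_a = 1 / (1 + np.exp(-v))
        return -np.sum(y * np.log(p_a) + (1 - y) * np.log(1 - p_a))

    res = minimize(neg_ll, x0=np.zeros(2), method="BFGS")
    se = np.sqrt(np.diag(res.hess_inv))           # approximate standard errors
    print("ASC_A = %.3f (t = %.2f)" % (res.x[0], res.x[0] / se[0]))

Here the true ASC is zero by construction, so the t-ratio should be small; on real unlabelled data it often is not, which is exactly the point above.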

Re the probabilities: early work on efficient designs promoted things such as utility balance and minimal overlap (e.g. Huber and Zwerina 1996). Whilst these papers were pioneering in their day, we now know that these types of constraints only constrain efficiency, and of all the things you might want to avoid, utility balance is probably the biggest one when it comes to statistical efficiency. Indeed, a D-efficient design will attempt to produce 0.7/0.3 probabilities in binary logit models (see Kanninen 2002). You also don't want dominant alternatives: these imply infinite scale in that task, which can, in econometric terms, lead to what is called model separation. As per the above, 0.7/0.3 seems to be a sweet spot.
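To see why utility balance hurts, here is a small numerical sketch (my own illustration; the prior coefficients and attribute-difference levels are made up). If every task is utility balanced under the prior, every attribute-difference row lies on the hyperplane where the utility difference is zero, and the Fisher information matrix of the binary logit is singular; a mildly imbalanced design with probabilities near 0.7/0.3 remains estimable:

    import numpy as np

    beta = np.array([1.0, -1.0])                  # assumed prior coefficients

    def d_error(X, beta):
        # Rows of X are attribute differences (alt A minus alt B) per task
        p = 1 / (1 + np.exp(-(X @ beta)))
        info = (X * (p * (1 - p))[:, None]).T @ X   # Fisher information
        return np.linalg.det(np.linalg.inv(info)) ** (1 / len(beta))

    # Utility balanced: every row satisfies beta'x = 0, so V_A - V_B = 0
    X_bal = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [0.5, 0.5]])
    # Mildly imbalanced: choice probabilities of roughly 0.69/0.31
    X_imb = np.array([[1.0, 0.2], [-0.2, -1.0], [0.5, -0.3], [-0.4, 0.4]])

    for name, X in [("balanced", X_bal), ("imbalanced", X_imb)]:
        p = np.round(1 / (1 + np.exp(-(X @ beta))), 2)
        try:
            print(name, "probs:", p, "D-error: %.3f" % d_error(X, beta))
        except np.linalg.LinAlgError:
            print(name, "probs:", p, "D-error: infinite (singular information)")

The balanced design cannot identify the parameters at all under the prior, whereas the 0.7/0.3 design has a finite D-error.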

Do designs cause bias? There are two answers to this: yes and no!

The argument for no: asymptotically, one should be able to retrieve the population parameter estimates irrespective of the design. Many famous people actually use random designs. You also have RP data, on which we estimate the same models but which nobody ever seems to comment on, although the same issues must exist for such data. In this respect, the work of McFadden (1974), which everyone cites but few seem to have read, suggests that even in finite samples the model will retrieve the parameters (the paper is quite extensive and covers a lot of material!). So orthogonal versus efficient design should not impact the estimates (after accounting for scale); the impact should only be on the standard errors (this is mostly what Michiel and I found in our 2011 paper, where we empirically compared several design types).

The argument for yes: there is evidence for what are known as design artefacts. This is not specific to SP experiments, but relates to any survey really. In SP terms, it might be possible that a design induces certain types of behaviour. In one example, I was interested in dating choices. I used a Street and Burgess design for a pilot amongst my (much younger) colleagues and found some funny results. One of the attributes was whether the potential partner was a single parent (or not). Given the design type, one alternative in the design was always a single parent and the second never was. It turns out that (at least amongst my colleagues) it would be acceptable to date an axe-murdering, chain-smoking, alcoholic, neo-Nazi psychopath, as long as they don't have kids. Because there was no attribute-level overlap, in my small sample respondents chose only on this attribute, and the design allowed them to do so.

Is this a problem? Again, yes and no. It might be that if I scaled the sample up to the population, I would have found the same preferences, in which case any design should have found the same result. Or it might be that I had a biased pilot sample (young, single, and obviously not too discerning about who they are prepared to date), but that over the population there would be preferences for other attributes that this design (indeed any design) would pick up, given that the estimates are population averages anyway.

Hope this helps.

John

Re: Efficient design and ASC

Postby jveld » Wed Jul 09, 2014 5:05 pm

Hi John,

Thanks for your reply, this helps indeed!

What I meant with respect to the generic constant is that I would like to include a single constant for options A and B together, as opposed to the opt-out. Since all my attributes are effects-coded, this constant will then tell me the preference of the population for engaging in my program versus the opt-out (if I’m right on this point).
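In Biogeme-style syntax, the specification I have in mind would look roughly like this (only a sketch; the attribute and parameter names are mine, and the exact API depends on your Biogeme version):

    from biogeme.expressions import Beta, Variable

    asc_prog = Beta('asc_prog', 0, None, None, 0)   # shared by options A and B
    b_attr = Beta('b_attr', 0, None, None, 0)       # generic attribute coefficient

    x_a = Variable('x_a')                           # effects-coded attribute, option A
    x_b = Variable('x_b')                           # effects-coded attribute, option B

    V_A = asc_prog + b_attr * x_a
    V_B = asc_prog + b_attr * x_b
    V_OPT = 0                                       # opt-out normalised to zero

With effects coding the attribute terms are zero at the average levels, so asc_prog can be read as the mean preference for taking part in the program relative to opting out.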

Jorien

Re: Efficient design and ASC

Postby johnr » Thu Jul 10, 2014 9:25 am

Hi Jorien

Sorry, it wasn't clear to me that there was an SQ alternative involved. In that case the answer is simple: you can test whether the ASCs are statistically equal to each other or not; if they are, treat them as generic, else leave them as alternative specific. You can do this by estimating the two models and conducting a log-likelihood (LL) ratio test, by using a Wald test for the restriction, or by doing a simple t-test on the model with the constants treated as alternative specific, using the parameter variances and covariances from the model's variance-covariance (VC) matrix (Biogeme computes these tests automatically for you if you are using that). Again, the ASCs are picking up all sorts of survey artifacts, and hence I always argue that you should not assume what you can test for.
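For concreteness, the two tests look like this (the estimates, (co)variances, and log-likelihoods below are made up; in practice they come from your own model output):

    import numpy as np
    from scipy.stats import norm, chi2

    # Hypothetical estimates of the two alternative-specific constants
    b_a, b_b = 0.42, 0.31
    # Hypothetical entries from the variance-covariance (VC) matrix
    var_a, var_b, cov_ab = 0.010, 0.012, 0.004

    # t-test for H0: ASC_A = ASC_B
    t = (b_a - b_b) / np.sqrt(var_a + var_b - 2 * cov_ab)
    print("t = %.2f, p = %.3f" % (t, 2 * (1 - norm.cdf(abs(t)))))

    # LL ratio test: unrestricted (alternative specific) vs restricted (generic)
    LL_u, LL_r = -1012.4, -1013.1                 # hypothetical log-likelihoods
    LR = -2 * (LL_r - LL_u)                       # chi-squared, 1 restriction
    print("LR = %.2f, p = %.3f" % (LR, 1 - chi2.cdf(LR, df=1)))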

You don't want to simply treat them as generic without such a test. In doing so, you are assuming that the average error between the two is the same, which, if there is any left-to-right bias, may not be the case. This would then potentially bias the rest of your parameters. I have problems with the usual approach of simply placing a constant on the SQ alternative (equivalent to a generic non-SQ constant; this is common in the environmental economics literature, for example) without conducting such tests.

John

