Interactions and experimental design

This forum is for posts covering broader stated choice experimental design issues.

Moderators: Andrew Collins, Michiel Bliemer, johnr

Interactions and experimental design

Postby davidj » Mon Sep 07, 2015 6:53 pm

Hi Ngene Team,

I wanted to get your advice on developing experimental designs to accommodate interaction effects.

I am hoping to develop an experimental design that will accommodate the estimation of a range of interaction effects (e.g. age, gender, income), and that will also be suitable for estimating ICLV models.

My design consists of 5 attributes, each with three levels, and two alternatives. Consequently I am deciding between the following:

1. Completing a Bayesian efficient design with main effects
Pros: efficient design with a smaller sample size and more reliable parameters, even with some misspecification
Cons: interactions/covariates not included in the design, which may impact reliability

2. Completing a fractional factorial design with a foldover
Pros: accounts for two-way interactions
Cons: increased sample size required to achieve reliable parameters


I am aware that the literature states that efficient designs outperform orthogonal designs, even when priors are misspecified, as well as reducing the required sample size. I am also aware that interactions can be included in efficient designs, e.g. through simulation or segmented designs as per the Ngene manual.

However, given that I want to test so many interaction effects, I am leaning towards a fractional factorial design. This method also seems most prominent in the literature.

How do you usually handle such situations and what would you recommend?

Thank you so much for the assistance. I look forward to your feedback.

David
davidj
 
Posts: 14
Joined: Tue May 27, 2014 3:21 pm

Re: Interactions and experimental design

Postby johnr » Tue Sep 08, 2015 11:57 am

Hi David

The issue of interaction effects is an interesting one; however, as with most things design related, there is a lot of miscommunication in the literature on the subject. I will try to answer this systematically, but forgive me if I digress at times.

1. The language of main effects + interaction effect designs is a hangover from the literature dealing with linear models, and does not translate neatly to non-linear models. Consider the following design, which results in a zero correlation structure for the main and interaction effects in the traditional sense:

Design
;alts = alt1, alt2
;rows = 8
;orth = sim
;foldover
;model:
U(alt1) = b1 * A[-1,1] + b2 * B[-1,1] + b3 * A*B /
U(alt2) = b1 * A + b2 * B + b3 * A*B $

a. I want you to generate the design and copy and paste it into excel (I don't know how to post images, so unfortunately I will have to describe the process I want you to follow. Hopefully it will translate - if you email me directly, I will send you the actual files also).
b. Create the two interaction columns (i.e., for the first and second alternatives).
c. Calculate the correlation structure of the main and interaction effects just to confirm that they are identified (according to traditional linear land thinking).
d. Now, for the main and interaction effects, compute X'X, where X is the design (both main and interaction effects). This can be done using the matrix multiplication function in Excel (mmult()) and the matrix transposition function (transpose()). You need to first select a block of cells 6*6 in size, type the formula (which will appear in the first cell), and press shift+ctrl+enter simultaneously. My formula looks like this: =MMULT(TRANSPOSE(B2:G17),B2:G17), where the design (main effects and interactions) is located in cells B2:G17.
e. You should obtain a matrix, 6*6 in size, where the elements of the leading diagonal are all 16, and the off-diagonals are all zeros. I placed this matrix in cells B29:G34.
f. Now we want to take the inverse of this matrix. Select another 6*6 set of blank cells, and use the minverse() function, where the cells obtained in e are referenced in the brackets. In my spreadsheet, I used =MINVERSE(B29:G34).
g. You should obtain a matrix, 6*6 in size, where the elements of the leading diagonal are all 0.0625, and the off-diagonals are all zeros.
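
If you prefer to check steps d-g outside Excel, here is a minimal NumPy sketch. The 16-run, six-column design below is constructed from a 2^4 full factorial purely for illustration - it is not necessarily the design Ngene returns - but it has the same orthogonality property:

import itertools
import numpy as np

runs = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 runs of a 2^4 factorial
f1, f2, f3, f4 = runs.T
X = np.column_stack([f1, f2, f1 * f2,     # alt1: A, B, A*B
                     f3, f4, f3 * f4])    # alt2: A, B, A*B

xtx = X.T @ X                  # step d: X'X -> 16s on the diagonal, 0s elsewhere
xtx_inv = np.linalg.inv(xtx)   # step f: (X'X)^-1 -> 0.0625 on the diagonal

print(np.diag(xtx))            # [16 16 16 16 16 16]
print(np.diag(xtx_inv))        # [0.0625 0.0625 ...]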

So what did we do that for? In linear (regression) land, the AVC matrix of a design is sigma*(X'X)^-1. Sigma is a scalar that scales all elements of (X'X)^-1 equally, so we will ignore it to simplify what I am trying to say and examine (X'X)^-1 only. Step g. therefore gives you the non-scaled version of the AVC matrix you would obtain if you used this design in a linear regression model. If you examine it, you get small values on the leading diagonal (0.0625) and zero values on the off-diagonals. These are the parameter variances (the square roots of which are the standard errors) and the parameter covariances. So the design I generated not only has a good correlation structure (if you assume zero correlations is good), but this translates through to the model in terms of producing very low standard errors (maximising the t-ratios) and zero parameter covariances. Note, as per my previous posts, we don't care about the correlation structure of X (at least we shouldn't), as the aim is to estimate a model based on the design (so we care about the properties of the model), and this example shows why, for linear models, you want an orthogonal design (or if not orthogonal, a design with zero correlations). The parameters of the model will not be correlated (zero covariances), hence the influence of B1 on Y is independent of B2.

As an aside, since the off-diagonals are all zero, multiplying them by sigma has no influence. Hence sigma, which relates to Y and the betas (econometrics 101), will affect the standard errors only.

2. Now for the complicated part. We are now going to simulate choice data using the same design and see how it goes in practice. I will use Nlogit (you can use whatever you want for this), so I will explain how to set this up in Nlogit format. Note that other software may use other formats, so translate this to whatever is appropriate for the estimation software you are going to use.

a. First of all, each row in Nlogit is an alternative, not a choice task (multiple rows make up a choice task). The design output from Ngene is different in that each row is a choice task. Hence, you need to reformat the design (with the interaction terms) so that it looks like the table below (I will explain everything in detail further down). For the moment, you will see that Altij = {1,2}, where Altij = 1 is the first alternative and Altij = 2 is the second. The attributes (main and interaction effects) simply sit next to the relevant alternative. I have created another variable called Cset which is always equal to 2. This tells Nlogit how many alternatives each choice task has (which is fixed at two in this design). The Resp variable is simply a respondent index (I am assuming each (fold-over) block is assigned to a different respondent).

Resp Block Cset Altij Choice Att1 Att2 Int
1 1 2 1 1 -1 -1 1
1 1 2 2 0 -1 1 -1
1 1 2 1 0 -1 1 -1
1 1 2 2 1 1 -1 -1
1 1 2 1 1 1 -1 -1
1 1 2 2 0 1 -1 -1
1 1 2 1 0 1 1 1
1 1 2 2 1 -1 1 -1
1 1 2 1 0 1 1 1
1 1 2 2 1 -1 -1 1
1 1 2 1 0 1 -1 -1
1 1 2 2 1 1 1 1
1 1 2 1 1 -1 1 -1
1 1 2 2 0 1 1 1
1 1 2 1 0 -1 -1 1
1 1 2 2 1 -1 -1 1
2 2 2 1 1 1 1 1
2 2 2 2 0 1 -1 -1
2 2 2 1 0 1 -1 -1
2 2 2 2 1 -1 1 -1
2 2 2 1 1 -1 1 -1
2 2 2 2 0 -1 1 -1
2 2 2 1 1 -1 -1 1
2 2 2 2 0 1 -1 -1
2 2 2 1 1 -1 -1 1
2 2 2 2 0 1 1 1
2 2 2 1 0 -1 1 -1
2 2 2 2 1 -1 -1 1
2 2 2 1 0 1 -1 -1
2 2 2 2 1 -1 -1 1
2 2 2 1 0 1 1 1
2 2 2 2 1 1 1 1

b. The choice variable is a little more complicated to construct. The assumption is that people will choose the alternative that maximises their utility, where Unsj = Vnsj + Ensj, Vnsj = beta*Xnsj, and Ensj is IID EV1 distributed (n = respondent, s = choice task and j = alternative). So we need to construct Vnsj and Ensj first. Let us assume the following utility function:

Vnsj = -0.5*X1 - 0.6*X2 + 0.8*int(eraction)

such that, for example, V111 for the first alternative will be V111 = -0.5*-1 - 0.6*-1 + 0.8*1 = 1.9 and V112 = -0.5*-1 - 0.6*1 + 0.8*-1 = -0.9. You can work out the values for the remaining Vnsj of the design.

Now we need to construct the Ensj values, which are IID EV1 distributed. These can be simulated using the equation =-LN(-LN(RAND())), where LN is the natural log and RAND() is a uniform random value. This simulates a random draw from the IID EV1 error term. Each alternative gets its own randomly drawn value, so you can copy the equation down for each alternative and choice task.

Now you have both Vnsj and Ensj for each alternative and choice task, and can compute Unsj by summing the two together. Under utility maximisation, respondents will choose the alternative that maximises their utility. I constructed the choice variable above using a simple if statement such that choice = 1 if Unsj > Unsi, or zero otherwise.

c. If you have been able to follow the above, you should have choice data for 2 respondents. Create 100 respondents by copying the data 50 times, stacked under each other (recall that the design is blocked into 2, so 50 * 2 = 100 respondents). Make sure you do not paste special, as you still want the random draws from the error terms to vary over respondents, choice tasks and alternatives. I would save this, lest you lose the thing.
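
(For anyone working outside Excel, the whole of step 2 can be sketched in a few lines of Python/NumPy. Only the first two choice tasks from the table above are listed in the array - fill in the rest yourself - and the variable names are illustrative only.)

import numpy as np

rng = np.random.default_rng(12345)
betas = np.array([-0.5, -0.6, 0.8])             # assumed true parameters

# one choice task per outer row: (task, alternative, [Att1, Att2, Int])
base_tasks = np.array([
    [[-1, -1,  1], [-1,  1, -1]],               # choice task 1
    [[-1,  1, -1], [ 1, -1, -1]],               # choice task 2
    # ... remaining choice tasks of the (folded-over) design ...
])

n_resp = 100
tasks = np.tile(base_tasks, (n_resp, 1, 1, 1))  # replicate the design over respondents
V = tasks @ betas                               # systematic utility Vnsj
E = -np.log(-np.log(rng.uniform(size=V.shape))) # IID EV1 draws, i.e. =-LN(-LN(RAND()))
U = V + E                                       # Unsj = Vnsj + Ensj
choice = (U == U.max(axis=-1, keepdims=True)).astype(int)  # 1 for the utility-maximising alternative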

3. Open another Excel spreadsheet, and copy and paste special (values) the simulated data for the 100 respondents into the new spreadsheet. Pasting as values fixes the simulated choices; otherwise the RAND() draws will recalculate every time you do something. Save this file somewhere as a CSV file.

4. Open Nlogit (or whatever software you choose) and open the CSV data. Estimate the following model (you can do alt specific too if you want, but I am trying to keep this simple)

nlogit
;lhs=choice,cset,Altij
;choices=A,B
;model:
U(A) = b1*Att1 + b2*Att2 + B3*Int /
U(B) = b1*Att1 + b2*Att2 + B3*Int $

for my data I got b1 = -0.48066, b2 = -0.62534 and b3 = 0.77920 (I assumed -0.5, -0.6 and 0.8 when I generated the data, so pretty damn close). Note you will get different values, given that you took different random draws (different draws from the rand() function) for the error term, but they should be close to the parameters you assumed when generating the data.

5. Now here's the kicker. Open the AVC matrix for the model you estimated (it can be found in the project bar under the Matrices folder and is called VARB). This is what I got:

0.00486822 0.00176692 -0.00187315
0.00176692 0.00502271 -0.00208152
-0.00187315 -0.00208152 0.00536592

Now, what do we see when we look at the off-diagonal elements? I see non-zero values, which may look small, but recall that these are for 100 respondents. If you multiply them by sqrt(50) it will give you a better idea of what is happening (the AVC matrix scales with the number of design replications N, not respondents - even though we have 100 respondents, the design is replicated only 50 times - and multiplying the elements of the AVC matrix by sqrt(50) normalises it towards N = 1, which is equivalent to what I assumed when I was working in linear land above, so I can compare apples to apples). Now I get

0.034423514 0.012494011 -0.013245171
0.012494011 0.035515923 -0.014718569
-0.013245171 -0.014718569 0.037942784
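
(As an aside, converting the reported VARB into a parameter correlation matrix makes the non-zero covariances easier to read - a quick NumPy sketch using the first matrix above:)

import numpy as np

varb = np.array([[ 0.00486822,  0.00176692, -0.00187315],
                 [ 0.00176692,  0.00502271, -0.00208152],
                 [-0.00187315, -0.00208152,  0.00536592]])

se = np.sqrt(np.diag(varb))        # parameter standard errors
corr = varb / np.outer(se, se)     # parameter correlation matrix
print(np.round(corr, 3))           # off-diagonals around 0.3-0.4 in absolute value, not zero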

Now, I will give you a moment to appreciate what you have just done.

Let's summarise this... You took a "main effects plus two-way interaction effects design" and showed that the parameters are correlated, which means that it is not a "main effects and two-way interaction design", as main effects and two-way interaction designs should have zero covariances. Or did you mean you took a "main effects plus two-way interaction effects design" generated for a linear model, applied it to a non-linear model, and assumed it would translate, but obviously it didn't? In which case,

a) should we call it a "main effects plus two way interaction effects design" if, when applied to the model you estimated, it is not doing what it should, or alternatively,
b) when generating such a design, call it something else (in case you may not have noticed, if I were forced to choose between a and b, I would pick b).

So let's go back to basics... the AVC matrices of non-linear models are dependent on the betas (repeat the above with different priors if you don't believe me yet - simply replace the betas I assumed when I generated the data). Design theory (in both linear and non-linear land) is about minimising the elements of the AVC matrix of the design (for both main effects and interaction effects) - recall, we care about the model results, not the design. If the betas were all zero, you would have found in the above that the covariances are equal to zero - again, try it if you don't believe me - simply set the priors/betas = 0 in the simulation task.

Ergo, when people talk about main effects plus interaction designs as they do, they are really either

1. talking about linear models, or
2. talking about designs for non-linear models, generated under the assumption of an MNL model with zero priors (all the betas are zero), optimised for D-error

I presume we are not talking about linear models, otherwise I am on the wrong forum, in which case I apologise profusely for wasting everyone's time (Ngene deals with non-linear choice models only at the moment). The point is that if you want to discuss design generation for choice models, you should be talking about the assumptions under which you generated the design, and then perhaps the properties of the resulting design - not starting with the properties of the design while ignoring (or pretending away) the assumptions under which it was generated (whether you knew you were making these assumptions is a moot point; you were making them regardless).

So when you say "completing a fractional factorial design with foldover inclusion", I read: you are generating a design for an MNL model under the assumption of local priors equal to zero, using D-error as your optimality criterion, and using tricks developed for linear models (the foldover etc.) as your algorithm to locate the design (i.e., searching only amongst orthogonal designs (a constraint) and using the foldover). If these are your assumptions, I am fine with it, but what I want to eradicate completely from the choice modelling literature is linear land language and thinking when generating, and more importantly writing about, designs. What I am calling for is a change in dialogue - we need to talk about and discuss the assumptions used when generating designs for choice models and treat designs as outputs of these assumptions, not inputs, which is how the literature (myself included in the past) has tended to treat them.

What I don't understand, however, is the comment that Bayesian efficient designs don't include interaction terms. Why do they not? As I said above, what you called a "fractional factorial design with foldover inclusion" is actually a D-efficient design assuming an MNL model under the assumption of zero priors - it is an EFFICIENT DESIGN - just efficient under a particular set of assumptions. So why is it special? The fact is that it isn't - the zero priors assumption is just an assumption, which could be equally as valid as assuming other, non-zero priors.

Indeed, you can assume priors for interaction terms in Bayesian designs too. You can set them at zero if you just want to ensure that they can be estimated (that is, the relevant combinations will occur over the design at a minimum), use non-zero local priors (if you know the direction), or use weak priors (uniforms either side of zero).

Conclusion: I am not anti option 2 in your post; however, I am very anti the language used when applied to the discrete choice literature (not just by you but by others, so please don't take it personally). Until we start talking assumptions -> designs, rather than starting with designs that are assumed to be (but in non-linear land are not) assumption free, I'm afraid the literature will simply fail to move forward. We need to stop with the orthogonal versus efficient design debate NOW, as orthogonal designs are efficient designs (MNL, zero local priors, D-error). The debate we need to be having is about the appropriate assumptions that make sense for discrete choice models (the most important debate that is not being had in the literature), but we can only have that debate when people stop thinking in linear-land terms. Until then, we are stuck in no-(wo)man's land.

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

Re: Interactions and experimental design

Postby davidj » Sun Sep 13, 2015 4:36 pm

Hi John,

Firstly, thank you for the feedback. The support on this forum is fantastic, and you certainly provide interesting and thought-provoking discussion. I also agree that there is much confusion and many conflicting arguments in the literature.

My main reason for using the fractional factorial (orthogonal) design was the assumption that I have little to no information on the priors for the interaction effects. The final analysis is also complicated, as I want to:

• Test a range of models (MNL, MMNL and nested logit) following data collection.
• Test a range of interactions (10+ socio-demographic factors) and also two-way interactions between attributes
• Complete segmented models (3+ potential groups)
• Include latent variables in the analysis, possibly through the sequential approach

I recognise the flaws in this process, as I should be optimising the design for the final model. However, given the range of uncertainty, my thought was that a fractional factorial design, with its assumption of no prior information, would be the best way forward.

At the same time, I recognise that another factor to consider is the required sample size. There is again much debate in the literature as to the required sample size, without a concrete method in use. For example, Lancsar and Louviere, in Conducting Discrete Choice Experiments to Inform Healthcare Decision Making, stated that “our empirical experience is that one rarely requires more than 20 respondents per version to estimate reliable models.” Whilst, in your own paper “Sample size requirements for stated choice experiments”, you outline other methods for estimating sample size requirements, such as SRS and rules of thumb.


So my questions really are:

1. Given the level of uncertainty and the assumption of no prior information, is it reasonable to complete a fractional factorial design using a foldover in this instance?
2. If I wanted to complete an efficient design and include all possible interactions, including all possible two-way interactions, how would the design look in Ngene? My thought is that I would only include the main attributes (recognising the limitation of not including interactions), and the design would appear something like:

Design
;alts = cv, AT1*, AT2*
;rows = 24
;eff = (mnl,d, mean)
;bdraws = halton(1000)
;model:

U(cv) = b2[(U, -10.01,0)] * A.ref[3000] + b3[(U, -0.01,0)] * B.ref[20] + b4[(U,0,0.001)] * C.ref[850] + b5[(U, -0.001,0)] * D.ref[180] + b6[(U, 0,0.01)] * E.ref[10] /
U(AT1) = b7 + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + b4 * C.piv[-85%,-70%,-55%] + b5 * D.piv[-100%,-90%,-80%] + b6 * E.piv[0%,10%,20%]/
U(AT2) = b8 + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + b4 * C.piv[-85%,-70%,-55%] + b5 * D.piv[-100%,-90%,-80%] + b6 * E.piv[0%,10%,20%]$

Whereas if I were completing a fractional factorial design, it would look like:

Design
;alts = AT1*, AT2*
;rows = 36
;orth = sim
;eff = (mnl,d)
;block = 8
;foldover
;model:

U(AT1) = b2 * A[35,40,50,60] + b3 * B[16,12,8] + b4 * C[127.5,255,382.6] + b5 * D[0,18,36] + b6 * E[8,9,10]/
U(AT2) = b8 + b2 * A[35,40,50,60] + b3 * B[16,12,8] + b4 * C[127.5,255,382.6] + b5 * D[0,18,36] + b6 * E[8,9,10]$

3. If I do complete the fractional factorial design, can I then use the sample size ‘rules of thumb’, i.e. 20 respondents per block?

Thanks again for all your assistance, and I am sure I speak on behalf of everyone on the forum in saying we would be lost without the support.

Cheers

David
davidj
 
Posts: 14
Joined: Tue May 27, 2014 3:21 pm

Re: Interactions and experimental design

Postby johnr » Tue Sep 15, 2015 10:27 am

Hi David

Please don't read my post as being negative about your approach. It wasn't meant to be. I'm simply trying to tidy up the language that is being thrown about. My argument is simple - rather than starting with the desire to generate and/or use an orthogonal design (or any other design), we should be starting with the assumptions (our beliefs about the world/design problem) and working from there. Personally, I would either have to be having a very bad day to argue with someone about the assumptions they made when generating their design, or the assumptions would have to be patently wrong. In terms of priors, I don't think that one can ever be correct, so I have no problem with someone arguing that they have assumed zero priors. Hence, by logical extension, I have no problem with the use of orthogonal designs (or designs with zero attribute correlations). All I am asking is that we as a community start by acknowledging the assumptions we are making and treat the design as an output of those assumptions, not an input into the design process.

Now, in terms of your specific question, if you are assuming zero priors for an MNL model under D-error, then the output design will likely be orthogonal or near orthogonal, so there are no issues there, at least for me. However, under one interpretation, one could argue that this is not the same - as has been argued in the past, by myself no less - as assuming that you have no information about the parameters. Indeed, under this argument, you are clearly assuming that the population parameters take one very particular value, namely zero. One could argue that you are not imposing a particular sign on the parameter, as you would be if assuming the parameter were -1 or 1, say; however, at the end of the day, you are still assuming the parameters take one particular value over the population - albeit zero. Under this argument, the design is optimised for zero priors: it will be better suited to finding statistical significance for parameters that are close to zero, and less well suited if they are not close to zero. Taking this line of argument further, strictly speaking, only Bayesian efficient designs truly reflect uncertainty about the true population parameters, because they are optimised over a range of priors rather than one single value (if you read the original papers on Bayesian efficient designs, they were introduced for the specific purpose of representing analyst uncertainty about the true population parameters). So I guess what I am saying is that if you wish to assume zero priors, do so, own it, and wear the badge with pride - but understand the argument you are really making in doing so.

And as you say, at the end of the day it really boils down to sample size, as you can, to some point, trade sample size for precision, as you suggest. However, there need not be any debate on the issue; it is very clear what the sample size calculation is. We provide the theoretical minimum sample size calculation in Ngene, and it is not based on experience, pseudo science, witch doctors or visits to your local oracle. It is based on the mathematics provided to us by none other than the great Dan McFadden in his seminal 1974 paper, which everyone cites but, as seems to be the case in this day and age, being more than 5 years old, nobody actually reads. The S-error is the 'theoretical minimum' sample size, as it is based on the betas (priors). It requires knowledge of the betas in its calculation; however - here is the kicker - whilst we compute the S-error when generating designs, it carries through to the modelling once you get real data. It is not a design issue or calculation; we have simply applied it to designs. What it basically says is that statistical significance is a fickle beast. Just because we found something was not statistically significant, perhaps if we had added just 2 more respondents, it might have been!
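
To make the logic concrete, here is a minimal sketch of the S-estimate calculation. The numbers are purely illustrative; in practice the single-replication standard error comes from the design's AVC matrix, the calculation is done for every parameter, and the largest result is the one that matters:

import math

def s_estimate(beta, se_one_rep, t_crit=1.96):
    """Theoretical minimum number of design replications for parameter beta
    to reach |t| >= t_crit, given its standard error for one replication."""
    return (t_crit * se_one_rep / beta) ** 2

print(math.ceil(s_estimate(beta=-0.5, se_one_rep=3.0)))   # 139 replications in this made-up case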

So back to your question. The only concern I would have is your reference to rules of thumb for sample size calculations. In Hensher, Rose and Greene (2005), we provide an alternative rule of thumb for RP data (yep, I was dumb back then).

"Experience suggests that the best strategy for sampling is choice-based sampling with minimum sample sizes of 50 decision makers choosing each alternative. For example, two alternatives suggest a minimum sample size of 100 decision makers; three alternatives a minimum of 150 decision makers and so forth. Sampling on the chosen alternative to conform to a minimum quota of 50 decision makers per alternative involves a non-random sampling process which in turn means that the sample is unlikely to represent the true population market shares."

Why is RP different from SP in terms of sample size? The same econometric models are being estimated on both, right? The same principles should therefore apply to both, so why not 50 respondents choosing each alternative in SP? The reason is that the above rule of thumb is patently stupid (yes, I just called myself stupid), even for RP data. Why not 49 respondents, or perhaps 51? The fact of the matter is that the sample size theoretically impacts on the standard errors of the econometric model being estimated, so when someone says N = 20 or N = 50 is good, the question is: good based on what criteria? The answer inevitably is experience, but experience is not a criterion, and it is subjective, not objective. N = 20 might be too few, or too many. I once helped a PhD student who did a pilot using an orthogonal design, and for N = 10, every parameter was statistically significant and of the right sign and magnitude. N = 20 would actually have been overkill in terms of statistical significance. On the other hand, could I trust a model with such a small sample size to be generalisable to the population? That is the missing question.

Both sets of syntax you provide are fine. Both will work; the real question is which mimics your reality better. That I cannot answer. A non-answer, I know, but the only answer I have.

John




PS: From personal experience, designs that are generated for MNL typically perform well for MMNL models, particularly as the number of choice tasks shown to each respondent increases, assuming a panel specification is used.
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

Re: Interactions and experimental design

Postby davidj » Thu Sep 17, 2015 10:49 am

Thanks John again for the feedback and discussion. It is certainly an interesting and thought provoking topic.

Following on from the points you raised, I have a few follow up questions.


1. Determination of sample size for orthogonal designs:

How do you think the literature has got away with using rules of thumb for sample size for so long?

Given that the calculation of sample size (the S-estimate, calculated as N = [1.96 x SE(Bk) / Bk]^2) requires the input of prior parameters, there is no way of determining the sample size for orthogonal designs without priors.

In the literature it appears that only rules of thumb are used? Is that correct and ‘accepted’? Or how else do ‘experts’ get around this for orthogonal designs? SRS and ESRS strategies have also been mentioned previously.


2. Optimising fractional factorial designs using d-error:

If you do need to complete a fractional factorial design, do you recommend optimising it based on D-error? To me this makes sense, as opposed to choosing random choice sets.

Also, if I were including the foldover condition, does the ‘eff’ property take this into account?

i.e.:
Design
;alts = AT1*, AT2*
;rows = 36
;orth = sim
;eff = (mnl,d)
;block = 8
;foldover
;model:

U(AT1) = b2 * A[35,40,50,60] + b3 * B[16,12,8] + b4 * C[127.5,255,382.6] /
U(AT2) = b8 + b2 * A[35,40,50,60] + b3 * B[16,12,8] + b4 * C[127.5,255,382.6] $


3. Main effects only with multiple interactions:

If you were completing a Bayesian efficient design with main effects only (no interactions) in the design, but wanted to estimate multiple interactions in the model, would you recommend increasing the sample size to compensate for the potential misspecification in the prior values?

For example, if the S-estimate is 250 for the main effects design, increase this to say 500 to account for the misspecification:

Design
;alts = cv, AT1*, AT2*
;rows = 24
;eff = (mnl,d, mean)
;bdraws = halton(1000)
;model:

U(cv) = b2[(U, -0.01,0)] * A.ref[300] + b3[(U, -0.01,0)] * B.ref[20] /
U(AT1) = b7[(U, 0,0.001)] + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] /
U(AT2) = b8[(U, 0,0.001)] + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] $

Thank you again for all the comments and feedback, I do appreciate it.

Kind regards

David
davidj
 
Posts: 14
Joined: Tue May 27, 2014 3:21 pm

Re: Interactions and experimental design

Postby johnr » Thu Sep 17, 2015 1:57 pm

Hi David

1. I'm guessing that it really hasn't been that much of an issue in the past. Design theory for DCMs was largely lifted from the literature using traditional conjoint and hence linear models, and it worked (and still does). I think many people are getting too tied up with generating the perfect efficient design, and as I have said at previous conferences, the fact that people were using orthogonal designs for near on 30 years does not mean that, since we have started looking at the mathematical properties of the logit model, the results of all those previous studies are now somehow wrong. It's a case of the "if it ain't broke (and it ain't), why fix it" syndrome that collectively occurred.

Anyway, the point is that in most cases, the sample sizes used in the past have probably been in excess of the sample required for statistical significance and hence, it has never really been an issue. Likewise, people probably never thought that if they collected a larger sample, then statistically insignificant parameters might have been found to be statistically significant. Once you have your sample, thinking about what-ifs is very rare and the modelling fever often kicks in.

Personally, I have certain things that I look for when generating a design that I believe may be more important than obtaining the lowest D-error possible. First and foremost is dominance, which can wreak havoc on logit-type models. In econometrics, dominant alternatives can lead to what is called model separation (if you have ever had a bad break up, it's sort of the same thing), or what the choice modelling community would refer to as scale issues (infinite scale basically, as the error variance for that choice task should theoretically be zero). I think of this, however, more in terms of an estimation problem - I prefer to think that the data is always right, but we are estimating the wrong model - if we have deterministic choices due to the design, why are we estimating a model that assumes stochastic error?
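
To illustrate what I mean by dominance, a simple screen of each choice task is easy to code up. This is only a sketch - the attribute values and signs are made up - and it assumes you only know the sign of each attribute's effect:

import numpy as np

def dominates(x1, x2, signs):
    """True if alternative 1 dominates alternative 2, given only the assumed
    sign of each attribute's effect on utility."""
    d = (x1 - x2) * signs          # positive entries favour alternative 1
    return bool(np.all(d >= 0) and np.any(d > 0))

# e.g. two attributes where less is better (cost, time): alt1 is cheaper AND faster
print(dominates(np.array([10, 20]), np.array([15, 25]), signs=np.array([-1, -1])))   # True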

2. Yes, and please note you can still impose the constraint that the X matrix be orthogonal and use non-zero (even Bayesian) priors. In your syntax you are assuming zero priors (again, I'm not questioning this, just stating it), but it need not be that way. My take is that if you want to use an orthogonal design, why not use the best one available rather than the very first one you find? Not all orthogonal designs are equal.

3. The S-error is an interesting one for Bayesian designs, because it is computed (in your case) as the mean S-error over the draws. Means are pulled towards (improbable but not necessarily impossible) outlier values, and hence the mean S-error may be upwardly (or downwardly) biased. You could use the maximum S-error if you wanted to (Ngene provides all the error measures by draw, so you can find the maximum value over the draws - be careful, however, as if you use too few draws this may be misleading). I typically use the median instead of the mean, but I'm one out of pretty much everyone else, so feel free to ignore that. And as a rule of thumb :D I would always treat the S-error as the theoretical minimum anyway. I would suggest that you look at your priors, however. For A.ref the level is 300, whilst for B.ref it is 20. Assuming the midpoint of your Bayesian draws, the beta would be -0.005 for both. If you multiply 300 by -0.005 you get -1.5, whereas for B.ref you get -0.005 * 20 = -0.1. Even though you are trying to capture the sign, you are also implying that A.ref has 15 times more impact (on average) upon utility than B.ref does. I'm not sure you wish to do this.

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

Re: Interactions and experimental design

Postby davidj » Thu Sep 17, 2015 5:04 pm

Thanks John!

At this stage I am going to go with a Bayesian efficient ‘main effects only’ design, using the S-error as a minimum for the sample size :D

My only reservation is not including the interactions (10+ demographics and two-way interactions) in the design while including them in the estimated utility specification and the final models, as I know this isn’t an ideal design approach.

But I guess that is life, nothing can be 100% perfect and cover every possible situation or unknowns.

Thanks again for the help.
davidj
 
Posts: 14
Joined: Tue May 27, 2014 3:21 pm

Re: Interactions and experimental design

Postby johnr » Thu Sep 17, 2015 5:48 pm

Hi David

Why not include them in the Bayesian design, ala

Design
;alts = cv, AT1*, AT2*
;rows = 24
;eff = (mnl,d, mean)
;bdraws = halton(1000)
;model:
U(cv) = b2[(U, -0.01,0)] * A.ref[300] + b3[(U, -0.01,0)] * B.ref[20] + B9[(U, -0.001,0.001)]*A*B /
U(AT1) = b7 [(U, 0,0.001)] + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + B9*A*B /
U(AT2) = b8[(U, 0,0.001)] + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + B9*A*B $

?

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am

Re: Interactions and experimental design

Postby davidj » Fri Sep 18, 2015 11:05 am

Hi John,

I agree that that would be an improvement. My concern is how to handle a greater number of attributes and a range of interactions (covariates, such as socio-demographics) that may or may not be significant.

For example, say I had 5 attributes and wanted to include 3 potential socio-demographic factors: age (under 30), income (high income earner), sex (male).


Design
;alts = cv, AT1*, AT2*
;rows = 24
;eff = (mnl,d, mean)
;bdraws = halton(1000)
;model:


U(cv) = b2[(U, -0.01,0)] * A.ref[300] + b3[(U, -0.01,0)] * B.ref[20] + b4[(U,0,0.001)] * C.ref[850] + b5[(U, -0.001,0)] * D.ref[180] + b6[(U, 0,0.01)] * E.ref[10] + B9[(U, -0.001,0.001)]*A*B + B10[(U, -0.001,0.001)]*A*C + B11[(U, -0.001,0.001)]*A*D + B12[(U, -0.001,0.001)]*A*E + B13[(U, -0.001,0.001)]*B*C + B14[(U, -0.001,0.001)]*B*D + B15[(U, -0.001,0.001)]*B*E + B16[(U, -0.001,0.001)]*C*D + B17[(U, -0.001,0.001)]*C*E + B18[(U, -0.001,0.001)]*D*E + B19[(U, -0.001,0.001)]* age[0,1] + B20[(U, -0.001,0.001)]* income[0,1] + B21[(U, -0.001,0.001)]* sex[0,1]/

U(AT1) = b7 + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + b4 * C.piv[-85%,-70%,-55%] + b5 * D.piv[-100%,-90%,-80%] + b6 * E.piv[0%,10%,20%] + B9*A*B + B10*A*C + B11*A*D + B12*A*E + B13*B*C + B14*B*D + B15*B*E + B16*C*D + B17*C*E + B18*D*E + B19* age[0,1] + B20* income[0,1] + B21* sex[0,1]/

U(AT2) = b8 + b2 * A.piv[10,20,30] + b3 * B.piv[-20%,-40%,-60%] + b4 * C.piv[-85%,-70%,-55%] + b5 * D.piv[-100%,-90%,-80%] + b6 * E.piv[0%,10%,20%] + B9*A*B + B10*A*C + B11*A*D + B12*A*E + B13*B*C + B14*B*D + B15*B*E + B16*C*D + B17*C*E + B18*D*E + B19* age[0,1] + B20* income[0,1] + B21* sex[0,1] $

The more variables you want to test, the more complicated the design gets. For example, say you then wanted to test whether the socio-demographic factors interacted with the attributes, i.e. attribute 'A' x age?

Given that I am not sure whether the covariates or interactions will be significant, my thought was to complete the Bayesian design for the main attributes only, to reduce complexity, and accept the loss of efficiency.

How do you usually handle such situations?

PS - I am really enjoying the discussion :D

Cheers

David
davidj
 
Posts: 14
Joined: Tue May 27, 2014 3:21 pm

Re: Interactions and experimental design

Postby johnr » Fri Sep 18, 2015 11:51 am

Hi David

I'm not sure I would worry about the covariates for a start, which may sound strange as the objective is to mimic the utility function as closely as possible; however, this is not done very often - in fact only once that I am aware of. Michiel and I did this for a conference paper and it is in Ngene, but I wouldn't do it the way you do in your syntax. Age and gender are constant across the alternatives (you don't change gender if you are looking at alternative cv and then start looking at AT1, for example - your gender is fixed - unless you are Miley Cyrus, in which case you might be gender fluid when undertaking the experiment). With your syntax, Ngene will treat gender and age as if they were attributes of the alternatives rather than characteristics of the respondents, and hence vary them over the choice tasks. If you check the manual, there is a means of including these variables, but it basically involves generating different sub-designs for each socio-demographic class. Again, Michiel and I worked out the theory to do this, and it is in Ngene, but I have never seen it actually used in practice (which pretty much summarises most of my research, actually :D ).

Secondly, the way to think about these types of problems is to optimise on your own concerns. Ideally you would want to put Bayesian priors on everything, as you should be uncertain about all the parameters (else why do the study). However, if this is going to be problematic, the next best thing is to place them on the parameters where misspecification is likely to be an issue (you can only determine this by generating a design, fixing that design, varying the parameters over a range of priors, and seeing which ones cause the greatest loss of efficiency). This is a lot of work. So the next best thing is to do it for the parameters that are most important to you - in this case, the main effects. You can place Bayesian priors on the main effects and fixed priors (zeros if you want) on the interaction effects (or vice versa).
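
If you do want to run that kind of check, the mechanics look roughly like the sketch below: fix the design, compute the MNL D-error under different parameter values, and see which ones move it the most. The toy design, priors and values are all illustrative:

import numpy as np

def mnl_d_error(X, beta):
    """D-error of a fixed design under MNL. X has shape (tasks, alternatives, attributes)."""
    K = X.shape[2]
    info = np.zeros((K, K))
    for Xs in X:                                   # Fisher information, one choice task at a time
        p = np.exp(Xs @ beta)
        p = p / p.sum()
        info += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs
    avc = np.linalg.inv(info)                      # asymptotic variance-covariance matrix
    return np.linalg.det(avc) ** (1.0 / K)

# toy fixed design: 4 choice tasks, 2 alternatives, attributes [A, B, A*B]
design = np.array([
    [[-1, -1,  1], [ 1, -1, -1]],
    [[-1,  1, -1], [-1, -1,  1]],
    [[ 1,  1,  1], [-1,  1, -1]],
    [[ 1, -1, -1], [ 1,  1,  1]],
], dtype=float)

for b3 in (-0.4, 0.0, 0.4):                        # vary one prior, hold the design fixed
    print(b3, round(mnl_d_error(design, np.array([-0.5, -0.6, b3])), 4))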

John
johnr
 
Posts: 171
Joined: Fri Mar 13, 2009 7:15 am
