**Moderators:** Andrew Collins, Michiel Bliemer, johnr

36 posts
• Page **2** of **4** • 1, **2**, 3, 4

Thank you so much for re-confirming, Michiel!

- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am

When putting the code like this:

Design

;alts = alt1*, alt2*, SQ*

;rows = 18

;block = 2

;eff = (mnl,d)

;alg = mfederov

;require:

SQ.amount = 3, SQ.period = 3

;reject:

alt1.amount = 3 AND alt1.period = 3,

alt2.amount = 3 AND alt2.period = 3

;model:

U(alt1) = b1[-0.0001] * contrib[1.8,2.5,3.2](6,6,6)

+ b2.dummy[0.0003|0.0002|0.0001] * amount[0,1,2,3] ? 0 = 300, 1 = 600, 2 = 900, 3 = unlimited amount

+ b3.dummy[0.0003|0.0002|0.0001] * period[0,1,2,3] ? 0 = 12, 1 = 42, 2 = 72, 3 = unlimited period

/

U(alt2) = b1 * contrib

+ b2 * amount

+ b3 * period

/

U(SQ) = b0[0]

+ b1 * contrib_sq[1.6]

+ b2 * amount

+ b3 * period

$

If, after estimation, parameter b0 (which appears only in U(SQ)) has a negative sign, does that indicate a general aversion against the status quo?
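To make the question concrete, here is a minimal sketch (illustrative utility values, not the priors from this thread) of how an alternative-specific constant shifts MNL choice probabilities: a negative b0 on the SQ alternative lowers its choice probability, all else equal.

```python
import numpy as np

def mnl_probs(utilities):
    """Multinomial logit choice probabilities from a vector of utilities."""
    expu = np.exp(utilities - np.max(utilities))  # subtract max for numerical stability
    return expu / expu.sum()

# Hypothetical attribute utilities for alt1, alt2 and the SQ alternative.
v_alt1, v_alt2, v_sq_attr = 0.5, 0.4, 0.3

# Same attributes, but with b0 = 0 versus b0 = -1.0 on the SQ alternative.
p_neutral = mnl_probs(np.array([v_alt1, v_alt2, v_sq_attr + 0.0]))
p_averse = mnl_probs(np.array([v_alt1, v_alt2, v_sq_attr - 1.0]))

# The SQ choice probability drops when b0 is negative.
print(p_neutral[2], p_averse[2])
```

A negative estimated b0 therefore means that, controlling for the attributes, the SQ alternative is chosen less often than the utility of its attribute levels alone would predict.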


- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am

The constant for the SQ alternative is often slightly positive because most people prefer alternatives that they are familiar with (and resist change), which is also referred to as inertia effects. If the SQ constant is negative, then that indicates that people are keen to change and try something new.

Note that there may also be order effects at play. If the SQ alternative is always shown on the right (as the third alternative), then it is often somewhat less chosen, since people typically read from left to right, and hence alternatives on the left are often chosen slightly more often than alternatives on the right. To account for order effects in an unlabelled experiment, one would add constants to all but one of the alternatives. In a labelled experiment one would also need to randomise the order of the alternatives to disentangle the label constant from the order constant. If the SQ alternative was always on the right in your experiment, and never on the left for some respondents, then it is no longer possible to disentangle the order constant from the SQ constant, so a negative sign may actually be the result of ordering effects (a design artefact).
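The confounding described above can be seen as a rank deficiency in the design matrix. A small sketch (hypothetical dummy coding, not an Ngene output): when the SQ alternative always sits in the third position, the "SQ constant" dummy and the "third position" dummy are identical columns, so the two constants cannot be separately estimated.

```python
import numpy as np

# Each row is one alternative in a choice task; the two columns are dummies:
# [SQ-alternative constant, shown-in-third-position order constant].
# If the SQ alternative is ALWAYS displayed third, both columns coincide
# in every task, so the design matrix is rank deficient.
always_third = np.array([[0, 0],      # alt1, position 1
                         [0, 0],      # alt2, position 2
                         [1, 1]] * 4) # SQ,   position 3, in every task
rank_fixed = np.linalg.matrix_rank(always_third)  # 1: perfectly confounded

# Randomising the SQ position across respondents breaks the confound:
mixed_positions = np.array([[0, 0], [0, 0], [1, 1],   # SQ shown third
                            [1, 0], [0, 0], [0, 1]])  # SQ shown first
rank_mixed = np.linalg.matrix_rank(mixed_positions)   # 2: both identifiable

print(rank_fixed, rank_mixed)
```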

Michiel


- Michiel Bliemer
**Posts:**1387**Joined:**Tue Mar 31, 2009 4:13 pm

Hi Michiel,

thank you for pointing that out.

Since I have an unlabelled experiment with the status quo always shown on the left, and a negative coefficient for the status quo constant, I suppose this is not due to any ordering effect: an ordering effect would make the status quo be chosen more often (a positive constant), not less often (a negative constant).

Or would you still recommend switching the positions of the status quo and alt1/alt2 to disentangle a possible order effect from the SQ constant?

I was also wondering whether it makes sense to check if the aversion against the status quo is a genuine effect or a distortion, i.e. whether the decisions of the respondents who never chose the status quo in any of the 9 choice tasks are independent of socio-demographic factors or of their impression of the complexity of the experiment.

The coefficient for the SQ constant b0 does not look small and is also highly significant. However, in other studies the constant is much higher. I was wondering whether it needs to be investigated further, or whether the value can be interpreted somehow?

b0 -0.9618

b_Beitr -0.4862

b_HoeheEEE300 1.4049

b_HoeheEEE600 1.0240

b_HoeheEEE900 0.4527

b_HoeheEEEunb 0.0000

b_ZeitEEE12 1.0609

b_ZeitEEE42 0.5951

b_ZeitEEE72 0.2733

b_ZeitEEEunb 0.0000

Thank you very much.

Best.

J.


- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am

Indeed, if your SQ ASC is negative, then I would not expect any significant ordering effect. If the interpretation of the constant is important for your study, then in your design you would preferably alternate the position of the SQ alternative between left and right across respondents (not within respondent). If the interpretation of the constant is not important, then there is no real need to randomise the position of the SQ alternative.

There can be several other reasons for a significant SQ constant. It could be that people somehow have an aversion against level 3 (unlimited) for the dummy coded attributes, which always appears in the SQ alternative. I do not know what the attributes mean, but sometimes there could be a "warm glow" effect, where for example people select socially desirable alternatives because the two hypothetical alternatives reflect attribute levels that are perceived as 'better', e.g. if they lead to reductions in emissions or fewer deaths or something like that. If neither of these interpretations makes sense, then you can indeed add covariates to the SQ alternative to identify whether certain types of people were more likely to select the SQ. Further, if you estimate a different model, e.g. a latent class model, the SQ constant may actually no longer be significant.

Michiel


- Michiel Bliemer
**Posts:**1387**Joined:**Tue Mar 31, 2009 4:13 pm

Thank you very much for pointing that out.

Given that the SQ constant is negative, wouldn't it make more sense to check whether certain types of people were more likely to never choose the status quo in any of the 9 choice situations, instead of adding covariates to the SQ alternative to identify whether certain types of people were more likely to select it?


- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am

Apologies for the late response, I just returned from the International Choice Modelling Conference in Iceland.

To me that is the same thing: investigating which people choose the SQ or investigating which people do not choose the SQ, since utility functions are relative. You can add covariates to the SQ alternative, or you can add covariates to the other alternatives; both will give you the same interpretation. But instead of estimating these models immediately, it would indeed be useful to do an aggregate analysis or to analyse correlations between covariates and choice, so that you know which covariates you need to account for in the choice model.
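The point that both specifications are equivalent follows from logit probabilities depending only on utility differences. A small numerical check (illustrative values): adding a covariate effect c to the SQ utility gives exactly the same choice probabilities as subtracting c from both other utilities.

```python
import numpy as np

def mnl_probs(v):
    """Multinomial logit choice probabilities from a utility vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

v = np.array([0.5, 0.4, -0.3])  # alt1, alt2, SQ (illustrative utilities)
c = 0.8                         # hypothetical covariate effect

# Covariate added to the SQ utility...
p_on_sq = mnl_probs(v + np.array([0.0, 0.0, c]))
# ...or subtracted from the other two utilities: identical probabilities,
# because the two utility vectors differ only by a common constant c.
p_on_others = mnl_probs(v - np.array([c, c, 0.0]))

print(np.allclose(p_on_sq, p_on_others))  # True
```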

Michiel


- Michiel Bliemer
**Posts:**1387**Joined:**Tue Mar 31, 2009 4:13 pm

Dear Michiel,

thank you very much for clarifying.

I have now conducted a pretest (N=42) with the code shown above and used the resulting priors to adjust my syntax for the main design like this:

_____

Design

;alts = alt1*, alt2*, SQ*

;rows = 18

;block = 2

;eff = (mnl,d)

;alg = mfederov

;require:

SQ.amount = 3, SQ.period = 3

;reject:

alt1.amount = 3 AND alt1.period = 3,

alt2.amount = 3 AND alt2.period = 3

;model:

U(alt1) = b1[-0.4862] * contrib[1.8,3.3,4.8](6,6,6)

+ b2.dummy[1.4049|1.0240|0.4527] * amount[0,1,2,3] ? 0 = 300, 1 = 600, 2 = 900, 3 = unlimited amount

+ b3.dummy[1.0609|0.5951|0.2733] * period[0,1,2,3] ? 0 = 12, 1 = 42, 2 = 72, 3 = unlimited period

/

U(alt2) = b1 * contrib

+ b2 * amount

+ b3 * period

/

U(SQ) = b0[-0.9618]

+ b1 * contrib_sq[1.6]

+ b2 * amount

+ b3 * period

$

____

The output now looks like this:

MNL efficiency measures

D error 0.493984

A error 0.893288

B estimate 51.166539

S estimate 43.015307

Prior b1 b2(d0) b2(d1) b2(d2) b3(d0) b3(d1) b3(d2)

Fixed prior value -0.4862 1.4049 1.024 0.4527 1.0609 0.5951 0.2733

Sp estimates 1.653408 2.487262 3.834589 18.563006 3.550729 10.411256 43.015307

Sp t-ratios 1.524285 1.242783 1.000914 0.454917 1.040153 0.607442 0.298844

MNL probabilities

Choice situation alt1 alt2 sq

1 0.451705 0.403521 0.144773

2 0.29659 0.634508 0.068902

3 0.376395 0.536163 0.087442

4 0.616607 0.293487 0.089907

5 0.28599 0.659023 0.054987

6 0.662606 0.259065 0.078329

7 0.3135 0.482557 0.203943

8 0.446107 0.257407 0.296486

9 0.640034 0.293807 0.066159

10 0.523909 0.395487 0.080605

11 0.34223 0.601956 0.055814

12 0.52643 0.411339 0.062231

13 0.188474 0.727753 0.083772

14 0.468465 0.460661 0.070874

15 0.207055 0.718658 0.074286

16 0.615444 0.266225 0.118331

17 0.334718 0.279748 0.385534

18 0.396442 0.407332 0.196226

_______

I now have some questions:

1. I suppose it is fine to put the prior for the constant b0 in the syntax, but that Ngene does not use it for anything, as it has no influence on the attribute levels?

2. Is there anything wrong with my design, or why is the D-error for my design with the priors from the pretest higher than the one for the initial design (which was D error 0.461262)?

3. I suppose the S estimate = 43.015307 means that my design needs to be shown to a minimum of 43 respondents, and since I have 2 blocks the minimum sample size is 86? But the computation does not take a specific power level into account?

4. If I want, for example, to look at heterogeneous preferences between men and women in my analysis, I should have at least 86 male and 86 female respondents, right?

5. I might also want to analyse my data with mixed MNL and HB mixed MNL models. Does that influence the minimum sample size?

Your advice is highly appreciated.

Thank you very much in advance.

Best,

J.


- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am

My answers:

1. You MUST use a prior for the constant since it influences the choice probabilities and hence the D-error. The fact that constants are not optimised on does not mean that they have no influence.

2. You cannot compare D-error values when priors are different, you can only compare D-errors across designs when the priors are identical.

3. Correct, you will need at minimum 2 * 43 = 86 respondents, assuming that the priors are very close to the true parameters (which is of course a big assumption). Ngene uses a 5% significance level for this calculation, so with these priors and with 86 respondents there is a 95% chance that all parameters are statistically significant in estimation.

4. You may be able to estimate gender effects with more or with fewer male/female respondents; there is no way to tell. Perhaps 10 males and 10 females is enough, but perhaps you need hundreds. Most researchers have a sample size of around 1,000 respondents, which is more than enough for estimating effects of different demographics.

5. Yes, for mixed logit models you need a much larger sample size, since you will also need to estimate the standard deviation of the random parameter distribution. But without reliable priors for both the mean and the standard deviation of that distribution, it is very difficult to predict the sample size you need. It is case specific: with a sample size below 100 you will typically struggle to estimate mixed logit models (one would usually have at least 500 respondents), although in some cases 100 respondents may suffice.
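The arithmetic behind answer 3 can be checked against the posted output. Ngene's per-parameter "Sp estimates" are (z / t)^2, where t is the per-respondent "Sp t-ratio" and z = 1.96 for a 5% significance level; the S estimate is the largest of these. A short sketch reproducing the numbers from the output above:

```python
# "Sp t-ratios" per single respondent, copied from the posted Ngene output.
t_ratios = {
    "b1": 1.524285, "b2(d0)": 1.242783, "b2(d1)": 1.000914,
    "b2(d2)": 0.454917, "b3(d0)": 1.040153, "b3(d1)": 0.607442,
    "b3(d2)": 0.298844,
}

Z = 1.96  # critical value for a 5% two-sided significance level

# Required respondents per parameter: (z / t)^2.
sp_estimates = {name: (Z / t) ** 2 for name, t in t_ratios.items()}

# The S estimate is driven by the hardest-to-estimate parameter,
# here b3(d2), matching "S estimate 43.015307" in the output.
s_estimate = max(sp_estimates.values())
print(round(s_estimate, 2))
```

This makes explicit why the S estimate is only a lower bound under the assumed priors: it is exactly the sample size at which the weakest parameter's t-ratio reaches 1.96.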

Michiel


- Michiel Bliemer
**Posts:**1387**Joined:**Tue Mar 31, 2009 4:13 pm

Dear Michiel,

thank you very much for your helpful advice.

If I am especially thinking about estimating MMNL on my main survey data, would you recommend three blocks instead of two? I've read that some authors recommend having as many blocks as possible, so I could at least use three instead of two?

Furthermore, if after the pretest I decide to use seven choice tasks instead of nine for my main survey (to not overwhelm the respondents), the syntax would be adjusted as follows, right? I am not perfectly sure about the (5,5,4) part, as 14 is no longer divisible by 3.

Design

;alts = alt1*, alt2*, SQ*

;rows = 14

;block = 2

;eff = (mnl,d)

;alg = mfederov

;require:

SQ.amount = 3, SQ.period = 3

;reject:

alt1.amount = 3 AND alt1.period = 3,

alt2.amount = 3 AND alt2.period = 3

;model:

U(alt1) = b1[-0.4862] * contrib[1.8,3.3,4.8](5,5,4)

+ b2.dummy[1.4049|1.0240|0.4527] * amount[0,1,2,3]

+ b3.dummy[1.0609|0.5951|0.2733] * period[0,1,2,3]

/

U(alt2) = b1 * contrib

+ b2 * amount

+ b3 * period

/

U(SQ) = b0[-0.9618]

+ b1 * contrib_sq[1.6]

+ b2 * amount

+ b3 * period

$

Thank you very much.

Best,

J.


- JvB
**Posts:**37**Joined:**Mon Mar 22, 2021 12:17 am
