by Michiel Bliemer » Fri Feb 11, 2022 8:45 am
A library of designs means that you create all choice tasks a priori, so you can check that all choice tasks make sense in advance. This gives the analyst full control. A library of designs will work in most survey instruments.
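To make the idea concrete, here is a minimal illustrative sketch (not Ngene syntax; all attribute names and level values are hypothetical) of how a library of designs might be wired into a survey: each pre-defined respondent category maps to a fixed, pre-checked set of choice tasks, so every task shown was inspected in advance.

```python
# Hypothetical "library of designs": one pre-generated, pre-checked set of
# choice tasks per respondent category. Levels below are purely illustrative.
DESIGN_LIBRARY = {
    "short":  [{"distance_a": 2,  "distance_b": 4},
               {"distance_a": 4,  "distance_b": 2}],
    "medium": [{"distance_a": 5,  "distance_b": 10},
               {"distance_a": 10, "distance_b": 5}],
    "long":   [{"distance_a": 20, "distance_b": 40},
               {"distance_a": 40, "distance_b": 20}],
}

def tasks_for(category):
    """Return the pre-generated choice tasks for a respondent category."""
    return DESIGN_LIBRARY[category]

# A respondent classified as "medium" sees tasks that were fixed a priori:
print(tasks_for("medium"))
```

Because every task exists before fieldwork starts, the analyst can review the full library for dominated or nonsensical tasks, which is the "full control" advantage mentioned above.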
A pivot design relies on a value provided by the respondent earlier in the survey, e.g. a travel distance of, say, 5km; the respondent is then given choice tasks in which travel distance takes levels around 5km. This means that every respondent faces different choice tasks. It also means that very extreme input, e.g. 0km or 1000km, can create issues, so you may need logic to check for unrealistic input; alternatively, you can avoid this by not asking the respondent for a numerical value and instead letting them choose a pre-defined category. Not all survey instruments allow programming pivots, as it requires specific logic that creates choice tasks on the fly when applying absolute or relative pivots.
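The on-the-fly logic described above can be sketched as follows. This is an illustrative Python sketch, not survey-instrument code; the pivot factors and the plausibility bounds are hypothetical and would need to be chosen for the actual attribute.

```python
# Hypothetical relative pivot: build attribute levels around a
# respondent-provided reference value, with a sanity check so that
# extreme input (e.g. 0km or 1000km) does not produce nonsense tasks.
def pivot_levels(reference_km, factors=(0.8, 1.0, 1.2),
                 min_km=0.5, max_km=200.0):
    """Return attribute levels pivoted (relatively) around reference_km.

    Raises ValueError for unrealistic input, so the survey can fall back
    to asking the respondent for a pre-defined category instead.
    """
    if not (min_km <= reference_km <= max_km):
        raise ValueError("unrealistic reference distance; use a category")
    return [round(reference_km * f, 1) for f in factors]

print(pivot_levels(5))  # [4.0, 5.0, 6.0]
```

An absolute pivot would work the same way, adding fixed offsets (e.g. -1km, 0km, +1km) instead of multiplying by factors.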
Pivot designs are great but harder to implement, which is why I generally recommend a library of designs: it is simply easier.
Regarding data analysis, you would generally pool the data and estimate a single model for both groups. This is the same as with revealed preference data, where everyone has different levels, e.g. different people have different travel distances, and you would not estimate a separate model for each individual. You COULD estimate different models for different distance classes, but estimating separate models for two groups means estimating double the number of parameters, while perhaps some parameters are not statistically different across the two groups. By estimating a single joint model you can test and choose which parameters are generic across groups and which differ across groups (this can be achieved by including interaction terms).
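The interaction-term idea can be illustrated with a toy utility specification. This is a hedged sketch, not estimation code: the coefficient values and the "long-distance group" dummy are hypothetical, and in practice the coefficients would be estimated (e.g. in a logit model) and the interaction tested for significance.

```python
# Illustrative pooled specification with a group interaction on time:
# the cost coefficient is generic across groups, while the time coefficient
# is allowed to differ for the long-distance group. Values are hypothetical.
def utility(cost, time, is_long_group,
            b_cost=-0.10, b_time=-0.05, b_time_long=-0.02):
    """Systematic utility with a group-specific interaction on time.

    For the long-distance group the effective time coefficient is
    b_time + b_time_long; if b_time_long is not statistically different
    from zero, time can be treated as generic across groups.
    """
    return b_cost * cost + b_time * time + b_time_long * time * is_long_group

# Same attribute levels, different groups:
print(round(utility(cost=10, time=20, is_long_group=0), 2))  # -2.0
print(round(utility(cost=10, time=20, is_long_group=1), 2))  # -2.4
```

Estimating this single joint model uses three parameters instead of the four needed for two fully separate models, and dropping the interaction when it is insignificant recovers the fully generic specification.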