Dear Ngene Team,
Thank you for all your help and support. I really appreciate it.
I have another question, but it concerns a different, more general topic, so I am posting it here.
I have found some previous studies whose choice experiments included a holdout task, also called a fixed or control choice task. It seems that a holdout task can be used to test the predictive validity of a calibrated model.
I wonder if you have any recommendations on the design of the holdout task. For example,
1) How many holdout tasks are appropriate? (e.g., one vs. two?)
2) Can I select any attribute levels for a holdout task, or are there constraints on which levels to use?
3) Can a holdout task be placed anywhere in the series of choice tasks, or is there a more appropriate location (e.g., in the middle or at the very end)?
Thank you very much again. I would really appreciate any input you could give me.
Best regards,
yb