by Michiel Bliemer » Sat Nov 04, 2023 9:24 am
D-errors are indeed only comparable for the same model specification and priors, i.e. the same model type (e.g. MNL), the same utility specification, the same coding scheme (e.g. dummy coding), and the same priors. Under those conditions you can, for example, compare efficient and orthogonal designs, or compare designs generated using different algorithms.
If you increase the number of rows, the D-error will always improve (i.e. decrease) since each additional row captures more information. So it is not really fair to compare a design with 20 rows to a design with 30 rows, because the latter should always have a smaller D-error. The number of blocks does not affect the D-error.
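For intuition, here is a rough Python sketch (not Ngene code; the cyclic two-alternative design and the dummy-coded priors are made up for illustration) of the MNL D-error, the determinant of the inverse Fisher information raised to the power 1/K, which keeps decreasing as rows are added:

import numpy as np

def mnl_d_error(tasks, beta):
    # tasks: list of (J alternatives x K parameters) attribute matrices
    K = len(beta)
    info = np.zeros((K, K))
    for Xs in tasks:
        v = Xs @ beta                                        # utilities under the priors
        p = np.exp(v) / np.exp(v).sum()                      # MNL choice probabilities
        info += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs    # Fisher information contribution
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)   # D-error = det(AVC)^(1/K)

def dummy(level):
    # dummy coding for a 4-level attribute, with level 0 as the base
    return np.array([level == 1, level == 2, level == 3], dtype=float)

beta = np.array([0.3, 0.2, 0.0])   # the dummy-coded priors from the example below
tasks = [np.vstack([dummy(s % 4), dummy((s + 1) % 4)]) for s in range(30)]
print(mnl_d_error(tasks[:20], beta))   # 20 rows
print(mnl_d_error(tasks[:30], beta))   # 30 rows -> smaller D-error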
Regarding your second example, suppose that you have the following:
b.dummy[0.3|0.2|0] * cat[1,2,3,0]
This would mean that level 1 is preferred over level 2, and that level 2 is preferred over level 3. So using a 0 prior in this case does indicate a preference order. Level 3 would, however, have the same preference as the base level 0.
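To make that order explicit (again just an illustrative Python sketch, not Ngene output):

fixed_priors = {1: 0.3, 2: 0.2, 3: 0.0, 0: 0.0}   # base level 0 is fixed at 0
order = sorted(fixed_priors, key=fixed_priors.get, reverse=True)
print(order)   # [1, 2, 3, 0] -> level 1 > level 2 > level 3 = base (tied at 0)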
If you were to use something like this:
b.dummy[(u,0.3,0.4)|(u,0.2,0.3)|(u,0.1,0.5)] * cat[1,2,3,0]
Then this would mean that level 1 is preferred over level 2, but level 3 could lie anywhere from above level 1 to below level 2. Note that when you ask Ngene to check for dominant alternatives in an unlabelled experiment, e.g. by using ;alts = alt1*, alt2*, it uses the mean values of the priors. In that case it would use 0.35 for level 1, 0.25 for level 2, and 0.3 for level 3, so it would place level 3 between level 1 and level 2 in the preference order.
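Just to make that arithmetic explicit (an illustrative Python sketch only, not how Ngene does this internally), the prior means put level 3 between levels 1 and 2:

uniform_priors = {1: (0.3, 0.4), 2: (0.2, 0.3), 3: (0.1, 0.5)}   # level: (lower, upper) of the uniform priors
means = {lvl: round((lo + hi) / 2, 2) for lvl, (lo, hi) in uniform_priors.items()}
means[0] = 0.0                                                   # base level is fixed at 0
order = sorted(means, key=means.get, reverse=True)
print(means)   # {1: 0.35, 2: 0.25, 3: 0.3, 0: 0.0}
print(order)   # [1, 3, 2, 0] -> level 1 > level 3 > level 2 > base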
Michiel