
Pilot study design - small sample

Posted: Thu Jan 23, 2020 3:15 am
by RoryC
Hi there,

I would appreciate some guidance on choice of design…
Experiment has 7 attributes with between 2 and 4 levels
Limited sample available: <4 to pilot, and sample size ~100 respondents (thinking of blocking into 3, with respondents seeing 10-12 choice tasks - though in the pre-pilot work I've done, I have had push-back on the 12-card design - the content is a little complex)
I have no information on priors, though can reasonably assume a positive sign for all parameters (with the exception of a single level in one attribute, which I assume will be negative)

Questions:
1. I am unsure of the best approach for an initial design, given very limited info on priors: should I start with an orthogonal design and then move to a d-efficient design after some data collection, and if so, at what point (after 10, 20 respondents)?

2. Reading previous posts on the forum, there seems to be some leaning towards starting out with a d-efficient design using ‘small’ priors. What is small!?

3. If using Bayesian priors post pilot - are effects coding and Bayesian priors compatible (I read a post in the forum that suggested maybe they were not)?

4. I am unsure about the requirement for an ASC in the design (mine is unlabelled). In several places on the forum it is advised not to include one for unlabelled experiments, yet in the manual (see for example 7.2.2) the constant (b1) is included. What would be the recommendation here (is it plain wrong to use one, or just not deemed necessary?), and is this the case for both orthogonal and d-efficient designs? Does not including an ASC assume no right-left selection preference? (Sorry, that was about 4 questions!)

Thank you so much in advance!
Rory

Re: Pilot study design - small sample

Posted: Thu Jan 23, 2020 8:22 am
by Michiel Bliemer
1. An orthogonal design is possible, but an efficient design with zero priors offers more flexibility.

2. If you know the sign of the priors, you can use "essentially zero" priors that indicate the sign, e.g. -0.00001 and 0.00001. The sign will be used to remove dominant alternatives (via the ;alts = alt1*, alt2* command in Ngene). While you are collecting the data, you could use the data collected so far to estimate models, and once some parameter estimates become more reliable you could replace the design with a new design based on Bayesian priors.
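
To make that concrete, a minimal sketch of what such a syntax could look like (the attribute names A, B, C, their levels, and the 36-row / 3-block dimensions are purely illustrative placeholders, and only three of the seven attributes are shown):

Design
;alts = alt1*, alt2*
;rows = 36
;block = 3
;eff = (mnl, d)
;model:
U(alt1) = b1[0.00001]*A[0,1] + b2[0.00001]*B[0,1,2] + b3[-0.00001]*C[0,1,2,3] /
U(alt2) = b1*A + b2*B + b3*C
$

The * after each alternative name is what switches on the dominance check mentioned above, and the signs of the near-zero priors are what it uses to identify dominated choice tasks.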

3. Yes, they are compatible; for example, in Ngene you would use something like b.effects[(n,0.1,0.05),(n,-0.3,0.1)] * X[1,2,3].
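
Purely as an illustrative sketch of where such Bayesian priors would sit in a full syntax (reusing the example priors above; the second attribute, the draw settings and the design dimensions are made-up placeholders):

Design
;alts = alt1*, alt2*
;rows = 36
;block = 3
;eff = (mnl, d, mean)
;bdraws = halton(200)
;model:
U(alt1) = b.effects[(n,0.1,0.05),(n,-0.3,0.1)]*X[1,2,3] + c[(n,0.2,0.1)]*Y[0,1] /
U(alt2) = b*X + c*Y
$

Here ;eff = (mnl, d, mean) asks for the mean D-error over the prior draws (i.e. a Bayesian efficient design) and ;bdraws sets the number and type of draws used to evaluate it.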

4. You should not include constants when creating a design for an unlabelled experiment as the constant has no meaning (it refers to the value attached to a label). The example in 7.2.2 is a labelled experiment since the two alternatives have different attributes (B and C) and different parameters (b3 and b4) and in a labelled experiment constants should be added for J-1 alternatives. While you do not add constants in the design phase, you typically DO add constants when you estimate models to make sure that you correct for left-to-right bias. After model estimation, you ignore these constants.
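
As a purely illustrative contrast (generic notation, not taken from the manual): in the design syntax for an unlabelled experiment the utilities would be specified without any constant, e.g. U(alt1) = b1*A + b2*B and U(alt2) = b1*A + b2*B, whereas at estimation time you would add a constant to one of the alternatives, e.g. V(left) = asc_left + b1*A + b2*B versus V(right) = b1*A + b2*B, and then simply ignore asc_left when interpreting or applying the results.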

Michiel

Re: Pilot study design - small sample

Posted: Mon Jan 27, 2020 10:22 pm
by RoryC
Thank you Michiel.

One further quick question - is there a rule of thumb for minimum number of respondents viewing each block?

In initial pre-pilot work I am getting quite a bit of push back on length. The choice task is fairly complex & ideally I'd like to cut down the number of choice tasks...

To get level balance I need a number of rows that is a multiple of 12, so I had been going for 36, blocked into 3. With 90-100 respondents, that gives blocks of over 30 respondents, which I was comfortable with. What would be your recommendation on reducing the number of choice tasks - by adding a fourth block?

Thanks again!
Rory

Re: Pilot study design - small sample

Posted: Tue Jan 28, 2020 8:40 am
by Michiel Bliemer
Hi Rory,

There are no rules of thumb for the number of respondents per block. I refer to Rose and Bliemer (2013) for a discussion of sample size in choice experiments, which also reviews a few rules of thumb for sample size calculations.

Rose, J.M. and M.C.J. Bliemer (2013) Sample size requirements for stated choice experiments. Transportation, Vol. 40, No. 5, pp. 1021-1041.

For complex choice tasks, 12 is indeed quite a large number. Most people use 4 to 8 choice tasks for a complex experiment, and more than 10 for a simple experiment. You could indeed choose 4 blocks of 9 choice tasks, or 6 blocks of 6 choice tasks (though perhaps that does not give you enough data).
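
For example, keeping the 36 rows needed for level balance as in the earlier post, only the blocking command in the syntax would change, e.g. from ;block = 3 to ;block = 4, giving 9 choice tasks per respondent and, with 90-100 respondents, roughly 22-25 respondents per block.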

Michiel