Pretend priors, constant and D-error queries

Postby kateville » Thu Jun 21, 2012 8:27 pm

Dear Ngene team,

I'm running an unlabelled, 3-alternative (A, B and opt-out) efficient MNL design, without any priors for now, to use for my pilot study; after the pilot I plan to use the estimated priors to generate an MMNL panel design (syntax at the end). A few questions have arisen:

i) The design runs happily when all my priors are zero, but if I try to anticipate my pilot by inserting random (but tiny!) priors into the design, it comes back with “A random design could not be generated after 2000000 attempts. There were 0 row repetitions, 2601 alternative repetitions, and 1997399 cases of dominance”. Is this because the priors are nonsense?

ii) Is there a difference between (a) including a constant term in both my unlabelled alternatives and not defining the opt-out alternative at all, and (b) leaving the constant out of one of the unlabelled alternatives and instead defining the opt-out alternative with an alternative-specific constant?

iii) Can the magnitude of the D-error be compared between designs in a rough sense? I'm not sure whether or not to include an interaction term: can I use the change in D-error between the two designs as a guide to how much efficiency is lost by including the interaction?

Many thanks for your help,
Kate

Design
;alts = A*, B*, Opt-out
;rows = 16
;eff = (mnl,d)
;model:
U(A) = b1 + b2.effects[0|0|0]*Location[0,1,2,3] + b3*Salary[100000,120000,200000,300000] + b4*Time[1,2,3,5] + b5.effects[0|0|0]*Structure[0,1,2,3] + b6.effects[0|0|0]*Specialty[0,1,2,3] + i1*Structure.effects[3]*Specialty.effects[0]
/
U(B) = b2*Location + b3*Salary + b4*Time + b5*Structure + b6*Specialty + i1*Structure[3]*Specialty[0]
/
U(Opt-out) = c1
$

Re: Pretend priors, constant and D-error queries

Postby Andrew Collins » Tue Jul 24, 2012 3:45 am

Dear Kate

i) What is happening here is that, by specifying priors, you are informing Ngene whether more of the associated attribute is better or worse. Because you have specified * against two of the alternatives, Ngene is also checking for dominated alternatives, and you are finding that a design cannot be generated because there is too much dominance. So long as you want to keep checking for this, you may need to experiment with the design dimensions to overcome the dominance.
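As a hedged illustration, here is a cut-down sketch in Ngene syntax loosely modelled on the design above; the attributes, row count and tiny priors are placeholders only, not a recommended specification. Removing the asterisks from the ;alts line switches off the dominance check, so this version should generate even with non-zero priors; keeping A* and B* instead retains the check, in which case the attribute levels and priors may need adjusting until undominated choice tasks can be found.

Design
;alts = A, B, Opt-out
;rows = 16
;eff = (mnl,d)
;model:
U(A) = b1 + b2[0.000005]*Salary[100000,120000,200000,300000] + b3[-0.1]*Time[1,2,3,5]
/
U(B) = b2*Salary + b3*Time
/
U(Opt-out) = c1
$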

ii) I don't believe there should be any difference.
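To make the two specifications in ii) concrete, below are two hedged sketches in Ngene syntax; the single attribute X, its levels, the row count and the parameter names are placeholders rather than anything taken from the design above. The first puts a constant in each unlabelled alternative and leaves the opt-out utility undefined (treated as zero, as described in the question); the second drops the constant from one unlabelled alternative and gives the opt-out an alternative-specific constant instead. In both cases two constants are estimated relative to a base alternative, so the utility differences, and with them the efficiency of the design, should be the same.

Design
;alts = A, B, Opt-out
;rows = 12
;eff = (mnl,d)
;model:
U(A) = a1 + b1*X[0,1,2]
/
U(B) = a2 + b1*X
$

Design
;alts = A, B, Opt-out
;rows = 12
;eff = (mnl,d)
;model:
U(A) = a1 + b1*X[0,1,2]
/
U(B) = b1*X
/
U(Opt-out) = c1
$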

iii) You should not, in general, compare the D-error between designs with different specifications. However, very large D-errors (say, 2 and above) are typically associated with bad designs (likely involving extreme choice probabilities), so if introducing something like an interaction takes you from a very large D-error to one that is much smaller, then the comparison is useful in a broad sense.
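For reference, the D-error being discussed here is, in its standard formulation for an MNL design,

\[
D\text{-error} = \left[\det \Omega_1(X,\beta)\right]^{1/K},
\qquad
\Omega_1(X,\beta) = I_1(X,\beta)^{-1},
\]

where \(I_1\) is the Fisher information matrix of the MNL model for a single respondent, \(X\) is the design, \(\beta\) are the priors and \(K\) is the number of parameters (see the Ngene manual for the exact expressions it evaluates). The exponent \(1/K\) normalises for the number of parameters, but adding an interaction changes both \(K\) and the dimension of \(\Omega_1\), so the two D-errors summarise different variance-covariance matrices; that is why they are only comparable in the broad sense described above.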

Andrew

