1. You can only compare efficiency measures (D-error, A-error, S-estimate, etc.) across designs when using the same priors, so from your numbers I could not say which design is better. But I am generally very careful about taking priors from the literature, especially if they come from different literatures, because different countries have different cultures and currencies, and different models have different scale parameters (error variance), so you cannot directly use parameter values from another study. WTP and parameter ratios are often more transferable, but you will still need to guess the scale parameter. See also:
https://www.sciencedirect.com/science/article/abs/pii/S1755534515300877
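To make it concrete why efficiency measures are only comparable under the same priors, here is a minimal sketch (illustrative Python, not Ngene output) of how the D-error of an MNL design is evaluated at a set of assumed priors; the two-alternative design and prior values are hypothetical. Rescaling the priors (i.e. a different scale parameter) changes the D-error even though the design itself is unchanged.

```python
import numpy as np

def mnl_d_error(design, priors):
    """D-error of a design for an MNL model, evaluated at the given priors.

    design: array of shape (S, J, K) -- S choice tasks, J alternatives,
            K attribute columns (coded levels).
    priors: array of shape (K,)      -- assumed parameter values.
    """
    S, J, K = design.shape
    info = np.zeros((K, K))
    for X in design:                              # X is J x K for one task
        v = X @ priors                            # systematic utilities
        p = np.exp(v - v.max())
        p /= p.sum()                              # MNL choice probabilities
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    # D-error = det(single-respondent AVC matrix)^(1/K)
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

# Hypothetical 2-alternative, 2-attribute design with 4 choice tasks.
design = np.array([
    [[1, 10], [0, 20]],
    [[0, 10], [1, 30]],
    [[1, 20], [0, 30]],
    [[0, 20], [1, 10]],
], dtype=float)

print(mnl_d_error(design, np.array([0.8, -0.05])))
print(mnl_d_error(design, np.array([0.4, -0.025])))  # same ratios, half the scale
```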
2. B-estimate = balance. A choice task is 100% balanced if each alternative is chosen with equal probability, and 0% balanced if there is a dominant alternative. Neither 0% nor 100% gives you much information. The B-estimate is the average balance across all choice tasks. Note that using zero priors means equal choice probabilities and therefore trivially 100% balance; you should ignore the B-estimate when you use zero priors, as balance cannot be meaningfully assessed.
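As an illustration, here is a rough sketch of a utility-balance calculation for a single choice task, using a Huber and Zwerina style measure; Ngene's B-estimate may be computed somewhat differently, and the priors and attribute levels below are hypothetical.

```python
import numpy as np

def task_balance(X, priors):
    """Utility balance (0-100%) of one choice task with J x K attribute matrix X."""
    v = X @ priors
    p = np.exp(v - v.max())
    p /= p.sum()                      # MNL choice probabilities
    J = len(p)
    # 100% when all p_j = 1/J, approaching 0% when one alternative dominates.
    return 100.0 * np.prod(p) / (1.0 / J) ** J

priors = np.array([0.8, -0.05])
print(task_balance(np.array([[1.0, 10.0], [0.0, 20.0]]), priors))  # fairly balanced
print(task_balance(np.array([[1.0, 10.0], [0.0, 60.0]]), priors))  # near-dominant alternative
# With zero priors every task is trivially 100% balanced, so the figure is meaningless.
print(task_balance(np.array([[1.0, 10.0], [0.0, 20.0]]), np.zeros(2)))
```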
3. (Minimum) sample size estimates only exist when you have reliable priors, for example from a pilot study, and should otherwise be ignored. There is no other way to reliably determine the minimum required sample size. Usually the sample size is based on your available budget: can you afford 100? 500? 1000? I generally try to find the budget to collect data from at least 1000 respondents. Once you have done a pilot study and obtained more reliable priors, you may have better sample size estimates.
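For what it is worth, the sample size (S) estimate is typically derived from the single-respondent standard errors implied by the design, roughly along the lines of the sketch below; the priors and standard errors here are hypothetical, and the resulting numbers are only as trustworthy as the priors behind them.

```python
import numpy as np

def s_estimate(priors, se_one_respondent, t_crit=1.96):
    """Sample size needed for each parameter to reach a given t-ratio.

    se_one_respondent: asymptotic standard errors from the design's AVC
    matrix for a single respondent.  With N respondents the standard error
    shrinks by sqrt(N), so parameter k becomes significant once
    N >= (t_crit * se_k / beta_k)^2; the design-level S estimate is the
    largest of these values.
    """
    n = (t_crit * np.asarray(se_one_respondent) / np.asarray(priors)) ** 2
    return n, n.max()

per_param, overall = s_estimate(priors=[0.8, -0.05], se_one_respondent=[2.0, 0.15])
print(per_param)   # respondents needed per parameter
print(overall)     # driven by the hardest-to-estimate parameter
```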
4. Optimising a design for a panel mixed logit model is almost impossible given the huge amount of computation time required (often weeks or months). It is fine to optimise your design for estimating an MNL model; this design can also be used to estimate a panel mixed logit model. See also:
https://www.sciencedirect.com/science/article/abs/pii/S0191261509001398
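To see where the computation time goes, here is a rough sketch of the simulated panel mixed logit probability for one respondent (hypothetical design, normally distributed tastes, and draw count); during design optimisation something like this, plus the derivatives behind the AVC matrix, has to be re-evaluated with many draws for every candidate design, which is what makes it so slow.

```python
import numpy as np

rng = np.random.default_rng(0)

def panel_ml_logprob_one_person(design, chosen, mean, sd, n_draws=500):
    """Simulated log-probability of one respondent's sequence of choices.

    design: (S, J, K) choice tasks; chosen: length-S indices of the chosen
    alternatives; tastes beta ~ Normal(mean, sd) are drawn once per respondent.
    """
    S, J, K = design.shape
    betas = rng.normal(mean, sd, size=(n_draws, K))   # R taste draws
    probs = np.zeros(n_draws)
    for r, b in enumerate(betas):
        p_seq = 1.0
        for s in range(S):
            v = design[s] @ b
            p = np.exp(v - v.max())
            p /= p.sum()
            p_seq *= p[chosen[s]]                     # product over the panel
        probs[r] = p_seq
    return np.log(probs.mean())                       # average over draws

design = np.array([
    [[1, 10], [0, 20]],
    [[0, 10], [1, 30]],
], dtype=float)
print(panel_ml_logprob_one_person(design, chosen=[0, 1],
                                  mean=[0.8, -0.05], sd=[0.4, 0.02]))
```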
5. No, blocking is just a way to assign different choice tasks to different respondents, and typically you do not need to account for blocking in model estimation. The only design artefact that you typically account for in model estimation is left-to-right bias, since alternatives shown on the left (or at the top) of a survey are chosen more often. If you have an unlabelled experiment, you simply add constants to the utility functions. If you have a labelled experiment, you will need to randomise (across respondents) the order in which the alternatives are shown in the survey, and add to the utility functions a variable indicating the position in which each alternative appeared in the choice task.
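As a sketch of what accounting for left-to-right bias can look like in a labelled experiment, the hypothetical specification below adds each alternative's (randomised) display position to its utility; in an unlabelled experiment the display positions coincide with the alternatives themselves, so this reduces to the constants mentioned above.

```python
import numpy as np

def utilities_with_position_effect(X, positions, beta, delta):
    """Systematic utilities for one choice task including a reading-order effect.

    X:         (J, K) attribute matrix
    positions: position in which each alternative was shown to this
               respondent (0 = left-most / top), randomised across respondents
    beta:      attribute coefficients
    delta:     coefficient capturing the tendency to choose alternatives shown first
    """
    return (np.asarray(X, float) @ np.asarray(beta, float)
            + delta * np.asarray(positions, float))

# Hypothetical labelled task shown to a respondent who saw the second alternative first.
V = utilities_with_position_effect(
    X=[[1, 10], [0, 20], [0, 30]],
    positions=[1, 0, 2],
    beta=[0.8, -0.05],
    delta=-0.2,
)
print(V)
```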
Michiel