Critical Assessment of Fully Automated Structure Prediction


CAFASP4 MQAP


MQAP is a new category in CAFASP4: Model Quality Assessment Programs

To evaluate the capabilities of current programs to distinguish near-native models from decoys, CAFASP4 will extend its scope by introducing a new category: Model Quality Assessment Programs (MQAP). This evaluation is of interest both for the Homology Modeling targets, where it is important to select the best model among a set of many good, "correct" ones, and for the other targets, where the set may contain many incorrect models.

See the list of participating MQAPs.

An MQAP is a computer program that receives as input a 3D model in PDB format and produces as output a real number representing the quality of the model. (Note that an MQAP is NOT a program that assesses the quality of a model by comparing it to the native structure, as MaxSub or GDT do; an MQAP uses only a model as input and does NOT use the native structure.)
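For concreteness, the following is a minimal sketch, in Python, of the input/output contract just described: one PDB file in, one real number out. The scoring function is a deliberately trivial placeholder (it merely counts residues with a C-alpha atom); a real MQAP would of course apply a proper quality measure.

    #!/usr/bin/env python3
    # Minimal sketch of the MQAP interface: read one model in PDB format,
    # print a single real number. The score below is a trivial placeholder.
    import sys

    def score_model(pdb_path):
        # Placeholder "quality": the number of residues with a C-alpha atom.
        residues = set()
        with open(pdb_path) as handle:
            for line in handle:
                if line.startswith("ATOM") and line[12:16].strip() == "CA":
                    residues.add((line[21], line[22:27].strip()))  # chain, residue id
        return float(len(residues))

    if __name__ == "__main__":
        print(score_model(sys.argv[1]))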

To gain insight into the state of the art of MQAP programs, CAFASP4 will provide the infrastructure to assess their accuracy through an evaluation carried out using the predicted models produced by the CAFASP4 participating servers. This experiment is similar to previous efforts, such as Moult's ProStar, but here the evaluation will be carried out in a fully automated fashion, in parallel with CAFASP4, and using publicly available programs accessible to all.

This experiment will be carried out as follows:

  • 1. Participating MQAPs must be submitted to CAFASP as source or object code before CAFASP4 begins. A valid MQAP receives as input a file in PDB format and produces a single real number as output. Every participating MQAP will be made available for download from the CAFASP4 site. Only those MQAPs that can successfully be run locally at the CAFASP4 site will be considered; MQAP programs available only through web servers will not be considered.
  • 2. For each CAFASP4 target, the following procedure will be applied.
    • After the CAFASP4 metaserver collects all predictions from the CAFASP4 registered servers, all the predictions will be converted into standard PDB files.
    • The CAFASP4 metaserver will run each of the registered MQAPs locally, with all the predictions for the given target as input. The metaserver will record the output that each MQAP returns for each model and use it to rank-order the predictions according to each individual MQAP. The rank-ordered lists of all MQAPs will be published on the CAFASP4 site as soon as they are obtained.
    • The top 10 models of each MQAP will be considered the top 10 "meta-predictions" produced by that MQAP. Thus, each MQAP will be treated as an additional CAFASP meta-predictor (or "meta-selector").
  • 3. When the experimental structures become available, the MQAP "meta-predictors" will be evaluated alongside the rest of the CAFASP servers, using exactly the same procedures. In order to evaluate the specificity of an MQAP, its raw output will be converted into a Z-score using the distribution of raw outputs obtained by that MQAP over all the models of each target (a sketch of this rank-ordering and Z-score bookkeeping is given after this list).
  • 4. Two categories of MQAPs will be accepted: a) MQAPs accepting full-atom files, and b) MQAPs accepting incomplete models, e.g. C-alpha-only files or models with gaps. This distinction is important because MQAPs may be designed to evaluate only one type of model. Each participating MQAP will need to specify the category in which it should be placed. MQAPs of type a) will be evaluated only with those CAFASP models that are full-atom; MQAPs of type b) will be evaluated using all the collected CAFASP models as input.
  • 5. The CAFASP results will distinguish regular CAFASP servers from the MQAP meta-predictors, and, among the latter, a further distinction will be made between the two types of MQAPs described above.
  • 6. After the experimental structures become available, they will also be given as input to each MQAP, in order to see how well each MQAP scores the native structure vis-à-vis the predicted models.
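
The following sketch illustrates the rank-ordering, meta-prediction, and Z-score bookkeeping of items 2 and 3 above, assuming the raw outputs of one MQAP for one target have already been collected into a dictionary mapping model names to scores; all names and values are illustrative only and not part of the actual CAFASP4 infrastructure.

    from statistics import mean, stdev

    def rank_models(raw_scores, higher_is_better=True):
        # Rank-order the predictions of one target by one MQAP's raw output (item 2).
        return sorted(raw_scores, key=raw_scores.get, reverse=higher_is_better)

    def top10_meta_prediction(raw_scores):
        # The ten best-scored models form the MQAP's "meta-prediction" (item 2).
        return rank_models(raw_scores)[:10]

    def z_scores(raw_scores):
        # Convert raw outputs into Z-scores over all models of the target (item 3).
        mu, sigma = mean(raw_scores.values()), stdev(raw_scores.values())
        return {model: (score - mu) / sigma for model, score in raw_scores.items()}

    # Example with made-up scores for three models of a single target:
    scores = {"serverA_model1": 0.71, "serverB_model1": 0.64, "serverC_model1": 0.80}
    print(top10_meta_prediction(scores))
    print(z_scores(scores))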
It is important to point out that a number of ab initio methods select their models from a set of alternatives by also analyzing how the models cluster in 3D space. Unfortunately, the procedure delineated above only allows us to evaluate how well an MQAP assigns a number to an individual model. However, CASP participants wishing to test the success of such clustering procedures will be able to use the results of the evaluations of the various MQAPs to produce meta-meta-predictions and submit them to CASP. We may consider extending this experiment in the following CAFASP5 to include MQAP programs that receive as input a set of models rather than individual models, but for CAFASP4 we will only assess MQAPs in single-model mode.
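
Purely to illustrate the clustering idea mentioned above (which is not evaluated in CAFASP4), a crude consensus selector might simply pick the model that is closest, on average, to all the others. The sketch below assumes a matrix of pairwise model-model distances (e.g. RMSD) has already been computed by some external tool; it is not part of the CAFASP4 procedure.

    def consensus_pick(model_names, distance_matrix):
        # distance_matrix[i][j]: pairwise distance (e.g. RMSD) between models i and j.
        # Return the model with the lowest mean distance to all other models,
        # i.e. a crude approximation of the centre of the densest cluster.
        n = len(model_names)
        def mean_dist(i):
            return sum(distance_matrix[i][j] for j in range(n) if j != i) / (n - 1)
        return model_names[min(range(n), key=mean_dist)]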

MQAPs may perform any local computation (i.e., remote internet access to retrieve data is not allowed). This may include, but does not require, a preprocessing or refinement step to resolve any potential problems with the input. However, an MQAP should run efficiently, i.e., seconds (not hours) of CPU time per model.

Some MQAPs can return, in addition to the single number representing the model's quality, a per-residue confidence value (e.g. Verify-3D). As an option for such MQAPs, this experiment will also collect that information and make it available to all. However, at this point there will be no evaluation beyond that described above.
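
CAFASP4 does not prescribe a format for these optional per-residue values; purely as a hypothetical illustration, an MQAP might print the required global score on the first line and the per-residue confidences as comment lines:

    def report(global_score, per_residue):
        # per_residue: list of (chain, residue_number, confidence) tuples.
        print(global_score)                      # the required single real number
        for chain, resnum, conf in per_residue:  # optional per-residue profile
            print("# %s %4d %.3f" % (chain, resnum, conf))

    report(0.73, [("A", 1, 0.61), ("A", 2, 0.68), ("A", 3, 0.75)])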

To enter your MQAP in CAFASP, contact Daniel Fischer.