Precision-Recall curves present the precision at selected recall points, e.g., the precision when we require a recall of 0.1, the precision when we require a recall of 0.2, etc. (each required recall level can be reached by tuning the decision threshold). In this way we get a more informative picture of an algorithm's performance than a single F1 measure, which encodes the precision-recall trade-off at only one specific threshold.
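The threshold-sweep idea above can be sketched in plain Python. This is a minimal illustration, not a required implementation: it sorts instances by decision score (so that lowering the threshold means accepting one more instance), and records the best precision achievable at or above each required recall level. The function name and inputs are my own for illustration.

```python
def precision_at_recall_points(scores, labels, recall_points):
    """For each required recall level, sweep the decision threshold
    (by walking down the score-sorted list) and report the best
    precision achievable while meeting that recall."""
    # Sort instance indices by score, highest first; accepting the
    # top-k instances corresponds to one threshold setting.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    best = {r: 0.0 for r in recall_points}
    tp = 0
    for k, i in enumerate(order, start=1):
        tp += labels[i]          # labels are 1 (positive) / 0 (negative)
        precision = tp / k
        recall = tp / total_pos
        for r in recall_points:
            if recall >= r:
                best[r] = max(best[r], precision)
    return best
```

For example, with scores `[0.9, 0.8, 0.7, 0.6]` and gold labels `[1, 1, 0, 1]`, a recall of 1.0 is only reached by accepting all four instances, giving precision 0.75, while recall 0.5 is reached after the top two (both correct), giving precision 1.0.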

Here you can find an example of such curves for several examined methods. You should provide such a figure for each of the 20% and 100% runs, where each figure is composed of three curves: PMI-LIN, TFIDF-COS, and DICE-COVER.

Note: Excel has a simple tool for generating such figures from precision-recall pairs (you just have to provide the precision value at each of the recall points 0.1, 0.2, …, 1.0).
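If you prefer to script the figure rather than use Excel, a short matplotlib sketch along these lines produces the same kind of plot. The precision values below are placeholders only, not real results; substitute your measured precision at each recall point for the three methods.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; remove this line to display the plot
import matplotlib.pyplot as plt

recall = [i / 10 for i in range(1, 11)]
# Placeholder precision values -- replace with your own measurements
curves = {
    "PMI-LIN":    [0.90, 0.85, 0.80, 0.70, 0.60, 0.50, 0.45, 0.40, 0.35, 0.30],
    "TFIDF-COS":  [0.80, 0.75, 0.70, 0.65, 0.55, 0.50, 0.40, 0.35, 0.30, 0.25],
    "DICE-COVER": [0.85, 0.80, 0.70, 0.60, 0.55, 0.45, 0.40, 0.35, 0.30, 0.28],
}

fig, ax = plt.subplots()
for name, precision in curves.items():
    ax.plot(recall, precision, marker="o", label=name)
ax.set_xlabel("Recall")
ax.set_ylabel("Precision")
ax.legend()
fig.savefig("pr_curves.png")  # one such figure per run (20% and 100%)
```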