During legal due diligence, lawyers identify a large number of passages in a company's contracts that may pose a risk in a transaction. In this article, we present a study of 9 lawyers performing a simulated review of 50 contracts for five topics. We find that the lawyers agree on the general location of relevant material at a higher rate than assessors in other contract-review studies, but they do not fully agree on the extent of the relevant material. In addition, we find little difference between lawyers with different levels of experience. When we train machine learning models to identify these topics from each individual lawyer's judgments, the resulting models agree with each other to a similar degree as the lawyers who trained them. This indicates that the models learn the behaviour of their trainers, even if they do so imperfectly. We therefore argue that further work on the review process is needed to ensure that all parties agree on the material that has been identified.

This work was funded under the European Union's Horizon 2020 research and innovation programme under grant agreement 644753 (KConnect), and by the Austrian Science Fund (FWF) projects P25905-N23 (ADMIRE) and I1094-N23 (MUCKE).

CLEF 2016: Experimental IR Meets Multilinguality, Multimodality, and Interaction, pp 40-53.
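As a minimal sketch of how pairwise agreement between annotators can be quantified, the snippet below computes mean Jaccard overlap between hypothetical sets of passage IDs flagged by three lawyers. The annotator names, passage IDs, and the choice of Jaccard overlap are illustrative assumptions, not the study's actual data or metric.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two annotators' sets of flagged passages."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical annotations: passage IDs flagged by three lawyers
annotations = {
    "lawyer_1": {1, 2, 5, 7},
    "lawyer_2": {1, 2, 5, 8},
    "lawyer_3": {2, 5, 7},
}

# Mean pairwise agreement over all annotator pairs
pairs = list(combinations(annotations, 2))
scores = [jaccard(annotations[x], annotations[y]) for x, y in pairs]
mean_agreement = sum(scores) / len(scores)
```

The same computation can be run once on the human judgments and once on the corresponding model predictions, allowing the two agreement levels to be compared directly, as the study does.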