Relevance is the degree to which a query response satisfies a user who is searching for information.
There are two terms worth highlighting here.
Precision: the percentage of returned documents that are relevant.
Recall: the percentage of relevant documents in the whole system that were returned.
For example, suppose the system contains 100 documents. For a given query, 30 of them are relevant and the remaining 70 are irrelevant.
Say a search algorithm returns only 10 records in total, and 8 out of the 10 are relevant. In this case, let's do the calculation:
Precision = 8 / 10 = 80%
Recall = 8 / 30 ≈ 27%
Now assume a lazy software engineer simply returns everything, all 100 documents. In that case, the recall is 30 / 30 = 100% because every relevant document has been returned, but the precision is 30 / 100 = 30%, which is really low. 😦
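The two calculations above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the document IDs are hypothetical, chosen only to reproduce the 8-relevant-out-of-10 scenario from the example:

```python
def precision_recall(returned, relevant):
    """Compute precision and recall given sets of document IDs."""
    returned = set(returned)
    relevant = set(relevant)
    true_positives = len(returned & relevant)
    precision = true_positives / len(returned) if returned else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Worked example from the text: 100 documents, IDs 0..99, of which
# IDs 0..29 are relevant. The algorithm returns 10 documents:
# 8 relevant (0..7) and 2 irrelevant (90, 91).
relevant_docs = set(range(30))
returned_docs = set(range(8)) | {90, 91}

p, r = precision_recall(returned_docs, relevant_docs)
print(f"precision = {p:.0%}, recall = {r:.1%}")  # precision = 80%, recall = 26.7%

# The lazy engineer: return all 100 documents.
p, r = precision_recall(range(100), relevant_docs)
print(f"precision = {p:.0%}, recall = {r:.0%}")  # precision = 30%, recall = 100%
```

Returning everything drives recall to 100% while precision collapses to the base rate of relevant documents, which is exactly the trade-off the example illustrates.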
“Once the application is up and running, you can employ a series of testing methodologies, such as focus groups, in-house testing, TREC tests and A/B testing to fine tune the configuration of the application to best meet the needs of its users.”