Many situations in science and technology involve a set of competing subjects whose performance is evaluated by a set of evaluators acting as judges. This paper presents a methodology by which the judges themselves can be evaluated, characterizing the relative standards they apply, their consistency, and how closely they approach a hypothetical ideal judge. The methodology is computationally simple, can be justified theoretically, and is easy to apply across a wide range of problem types. It can also be used predictively, to indicate how absent judges would most likely have evaluated the subjects had they been able to do so. Several empirical examples that fully describe the methodology are discussed, along with the results of applying it to a sizeable real-world problem. The methodology is applicable to a number of areas of the physical and social sciences and, as presented in this manuscript, can be extended to other diverse problems in mathematical sociology, computer engineering, and graph theory.
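The paper's actual methodology is not reproduced in this abstract, but the kinds of quantities it refers to can be illustrated. The following minimal Python sketch (all data, measure names, and formulas are hypothetical assumptions, not the authors' method) scores each judge in a small ratings matrix on leniency (mean rating), spread (standard deviation), and agreement with the panel consensus (Pearson correlation with the column-wise mean):

```python
# Hypothetical illustration only: simple per-judge summary statistics,
# not the methodology proposed in the paper.
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (dx * dy)

# Illustrative data: each judge's scores for the same four subjects.
ratings = {
    "J1": [7, 5, 9, 6],
    "J2": [9, 7, 10, 8],   # same pattern as J1, but systematically more lenient
    "J3": [5, 8, 6, 9],    # out of step with the other judges
}

# Panel consensus: mean score per subject across all judges.
consensus = [mean(col) for col in zip(*ratings.values())]

for judge, scores in ratings.items():
    print(judge,
          "leniency:", round(mean(scores), 2),
          "spread:", round(stdev(scores), 2),
          "agreement:", round(pearson(scores, consensus), 2))
```

A judge with high leniency but high agreement applies a stricter or looser standard than the panel while still ranking subjects consistently with it; low agreement flags a judge whose orderings diverge from the consensus.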
Quality & Quantity – Springer Journals
Published: Sep 30, 2004