Early voluntary turnover, integrity testing and faking

Dr. Saul Fine, Midot

The following three papers were recently presented at the International Congress of Applied Psychology (ICAP) in Paris, France, July 8-12, 2014.


Assessing early voluntary turnover potential among job applicants

Saul Fine


Voluntary employee turnover can be responsible for significant financial losses and is a concern for organizations worldwide. When voluntary turnover occurs early in the employment cycle, such as within the first 3-6 months, the incurred losses can be even greater, as employees' productivity has generally not yet offset their initial recruitment and training costs. Although prior studies have identified several factors for predicting voluntary turnover among current employees, fewer studies have examined these factors among job applicants. The present study set out to assess turnover potential among job applicants, before they are hired.

Two independent samples of job candidates for sales positions from Mexico (N = 225) and Ukraine (N = 163) were administered a newly developed 72-item self-report questionnaire designed to measure: personal dispositions (conscientiousness and past behaviors) and turnover cognitions (desire for the job and intention to stay). The applicants were then monitored for their incidence of turnover during their first six months on the job.

The Mexican and Ukrainian samples had voluntary turnover rates of 52% and 19% after 6 months, respectively. The results indicated that personal dispositions were less predictive than turnover cognitions, and that the overall composite score was similarly valid for both samples (r = -.20), with low scorers 2-4 times more likely to leave voluntarily within this period than higher scorers. Finally, as evidence of the measure's specificity to voluntary turnover, no validity was found for involuntary dismissals, as expected.

This study contributes evidence towards a better understanding of some of the possibly preventable antecedents of early voluntary turnover. Despite cultural and contextual differences between the two samples, similar predictive patterns emerged, with no indications of adverse impact. Future research should continue to examine and refine these methods for operational use in personnel selection.



A multi-method approach to predicting CWB – the best of all worlds?

Gabriela Pecker and Saul Fine


Counterproductive work behaviors (CWB) are a major concern for organizations around the world, and integrity testing has become a widely adopted method in personnel selection for reducing CWB. However, integrity tests are typically classified as either overt or personality-based tests, each with its own advantages and disadvantages, and the two approaches are seldom combined. In addition, other testing methods, such as situational judgment tests (SJT) and biodata inventories, have been less widely adopted for measuring integrity. The purpose of this study was to investigate the benefits of a multi-method approach to predicting CWB, whereby it was hypothesized that, in aggregate, the measures from each approach would form a more reliable and valid composite.

Data were collected from 177 working students using four subtests, each with a different item type for measuring integrity: overt, personality, SJT, and biodata. CWB was measured using both self-reported admissions of past CWB and external ratings from participants' employers and colleagues.

All subtests were independently reliable (reliabilities > .71), showed good construct validity in terms of their factor structure and their convergent/discriminant validity with related measures (i.e., conscientiousness, neuroticism, agreeableness, and GMA), and exhibited no adverse impact. In addition, all four measures were valid predictors of both self-reported and other-reported CWB (.34-.51 and .17-.28, respectively). Finally, a weighted composite was incrementally valid for predicting both criteria (.59 and .31). Accordingly, low-scoring participants (<-1SD) were at least three times more likely to have been involved in CWB than other participants.
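The abstract does not specify how the weighted composite was formed. As an illustrative sketch only, one common approach is to standardize (z-score) each subtest and sum the standardized scores with chosen weights; the subtest names, raw scores, and weights below are hypothetical, not values from the study.

```python
import statistics

def zscore(values):
    """Standardize a list of raw scores to mean 0, SD 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def weighted_composite(subtest_scores, weights):
    """Combine standardized subtest scores into one weighted composite.

    subtest_scores: dict mapping subtest name -> list of raw scores
                    (one score per test-taker, same order in every list)
    weights:        dict mapping subtest name -> weight
    Returns a list with one composite score per test-taker.
    """
    standardized = {name: zscore(scores) for name, scores in subtest_scores.items()}
    n = len(next(iter(subtest_scores.values())))
    return [
        sum(weights[name] * standardized[name][i] for name in subtest_scores)
        for i in range(n)
    ]

# Hypothetical raw scores for five test-takers on the four subtest types.
scores = {
    "overt":       [22, 30, 25, 28, 35],
    "personality": [40, 45, 38, 50, 47],
    "sjt":         [12, 15, 10, 14, 16],
    "biodata":     [8, 11, 7, 12, 10],
}
weights = {"overt": 0.3, "personality": 0.3, "sjt": 0.2, "biodata": 0.2}
composite = weighted_composite(scores, weights)
```

Because each standardized subtest has mean zero, the composite is centered near zero as well, so a cut such as the -1 SD threshold mentioned above can be applied directly to the composite distribution.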

The results support the use of a multi-method approach to integrity testing for predicting CWB, one that capitalizes on the advantages of the available techniques while compensating for their respective disadvantages. The theoretical and practical implications of these findings will be discussed.


Detecting faking using within-subject reaction time latencies

Saul Fine, Merav Pirak, and Gabriela Pecker


The issue of faking in non-cognitive testing remains a major concern for personnel selection, as the effectiveness of typical faking-detection methods has been met with criticism. In response, it has been suggested that item response time (RT) latencies may serve as an alternative indicator of faking, since faked responses generally require longer mental processing time than honest ones. However, because traditional RT measures use normative response times to identify fakers, they do not directly consider individual-level responses, and may therefore be less accurate for individual hiring decisions. The present study examined the validity of a new within-subject RT measure that computes a difference score index (DSI) between target and baseline RTs within the same test, as a function of the standard error of the difference.
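The exact DSI formula is not given in the abstract. The description (a target-versus-baseline difference scaled by the standard error of the difference) resembles a reliable-change-style index, and the sketch below assumes that form; the RT values and the use of per-scale standard errors are illustrative assumptions, not the study's actual computation.

```python
import math
import statistics

def dsi(target_rts, baseline_rts):
    """Within-subject difference score index (DSI): the gap between a
    respondent's mean target-item RT and mean baseline (control-scale) RT,
    scaled by the standard error of that difference.

    A large positive DSI means target items took markedly longer than the
    respondent's own baseline responses, which is treated here as a
    potential faking signal.
    """
    mean_diff = statistics.mean(target_rts) - statistics.mean(baseline_rts)
    se_target = statistics.stdev(target_rts) / math.sqrt(len(target_rts))
    se_baseline = statistics.stdev(baseline_rts) / math.sqrt(len(baseline_rts))
    se_diff = math.sqrt(se_target ** 2 + se_baseline ** 2)
    return mean_diff / se_diff

# Hypothetical per-item RTs (in seconds) for one respondent.
baseline = [2.0, 2.2, 2.1, 2.3, 1.9]
honest_like = dsi([2.1, 2.3, 2.0, 2.2, 2.4], baseline)  # target RTs near baseline
slowed = dsi([3.4, 3.8, 3.1, 3.6, 3.9], baseline)       # target RTs well above baseline
```

Because each respondent is compared only against their own baseline, the index sidesteps the normative-comparison problem described above, and the continuous DSI value can be thresholded or reported as a degree-of-faking score.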

Two hundred and six participants were randomly assigned to simulated faking or honest testing conditions, and were administered two types of integrity test items (overt and personality-based) and an RT control scale, with group classification (faking/honest) as the main dependent variable.

As expected, RTs in the faking condition were longer than those in the honest condition, for both overt (d = .43) and personality-based items (d = .47). In addition, in both conditions, overt RTs were shorter than personality RTs (ds = .80-.85). In terms of accuracy, the DSI correctly classified more cases as faking or honest, to a similar degree in both overt and personality tests (65.6%-68.5%), than the traditional normative RT approach using similar cut-scores (55.8%-56.6%).

The results of this study suggest that the DSI can be a viable method for identifying faking in both overt and personality-based tests. Two additional advantages of this method are the ability to identify faking within a single test administration, and the ability to measure the degree of faking as a continuous score rather than as a dichotomous above/below cut-score classification. Future research will examine the criterion validity of such faking estimates in actual applied settings.

Gabi Pecker, Midot