Review the article attached on the topic of the ethics of drug testing in the employment setting.

After reading the article, write a 500-word article critique by addressing each of the following items:

  • Briefly introduce and summarize the article.
  • Do the author’s arguments support his or her main point?
  • What evidence supports the main point?
  • How could the topic of this article apply to your personal or professional life?
  • How could the topic apply to an organization you have observed?
  • How would you explain the role of leadership in corporate culture? How would you describe leadership styles, and how do they affect ethical decision-making?
  • Although drug testing is technically legal, are there any inherently unethical practices associated with it?
  • What conclusions can you draw about the ethical issues facing business leaders?

The Unit II Article Critique should be at least 500 words in length, double-spaced, and written in Times New Roman, 12-point font.

Workplace ethics has become an increasingly important concern as both large and small companies emphasize ethical behavior, and schools and professional groups devote significant resources to ethics training.

Despite the increased attention, very little research has explored the use of pre-employment selection measures to influence organizational ethics.

This article describes the initial development of a structured ethical integrity interview that can be used in the selection of applicants.

After reviewing the pertinent research literature, the development and testing of a structured ethical integrity interview is described.

In the concluding section we examine directions for future testing and study.

The Development of a Structured Ethical Integrity Interview for Pre-Employment Screening

John C. Hollwitz and Donna R. Pawlowski, Creighton University

Workplace ethics has become an important concern for business educators, researchers, and practitioners. Today, the business community seems to accept an obligation to common norms for right action, at least in principle. Some professional societies are explicitly devoted to business ethics; applied ethics is a popular topic at meetings and seminars; and several professional journals are now devoted to the subject (Murphy, 1993).

Apart from issues of social justice, ethical behavior makes good business sense. Unethical behavior is expensive.

Discriminatory practices, negligent hiring, and product quality failure can incur large legal judgments.

In the United States, employee malfeasance costs companies $15 billion to $50 billion per year. The value of white collar deviance alone is one thousand percent of the annual loss incurred by all street crimes and burglaries. Thirty percent of American bankruptcies are substantially attributable to workers who willfully disregard company norms and social law (Bernardin & Cooke, 1993; Jones & Terris, 1991a, 1991b; Kochkin, 1987; Meinsma, 1985; Murphy, 1993; Shepard & Duston, 1988).

The Journal of Business Communication, Volume 34, Number 2, April 1997, pages 203-219. © 1997 by the Association for Business Communication.

Today, many companies use pre-employment integrity tests to detect job applicants who are likely to violate standards of appropriate behavior. These tests are controversial. Although critics claim that research and ethical practice cannot support the use of these measures for employee selection, supporters have reached the opposite conclusion (Dalton & Metzger, 1993; Dalton, Wimbush, & Daily, 1994; Goldberg, Grenier, Guion, Sechrest, & Wing, 1991; Guastello & Rieke, 1991; Martin, 1991; Martin & Terris, 1994; Murphy, 1987; U.S. Congress, Office of Technological Assessment, 1990). Support for integrity testing is based on three conclusions. First, a growing consensus holds that pre-employment integrity tests predict important job-related criteria regardless of job type or setting. It is reasoned that the tests tap broad personality constructs, such as “work conscientiousness,” which influence a wide range of job attitudes and behaviors.

Second, the tests lack adverse impact against females or members of racial and ethnic minorities. Third, applicants generally regard the tests as appropriate for job screening (Murphy, 1993; Ones, Viswesvaran, & Schmidt, 1993; Sackett, Burris, & Callahan, 1989; Sackett & Harris, 1984).

Despite growing support, existing tests suffer from at least three deficiencies with respect to workplace ethics. First, they focus on honesty, rule compliance, and impulse control. These attitudes and behaviors may not represent the full domain of ethical behavior. The second problem with existing tests is more practical. Honesty tests are expensive proprietary products, costing from $16 to $50 per form, and publishers frequently hesitate to release their tests for independent investigation (Lillienfeld, Alliger, & Mitchell, 1995; LoBello & Sims, 1993; Martelli, 1988). In addition to these problems, existing tests are neither designed nor validated for employee selection (Cohen, Pant, & Sharp, 1993; Reidenbach & Robin, 1990). The constraints of cost, proprietary control, and design suggest that it would be fruitful to develop and investigate alternatives to commercial honesty tests.

One alternative is to create an employment interview measure that focuses on ethical integrity. There are at least two benefits to this method. First, gathering ethical data about applicants in this manner would be economical. Although interviews are not cost-free (Schmitt, 1976; Ulrich & Trumbo, 1965), most organizations routinely conduct interviews with applicants. Second, the approach could increase the legal defensibility of pre-employment ethics screening, since some states prohibit employers from administering any sort of written integrity measure (O’Bannon, Goldinger, & Appleby, 1989). To date, no statute excludes honesty or integrity interviews. The purpose of the present paper is to report on our efforts to develop and test an interview format that examines applicants’ ethical integrity.

Instrument Development and Testing

Interview Format

Conventional wisdom held that employment interviews have low reliability and validity (Arvey & Campion, 1982; Hunter & Hunter, 1984; Mayfield, 1964; Ulrich & Trumbo, 1965; Wagner, 1949). However, recent research strongly indicates that the structured interview format yields significant increases in both reliability and validity, making it appropriate for ethical integrity screening (Jones & Terris, 1991b; Murphy, 1993). Investigation shows that structured interviews can help to minimize some of the difficulties found with unstructured interviews (Arvey, Miller, Gould, & Burch, 1987; Campion, Pursell, & Brown, 1988; Dipboye & Gaugler, 1993; Harris, 1989; Marchese & Muchinsky, 1993; McDaniel, Whetzel, Schmidt, & Maurer, 1994; Wiesner & Cronshaw, 1988; Wright, Lichtenfils, & Pursell, 1989).

Two structured interview formats have dominated research and prac- tice.

Both are based on job analyses, and both use empirical processes to develop items and scoring keys. Structured situational interviews are based upon the assumption that intentions predict behaviors. In a situational interview, applicants are asked to respond to hypothetical scenarios. Situational interviews seem well suited to pre-employment integrity screening because they are usually developed using Maas’s (1965) procedures for minimizing social desirability influences on interviewer judgments (Latham, Saari, Pursell, & Campion, 1980).

Structured behavioral interviews, or patterned behavior description interviews, are based on the assumption that past behavior is the best predictor of future behavior. Although item and scoring key development is identical for both situational and behavioral interviews, applicants describe specific instances from their experience in job-related domains in the behavioral interview.

Both interview formats have demonstrated acceptable reliability and validity (Latham & Saari, 1984; Latham, Saari, Pursell, & Campion, 1980; Maurer & Fay, 1988; Orpen, 1985; Weekly, 1987). In addition, structured interviews show no more adverse impact against women, minorities, and older subjects than do traditional interviews (Campion, Pursell, & Brown, 1988; Lin, Dobbins, & Farh, 1992).

Although situational interviews were traditionally thought to be more predictive than behavioral interviews for pre-employment screening, this belief has not been thoroughly tested. Two recent studies detected negligible differences between situational and behavioral interviews (Campion, Campion, & Hudson, 1994; Conway, 1995). A third found the behavioral interview more predictive than its situational counterpart (Pulakos & Schmitt, 1995).

Only one published study has used structured interviews for integrity screening. Hollwitz and Wilson (1993) reported on several studies of a structured integrity interview used to select child-care workers. Their interview instrument included 21 behavioral and situational interview items, and followed a modification recently tested by Huffcutt and Arthur (1994). The modification permits interviewers an opportunity to probe applicants’ responses with unstructured, follow-up questions. Interview scores predicted supervisory ratings of interpersonal effectiveness, impulse control, and conscientiousness, as well as scores on the London House Employee Safety Inventory. In addition, subjects and interviewers rated the interview as highly job related. To summarize, structured interviewing offers interesting possibilities for pre-employment integrity screening. Given that both behavioral and situational interviews appear reliable and valid, it would appear that the structured format might be used effectively to conduct pre-employment screening of applicants’ ethical integrity.

Defining Interview Dimensions of Ethical Integrity

The first step in developing a structured interview is to define the dimensions of job performance to be measured. The process is straightforward for most areas of job knowledge, skills, and abilities. However, personality-oriented dimensions are more challenging because they tend to be less reliably observable than, for instance, the ability to write complex computer programs or to repair complex machinery. Apart from this limitation, ethical behavior at work does not seem systematically different from ethical behavior anywhere else. If behavior is consistent, then someone who is ethical in one setting is likely to be ethical in another (Murphy, 1993). Unfortunately, this equation begs the question, since it provides little guidance for determining what ethical behavior is, apart from its cross-situational consistency. Standardized measures of ethical decision-making may not be helpful, since they are generally designed for descriptive analyses and not for human resources practice. On the other hand, many standardized measures of personality demonstrate strong predictive validity for workplace behaviors. Regrettably, these measures are unsuitable for human resource uses. Most measures assess degrees of psychopathology, and there is no guarantee that their scales sample the nonclinical range of ethical decision making or behavior. Equally troubling is that their use has been increasingly challenged in the courts for violating privacy statutes and due process protections (Jackson & Kovacheff, 1993).

Our first step toward developing a structured ethical integrity interview was to define empirically the dimensions of ethical workplace behavior.

A personality-oriented analysis of ethical worker characteristics provided this definition.

We administered a critical incidents survey to 100 volunteers, asking them to describe in detail specific episodes characterizing the least ethical person and the most ethical person from their personal work experience. Participation was solicited from members of four international internet groups devoted to applied communication, human resources, management, and applied psychology. Volunteers included human resources professionals, psychologists, executive MBA students, and university instructors of organizational behavior. Seventy-six respondents provided 161 narrative incidents, including 85 incidents of dramatically unethical workplace behavior.

For the present study, the retranslation procedure consisted of six steps. First, we independently reviewed the critical incidents and sorted them into categories. Upon jointly reviewing these categories and agreeing on 39 distinguishable incident types, we jointly grouped the 39 labels into ten overall dimensions. Complying with the Smith-Kendall method to assure clarity of the dimensions and their consistency with the original data, subjects also retranslated the original data into the dimensions. The goal of this procedure was to achieve at least 67% agreement between the original dimensions and the incident linkages.

Two forms of retranslation were used. First, we independently reassigned incidents into dimensions, yielding a 75% agreement rate. Although this agreement met the Smith-Kendall criterion, there was considerable overlap between two particular dimensions, labelled “trust violations” and “relationship manipulation.” As a result, the categories were collapsed into one dimension, labelled “relationship manipulation.” With the dimensions combined, retranslation agreement exceeded 95%.

Next, we asked two organizational behavior specialists to perform the retranslation exercise, obtaining 71% agreement. The results met the Smith-Kendall criterion for the job relatedness of dimension labels.
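The percentage-agreement check used throughout the retranslation steps above reduces to a few lines of code. The following sketch shows the calculation; the incident labels and assignments are illustrative, not the authors' data:

```python
def retranslation_agreement(original, reassigned):
    """Percent of incidents placed in the same dimension on retranslation."""
    assert len(original) == len(reassigned) and original
    matches = sum(1 for a, b in zip(original, reassigned) if a == b)
    return 100.0 * matches / len(original)

# Illustrative assignments for four incidents (hypothetical labels).
first_pass = ["Theft", "Evasiveness", "Backstabbing", "Theft"]
second_pass = ["Theft", "Evasiveness", "Impulse Control", "Theft"]

agreement = retranslation_agreement(first_pass, second_pass)  # 75.0
meets_smith_kendall = agreement >= 67.0  # the 67% Smith-Kendall criterion
```

With three of four incidents matching, agreement is 75%, which clears the 67% threshold the authors cite.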

The retranslation procedure produced a final list of nine ethical integrity dimensions. In reviewing this list, we noted the absence of “altruistic deviance,” a dimension which has emerged in studies of workplace honesty and which seemed likely to be directly related to workplace ethics. Altruistic deviance is a form of nonegoistic relativism which occurs when an employee violates company norms or social laws for what he or she perceives as the common good, and not necessarily for personal gain (Murphy, 1993). An example might be a securities trader who attempts to help the company by withholding information from a regulatory agency or from customers. Thus, “altruistic deviance” was included for item development because of its apparent relationship to the ethical dilemma of whether ends justify means. Job-related ethics dimensions and sample incidents appear in Table 1.

Retranslation provided interesting information about how workplace ethics may differ from workplace honesty. Theft, the dimension which received the largest number of incidents, is a central component of honesty.

Security Violations included several incidents common to honesty screening, including some relating to drug or alcohol use, and to disregard of organizational policies.

However, many ethics dimensions diverged from honesty. Most have a strong interpersonal communication component. Relationship Manipulation, Interpersonal Deception, Backstabbing, Evasiveness, and Impulse Control consisted almost entirely of incidents describing interpersonal interactions and relationship management. Discrimination and Sexual Harassment included many instances of unethical interpersonal messages, as well as behaviors.

It seems likely that workplace ethics is centrally involved with interpersonal communication.

Items and Scoring

The ethical dimensions formed the basis for developing two forms of the structured ethical integrity interview. Sixteen interview items were developed to address the ethics dimensions. Two goals guided item development. First, although multiple items were written for some dimensions, at least one interview item was written for each dimension. Two dimensions included a large number of incidents and a wide variety of incident types. Theft contained the most incidents, which included numerous examples of property theft, time theft, and theft of ideas (plagiarism). The emergence of this category suggests that ethical integrity bears some relationship to employment honesty, or that employment honesty has an ethical component. Security Violations, the second largest category, included incidents describing sabotage, drug and alcohol use, and industrial espionage. As a result, multiple items were written for these two dimensions.

Several dimensions (for example, Theft and Security Violations) contained incidents pertaining both to employee behaviors and to employee attitudes. Research on employee honesty has demonstrated that such attitudes predict employee behavior. Workers who are punitive about thieves are less likely to be thieves themselves. Pre-employment honesty tests often solicit applicants’ judgments about how harshly to discipline workers caught stealing, drinking, or violating safety policies (Cunningham & Ash, 1988; Murphy, 1993; Sackett, Burris, & Callahan, 1989). Consequently, some dimensions produced items that examine both personal behavior and attitudes. A description of the interview item dimensions and specific question targets appears in Table 2.

A second goal guided interview development. Structured interviews can focus either on past behaviors or on hypothetical situations. Each interview item was written in both a behavioral and a situational format, resulting in the development of two versions of the interview. It is important to note, however, that each interview taps the same dimensions and in the same sequence. The different versions were created for two reasons. First, it was important to have comparison data between the two formats, because, as we have mentioned, past research has not yet resolved their relative benefits. Having alternate forms of the interview permits such a comparison. In addition to research benefits, there is also a practical advantage to having two versions of each item. Behavioral items ask about job experiences in the recent past, but not all applicants will have had such job experiences. In this instance, an equivalent situational item provides an acceptable alternative.

Table 1
Global Dimensions of Ethical Job Behavior and Representative Critical Incident Labels

1. Theft: Stealing company funds or property; using client or customer lists for personal gain; stealing others’ ideas; time theft such as unauthorized absenteeism or timesheet deception.

2. Relationship Manipulation: Coercion; using status to compel behavior; arm-twisting to reach personal goals.

3. Interpersonal Deception: Lying; misleading by words or deeds; making false promises.

4. Discrimination: Racist or sexist comments; using race, age, or gender in hiring, appraisal, compensation, or promotion decisions.

5. Security Violations: Revealing trade secrets to rivals or competitors; drug or alcohol use at work; failing to report policy violations; active sabotage (e.g., behaviors directed against machinery, supplies, or equipment); passive sabotage (e.g., failing to fix defective equipment); ignoring policies and procedures.

6. Impulse Control: Losing one’s temper; verbal or physical violence directed against people or objects.

7. Sexual Harassment: Attempting to use position or reward power to compel personal or sexual relationships with coworkers, subordinates, clients, or customers.

8. Backstabbing: Verbally attacking coworkers, subordinates, clients, or customers without their knowledge.

9. Evasiveness: Sidestepping responsibility; passing the buck; refusing accountability.

10. Altruistic Deviance: Violating company norms or social laws for the good of the organization or workplace unit.

Each interview item has four components. Item development consisted of creating interview orientations, question stems, probes, and a Likert-type response scale. In conducting an interview, interviewers would read the item orientation, the stem, and as many follow-up probes as needed to assign an item score, using the response scale.

Each item is administered in the same way to all interviewees.
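The four components just described (orientation, stem, probes, and response scale) can be represented as a simple record. The sketch below is one possible encoding, not the authors' instrument; the stems, probes, and anchor text are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InterviewItem:
    dimension: str         # e.g., "Impulse Control"
    orientation: str       # neutral stage-setting statement, read verbatim
    behavioral_stem: str   # asks about past job behavior
    situational_stem: str  # poses an equivalent hypothetical
    probes: list           # optional follow-up questions, used as needed
    anchors: dict          # response scenarios keyed 5 (ideal), 3 (acceptable), 1 (unacceptable)

# A hypothetical anger-control item (wording is ours, not the authors').
anger_item = InterviewItem(
    dimension="Impulse Control",
    orientation="Sometimes, jobs cause a lot of pressure. People differ in "
                "how they respond to this kind of pressure.",
    behavioral_stem="Tell me about a time when pressure built up on the job. "
                    "What did you do?",
    situational_stem="Suppose a coworker blamed you unfairly for a mistake. "
                     "How would you respond?",
    probes=["What happened next?", "How did others react?"],
    anchors={5: "ideal response", 3: "acceptable response", 1: "unacceptable response"},
)
```

Keeping both stems on one record reflects the design decision, noted above, that every item exists in a behavioral and a situational version while sharing the same dimension and scale.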

Table 2
Interview Item Dimensions and Targets

1. Theft: Time theft
2. Theft: Theft of ideas
3. Theft: Personal feelings about theft
4. Theft: Punitiveness towards theft
5. Relationship Manipulation: Coercion
6. Relationship Manipulation: Violations of trust
7. Interpersonal Deception: Lying
8. Security Violations: Sabotage (personal behavior)
9. Security Violations: Sabotage (punitiveness)
10. Security Violations: Drug & alcohol use
11. Security Violations: Use of insider information
12. Impulse Control: Anger control
13. Backstabbing: Backstabbing
14. Evasiveness: Evading responsibility
15. Sexual Harassment: Punitiveness about harassment
16. Altruistic Deviance: Bending rules

Orientation

Each item begins with an orientation, including a short introductory statement designed to set the stage for the subsequent question and to offer applicants a wide latitude of possible responses to the interview item. For example, one question targets applicants’ ability to control anger and frustration on the job.

The orientation for this question reads as follows:

Sometimes, jobs cause a lot of pressure.

People differ in how they respond to this kind of pressure. Some people need to let off steam right away, making sure that people know how they feel. Others just bottle up the pressure and take care of it later.

It is important that orientations avoid creating ethical problems.

Providing interviewees with a wide range of responses to transparent integrity items encourages both high-integrity and low-integrity applicants to answer honestly. As a result, such orientations are no different from item stems, or indeed from distractor options in multiple-choice examinations. But to be effective and ethically appropriate, the orientations must avoid two dilemmas. On one hand, the orientations cannot imply that one end of the range represents a highly desirable answer; to do so would defeat the purpose of the integrity item. On the other hand, the orientations cannot imply that answers at the opposite end of the continuum are unacceptable; to do so would be deceptive and, therefore, unethical. Instead, the goal is to encourage honest responses which might fall anywhere on that continuum. In the authors’ experience and in previous research, applicants at different levels of employment integrity have responded to such orientations by freely discussing incidents which utilized the full range of the continuum (Hollwitz & Wilson, 1993).

Applicants’ willingness to admit such incidents in response to an integrity item represents the great paradox of integrity assessment.

Common sense suggests that people interested in getting hired would consistently try to look their best during employment screening, even at the expense of complete honesty. Yet the research on written integrity testing suggests that most applicants consistently answer honestly, even when they are dishonest applicants. They tend to distort their responses only when specifically asked to do so, especially when they are not cued to the “right” answer (Cunningham, Wong, & Barbee, 1994; LoBello & Sims, 1993; Ryan & Sackett, 1987).

The purpose of neutral orientations is to help minimize response cues in the event that applicants should be attempting to guess what the “right” answer might be.

Stems

Question stems follow the orientation.

Two stems were written for each item.

A behavioral stem addresses past job behaviors or attitudes; a situational stem poses a hypothetical situation, asking applicants how they would respond in a given situation. Attempts were made to keep the two versions of the stems parallel. For instance, Question Three asks about personal responses to theft.

The behavioral version of this question has the following stem:

Tell me about a time when you realized that someone with whom you worked was taking things home from work, either trivial things (like paper clips) or more expensive items. How did you respond?

The situational version of this question has a different stem:

One day, you and a coworker are leaving the office for the day.

On your way to the elevators, your coworker says, ‘Gosh, I forgot. I’m out of computer paper.

I’ll be right back.’ The coworker hurries back into the office and returns with a box of computer paper.

You realize that your coworker is bringing the paper home for personal use. How would you respond?

Probes

For each item, the orientation and stem are presented word-for-word to each applicant.

The response scale for each item is used identically for each applicant. However, in most structured behavioral or situational interviews, applicants might not immediately provide sufficiently detailed answers for the interviewer to form confident judgments (Hollwitz & Wilson, 1993). Interview reliability increases to the extent that interviewers have the opportunity to ask follow-up probes to elicit as much information as necessary to assign an item score confidently (Huffcutt & Arthur, 1994). Like item stems, probes are keyed to the response anchors. Interviewers are trained to use them as needed until they feel that an applicant has completely responded to the item. For example, Question Sixteen addresses Altruistic Deviance, an integrity construct describing deviant behavior undertaken to advance the organization’s or the work group’s goals. The item attempts to determine the degree to which people feel that the end justifies the means, and how comfortable they feel breaking established rules or violating common norms in order to achieve bottom-line results. Suggested probes for the behavioral version of this item include the following:

Did the end justify the means in this situation?

Was your company too serious about rules, or did it want the job done, whatever it takes?

Who was responsible for interpreting the rules in situations like this?

Response Anchors

Interviewers must score each item on a five-point, Likert-type scale. The scale points are anchored as behavioral expectations according to the Maas (1965) process, a standard procedure for structured interview scoring. Three brief scenarios were created for each interview item. The scenario associated with an item score of 5 represents the type of answer that would be expected of an ideal applicant for the position. A scenario typical of a 1 score represents the kind of answer expected of the worst applicant imaginable for the position.

A 3 scenario is an answer characteristic of an acceptable applicant.

In other words, someone who answered at the 3 level (or whose overall interview scores averaged 3.0) would meet minimal hiring qualifications.

We developed response anchor scenarios iteratively, refining drafts of the item anchors until agreement on the complete set was achieved.

The behavioral and situational versions of each interview item contain the same response anchors. The use of common response anchors for each version of an item has two benefits. First, research comparisons of behavioral and situational formats have seldom used a common anchor design, suggesting that observed differences in interview outcomes may have been confounded with response scoring (Pulakos & Schmitt, 1995). Second, the use of common response anchors permits interviewers to shift from behavioral to situational items for applicants who have not had the recent job experiences. For example, Question Ten asks about drug and alcohol use at work. The question seeks information about personal drug and alcohol use at work. The behavioral and situational items use the same orientation:

People’s feelings about alcohol or drug use differ widely. Some people don’t drink much at all; others use alcohol or drugs regularly without appearing to be affected very much.

All applicants would receive this orientation, but they would then hear a different question stem, depending on whether they were receiving the behavioral or the situational interview. The behavioral stem asks about recent job experiences:

How often in the last six months has it been o.k. to have a drink or to use some drug at work, without affecting how well you’ve been able to do your job?

The situational stem poses a hypothetical question:

Suppose that your company had a policy forbidding any drinking during work hours. Last week, two people in your work group had a couple of beers during their lunch break, and your boss found out about it. What should your boss do about it?

Response anchors for both versions of the question are the same. Following the Likert-type format, the response anchors for this question would be:

5 It’s never o.k. to drink or use drugs at work, under any circumstances; immediate action to the point of termination should be taken about anyone who does so; this almost never happens on the job; other workers have an obligation to report anyone engaged in such behaviors to superiors.

3 It’s never o.k. to drink or use drugs at work, under any circumstances; if someone else does so, the company ought to consider the circumstances; this happens very seldom, if ever, on the job; immediate response should not include termination without a consideration of other factors.

1 It may be o.k. to drink or to use a drug at work; people sometimes do so under certain circumstances; it’s important to understand that people need to let off steam; as long as nobody’s being hurt by it, it could be o.k.

In interview administration, intermediate scores of 2 and 4 can be used for answers that fall between the representative benchmarks.

For example, a 4 response to Question Ten would clearly meet the standards set in the 3 response, and would contain some (but not all) of the elements suggested by a 5 response. Similarly, a 2 response would represent a candidate who was a little bit better than absolutely unacceptable, but not good enough on the item to earn a score of 3.

After the anchors were written, four volunteer raters checked anchors using the Smith and Kendall (1963) retranslation procedure.

Raters included an industrial psychologist, a university-affiliated ethicist, a university professor in speech communication, and a university professor in organizational communication. Raters separately read the response anchors and assigned each anchor a 5, 3, or 1, respectively defined as responses characterizing ideal applicants, acceptable applicants, and unacceptable applicants. Final agreement among the raters exceeded 93%, exceeding the Smith-Kendall 67% criterion.

Scoring

Interview scoring is straightforward. Interviewers are trained to take brief notes while conducting the interview and to defer item scoring until the interview is over.

Because there are sixteen interview questions, each with a five-point scoring scale, interview scores can range from 16 to 80.

As discussed previously, an item score of 3 represents an acceptable answer. As a result, an overall item average of 3, or a total interview score of 48, represents a passing score. Applicants with such scores would meet minimum hiring qualifications. It is important to note that this scoring scheme is compensatory; that is, applicants can receive an overall passing score if they score 1 or 2 on some items and 4 or 5 on others.
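The compensatory rule reduces to summing the sixteen item ratings and comparing the total against 48. A minimal sketch (the function names are ours, not the authors'):

```python
def interview_total(item_scores):
    """Sum of 16 item ratings on a 1-5 scale; totals range from 16 to 80."""
    assert len(item_scores) == 16
    assert all(1 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def passes(item_scores, passing_total=48):
    """Compensatory rule: only the total matters, so low ratings on some
    items can be offset by high ratings on others."""
    return interview_total(item_scores) >= passing_total

# An applicant averaging exactly 3 meets minimum hiring qualifications.
assert passes([3] * 16)
# A 1 on one item can be offset by a 5 on another.
assert passes([1, 5] * 8)
```
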

Because integrity measures may tap highly sensitive dimensions, some organizations believe that a substandard performance (a rating of 1 or 2) on specific items should automatically disqualify an applicant from consideration, regardless of overall interview performance.

In their studies of a structured integrity interview for child-care workers, Hollwitz and Wilson (1993) reported that the sponsoring organization chose to treat alcohol and illegal drug use items as disqualifier questions. Unless applicants earned a score of 3 or better on each of several chemical use items, they were disqualified from the position, regardless of scores on the rest of the interview.
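The scoring rules described above combine a compensatory total with optional hard cutoffs. An illustrative sketch (not the authors' code) of that logic, with the 16-item structure, the 48-point passing total, and disqualifier items that must each earn at least 3:

```python
# Hypothetical implementation of the compensatory scoring scheme described
# in the text: sixteen items scored 1-5, passing total of 48 (an average
# item score of 3), plus optional disqualifier items that must each score
# at least 3 regardless of the overall total.

PASSING_TOTAL = 48  # overall item average of 3 across 16 items

def evaluate_interview(item_scores, disqualifier_items=()):
    """Return (total, passed) for a 16-item structured interview.

    item_scores: dict mapping item number (1-16) to a 1-5 score.
    disqualifier_items: item numbers that must each score >= 3.
    """
    if set(item_scores) != set(range(1, 17)):
        raise ValueError("expected scores for items 1 through 16")
    if any(not 1 <= s <= 5 for s in item_scores.values()):
        raise ValueError("item scores must fall between 1 and 5")

    total = sum(item_scores.values())
    passed = total >= PASSING_TOTAL and all(
        item_scores[item] >= 3 for item in disqualifier_items
    )
    return total, passed

# Compensatory: a weak item can be offset by a strong one...
scores = {i: 3 for i in range(1, 17)}
scores[1], scores[2] = 1, 5   # a 1 balanced by a 5 still averages 3
print(evaluate_interview(scores))                          # (48, True)

# ...unless the weak item is designated a disqualifier, as Hollwitz and
# Wilson's sponsoring organization did with chemical use items.
print(evaluate_interview(scores, disqualifier_items=(1,)))  # (48, False)
```

The item numbers and function name are invented for illustration; the article specifies only the scale, the passing threshold, and the disqualifier policy.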

Discussion

As presented in this paper, development of the structured ethical integrity interview followed customary standards for designing pre-employment measures. The next stage of the instrument’s development will be examining its reliability and validity.

Interviewers have already been trained to administer and score the questions appropriately, and the next step involves administering the test to subjects who will receive the behavioral or situational forms of the interview. Ultimately, the interview will be tested against criteria of workplace integrity. As with other pre-employment measures, analyses will include group comparisons based on race, gender, and age to ensure that the interview lacks adverse impact against members of designated protected groups.

Apart from final validation testing, several important matters pertaining to the development, testing, and use of the interview described in this paper are worthy of discussion. First, in its current form of initial validation the interview does not indicate to subjects that items target ethical integrity.

A reviewer of this manuscript invited us to consider whether this presented an ethical dilemma of its own. Several points suggest not.

First, the interview makes no attempt to deceive applicants or to lure them into inappropriate admissions. The items are straightforward and transparent with respect to integrity dimensions, formulated according to accepted human resources practices to help guarantee clear job relatedness and fairness.

Second, subjects sign informed consent agreements which specify that the interview proposes to assess ethical integrity. Third, theoretical expectations and research previously discussed suggest that such an explicit statement of integrity purpose makes little difference in applicants’ responses.

Fourth, the administration of the interview meets accepted testing standards, which require that pre-employment measures be reliable and job related but not that they illustrate the tests’ purposes or induce socially desirable response sets (which, as noted earlier, is not easily done anyway).

A pre-employment integrity test is most likely to violate ethical practices when it is not job related or when it intrudes unnecessarily upon applicant privacy. Such a situation seems more likely to occur with personality-oriented measures whose integrity scales are not both transparent and job related (Jackson & Kovacheff, 1993).

On the contrary, clear-purpose integrity tests raise few objections and incur no violation of applicant rights or due process, so long as they are job related and reliable (Jones, Ash, & Soto, 1990; Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993). Finally, one of our goals in validating this interview was to assess the degree to which applicants perceive it as fair and acceptable, using standards developed for other human resources procedures (Lind & Tyler, 1988; Sweeney & McFarlin, 1993).

In addition to the ethical issues surrounding administration of the interview, several research questions are intriguing. First, a consensus is emerging that written pre-employment honesty tests measure general conscientiousness (Ones, Viswesvaran, & Schmidt, 1993).

No research has explored whether ethical integrity reflects the same factor. Second, item response theory (IRT) has proven increasingly valuable as a guide to measurement analysis and design (Hambleton, Swaminathan, & Rogers, 1991; Lord, 1980). In addition to providing alternative techniques for exploring group differences, IRT offers sophisticated procedures to maximize measurement precision. Because of rapid advances in desktop computer technology, item response theory is now accessible to researchers with even a modest background in measurement theory. To date, IRT has been applied primarily to written measures.

No published research has used item response theory to analyze interview design and performance. Indeed, communication studies are unique among the social sciences for not having begun widespread applications of IRT in survey and criterion designs.

No published research has applied IRT analyses to pre-employment integrity interviews.

Finally, the critical incident study which formed the basis for item development suggested that workplace ethics has a strong interpersonal communication component. Further research should examine the relationship of interview scores to measures of communication style.

A particularly interesting question is whether communication orientation might account for variance in workplace integrity beyond measures of honesty.

NOTE

John C. Hollwitz and Donna R. Pawlowski are both in the Department of Communication Studies, Creighton University, Omaha, Nebraska 68178. Copies of the situational and behavioral versions of the structured integrity interview are available from the first author.

REFERENCES

Arvey, R. D., & Campion, J. E. (1982). The employment interview: A summary and review of recent research. Personnel Psychology, 35, 281-322.
Arvey, R. D., Miller, H. E., Gould, R., & Burch, P. (1987). Interview validity for selecting sales clerks. Personnel Psychology, 40, 1-11.
Bernardin, H. J., & Cooke, D. K. (1993). Validity of an honesty test in predicting theft among convenience store employees. Academy of Management Journal, 36, 1097-1108.
Campion, M. A., Campion, J. E., & Hudson, J. P. (1994). Structured interviewing: A note on incremental validity and alternative question types. Journal of Applied Psychology, 79, 998-1002.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. Personnel Psychology, 41, 25-42.
Cascio, W. (1991). Applied psychology in personnel management (4th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Cohen, J., Pant, L., & Sharp, D. (1993). A validation and extension of a multidimensional ethics scale. Journal of Business Ethics, 12, 13-26.
Conway, J. M. (1995, May). Construct validity of situational and past behavior description questions. Paper presented at the meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.
Cunningham, M. R., & Ash, P. (1988). The structure of honesty: Factor analysis of the Reid Report. Journal of Business and Psychology, 3, 54-66.
Cunningham, M. R., Wong, D. T., & Barbee, A. P. (1994). Self-presentation dynamics on overt integrity tests: Experimental studies of the Reid Report. Journal of Applied Psychology, 79, 643-658.
Dalton, D. R., & Metzger, M. B. (1993). ‘Integrity testing’ for personnel selection: An unsparing perspective. Journal of Business Ethics, 12, 147-156.
Dalton, D. R., Wimbush, J. C., & Daily, C. M. (1994). Using the unmatched count technique to estimate base rates for deviant behavior. Personnel Psychology, 47, 818-828.
Dipboye, R. L., & Gaugler, B. B. (1993). Cognitive and behavioral processes in the selection interview. In N. Schmitt, W. C. Borman, and Associates, Personnel selection in organizations (pp. 135-170). San Francisco: Jossey-Bass.
Drauden, G. K., & Peterson, N. G. (1974). A domain sampling approach to job analysis. Unpublished manuscript (available from Test Validation Center, 215 State Administration Building, St. Paul, MN 55155).
Goldberg, L. R., Grenier, J. R., Guion, R. M., Sechrest, L. B., & Wing, H. (1991). Questionnaires used in the prediction of trustworthiness in pre-employment selection decisions: An APA Task Force report. Washington, DC: American Psychological Association.
Guastello, S. J., & Rieke, M. L. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: Sage Publications.
Harris, M. M. (1989). Reconsidering the employment interview: A review of recent literature and suggestions for future research. Personnel Psychology, 42, 691-726.
Hollwitz, J., & Wilson, C. (1993). Structured interviewing in volunteer selection. Journal of Applied Communication Research, 21, 41-52.
Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Jackson, D. N., & Kovacheff, J. D. (1993). Personality questionnaires in selection: Privacy issues and the Soroka case. The Industrial-Organizational Psychologist, 30, 45-50.
Jones, J. W., Ash, P., & Soto, C. (1990). Employment privacy rights and pre-employment honesty tests. Employee Relations, 15, 561-575.
Jones, J. W., & Terris, W. (1991a). Integrity testing for personnel selection: An overview. Forensic Reports, 4, 117-140.
Jones, J. W., & Terris, W. (1991b). Selection alternatives to the preemployment polygraph. In J. W. Jones (Ed.), Preemployment honesty testing: Current research and future directions (pp. 39-52). New York: Quorum Books.
Kochkin, S. (1987). Personality correlates of a measure of honesty. Journal of Business and Psychology, 1, 236-247.
Latham, G. P., & Saari, L. M. (1984). Do people do what they say? Further studies on the situational interview. Journal of Applied Psychology, 69, 569-573.
Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.
Lilienfeld, S. O., Alliger, G. M., & Mitchell, K. E. (1995, May). Effects of coaching on overt and covert integrity test responses. Paper presented at the meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.
Lin, T. R., Dobbins, G. H., & Farh, J. L. (1992). A field study of race and age similarity effects on interview ratings in conventional and situational interviews. Journal of Applied Psychology, 77, 363-371.
Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. New York: Plenum.
LoBello, S. G., & Sims, B. N. (1993). Fakability of a commercially produced pre-employment integrity test. Journal of Business and Psychology, 8, 265-273.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
Maas, J. B. (1965). Patterned expectation interviews: Reliability studies on a new technique. Journal of Applied Psychology, 49, 431-483.
Marchese, M. C., & Muchinsky, P. M. (1993). The validity of the employment interview: A meta-analysis. International Journal of Selection and Assessment, 1, 18-26.
Martelli, T. A. (1988). Pre-employment screening for honesty: The construct validity, criterion validity, and test-retest reliability of a written integrity test. Unpublished doctoral dissertation, Ohio University.
Martin, S. L. (1991). Honesty testing: Estimating and reducing the false positive rate. In J. W. Jones (Ed.), Preemployment honesty testing: Current research and future directions (pp. 107-119). New York: Quorum Books.
Martin, S. L., & Terris, W. (1994). Reemphasizing Martin and Terris’s (1991) ‘Predicting infrequent behavior: Clarifying the impact on false-positive rates.’ Journal of Applied Psychology, 79, 302-305.
Maurer, S. D., & Fay, C. (1988). Effect of situational interviews, conventional structured interviews, and training on interview rating agreement: An experimental analysis. Personnel Psychology, 41, 329-344.
Mayfield, E. C. (1964). The selection interview: A reevaluation of published research. Personnel Psychology, 17, 239-260.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
Meinsma, G. (1985). Thou shalt not steal. Security Management, 29, 35-37.
Murphy, K. R. (1987). Detecting infrequent deception. Journal of Applied Psychology, 72, 611-614.
Murphy, K. R. (1993). Honesty in the workplace. Pacific Grove, CA: Brooks-Cole.
O’Bannon, R. M., Goldinger, L. A., & Appleby, G. S. (1989). Honesty and integrity testing: A practical guide. Atlanta, GA: Applied Information Services.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679-703.
Orpen, C. (1985). Patterned behavior description interviews versus unstructured interviews: A comparative validity study. Journal of Applied Psychology, 70.
Pulakos, E. D., & Schmitt, N. (1995). Experience-based and situational interview questions: Studies of validity. Personnel Psychology, 48, 289-308.
Reidenbach, R. E., & Robin, D. P. (1990). Towards the development of a multidimensional scale for improving evaluations of business ethics. Journal of Business Ethics, 9, 639-653.
Ryan, A. M., & Sackett, P. R. (1987). Pre-employment honesty testing: Fakability, reactions of test takers, and company image. Journal of Business and Psychology, 1, 248-256.
Sackett, P. R., Burris, L. R., & Callahan, C. (1989). Integrity testing for personnel selection: An update. Personnel Psychology, 42, 491-529.
Sackett, P. R., & Harris, M. M. (1984). Honesty testing for personnel selection: A review and critique. Personnel Psychology, 37, 221-245.
Schmitt, N. (1976). Social and situational determinants of interview decisions: Implications for the employment interview. Personnel Psychology, 29, 79-101.
Shepard, I. M., & Duston, R. (1988). Thieves at work: An employer’s guide to combating workplace dishonesty. Washington, DC: Bureau of National Affairs.
Smith, P. C., & Kendall, L. M. (1963). Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. Journal of Applied Psychology, 47, 149-155.
Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49-76.
Sweeney, P. D., & McFarlin, D. B. (1993). Workers’ evaluations of the ‘ends’ and the ‘means’: An examination of four models of distributive and procedural justice. Organizational Behavior and Human Decision Processes, 55, 23-40.
Ulrich, L., & Trumbo, D. (1965). The selection interview since 1949. Psychological Bulletin, 63, 100-116.
U.S. Congress, Office of Technology Assessment. (1990). The use of integrity tests for pre-employment screening. Washington, DC: Author.
Wagner, R. (1949). The employment interview: A critical summary. Personnel Psychology, 2, 17-46.
Weekley, J. A. (1987). Reliability and validity of the situational interview for a sales position. Journal of Applied Psychology, 72, 484-487.
Wiesner, W. H., & Cronshaw, S. (1988). The moderating impact of interview format and degree of structure on interview validity. Journal of Occupational Psychology, 61, 275-290.
Wright, P. M., Lichtenfels, P. A., & Pursell, E. D. (1989). The structured interview: Additional studies and a meta-analysis. Journal of Occupational Psychology, 62, 191-199.
