
Selecting an Assessment Technology: Five Business Principles Vital to Your Success

Topic: Assessment Tools
By Dr. Jason E. Taylor


Executive Summary

The goal within most organizations is to hire a happy, productive workforce that stays on the job longer and produces more. That simple mission is often very hard to execute without an HR tool that is proven to predict a candidate’s on-the-job performance and tenure. Volumes of research show that an assessment technology—when positioned and deployed correctly—will reduce turnover and improve productivity while creating a reservoir of objective performance data designed to identify prospective employees who are good fits in specific job roles.

To fulfill the mission of hiring a productive workforce that stays on the job longer and produces more, assessment technology has become a mission-critical component for organizations. With the right assessment technology, your company should have the means to identify, develop, and retain a highly productive workforce, which is one of the vital ingredients to business success.

I want to share with you lessons I’ve learned over the last decade on how to most effectively select, deploy, and study the effectiveness of an assessment technology solution. Equipped with these five principles, you possess the fundamental components that must be top-of-mind when purchasing an assessment technology solution.

The Principles

Principle #1: An assessment technology should be…

Proven to predict employee performance.


Assessment technologies are designed to assist organizations in identifying candidates who will be successful on the job. To determine which assessment can best meet your organization’s needs, you must be convinced of the system’s ability to predict performance. From an objective, scientific perspective, performance predictability of an assessment solution is most often documented through two concepts: reliability and validity.

Reliability—Only Part of the Equation

I met a good friend of mine at a golf course in West Texas many years ago. Our plan was to enjoy a round or two and catch up on old times. However, due to a high volume of golfers waiting in line, the course officials paired us up with two “local boys” (that’s a Texanism for two grown men you don’t know).

I was the last to tee off after watching my friend and the two local boys really set the pace by crushing their drives. Embarrassingly, I “topped” the ball, meaning I barely caught enough of the ball to send it gently skipping down the middle of the fairway about fifty yards from the tee box.

As golf etiquette would have it, the player furthest from the hole must hit the next stroke. As I took a couple of practice swings, I noticed the two local boys waiting in front and just to the right of my position on the fairway.

In a neighborly fashion, I called out, “Hey, you boys might want to move. I have a nasty slice.” (My ball always curls off to the right.) One of the two nonchalantly called back, “Aw, don’t worry, you won’t hit us!” Not wanting to disrupt the flow of the game, I warily continued to line up my shot. I tightened my grip on the club, took one more practice swing, and then let it rip.

It really was a beautiful shot—featuring my standard beautiful slice in all its glory. The ball curved so fast I did not have time to yell “fore.” Before I knew it, the ball whistled straight at the local boys and struck one with a loud thud! (I suppose he was fortunate—the ball struck that padded area between the hamstrings and the lower back.) The golfer with the smarting backside shrieked so loudly that everyone on the course felt his pain.

The ever-present slice in my golf swing provides the perfect illustration of the concept of reliability in an assessment technology.

In golf, I reliably slice the ball to the right side of the course every time; you can count on it, and, unfortunately, the local boys did not heed the warning. To relate this to assessment terms, anytime you assess someone, you want to receive a reliable result. The reliability of an assessment focuses on the consistency of the responses, but not the accuracy. In practical terms, an assessment that asks several similar questions—using slightly different words—would yield similar answers. Put another way, if a person took an assessment, then took it again later, the results should be very similar. By contrast, if you receive a wide variety of responses, you would likely determine that the measure is not reliable.

The statistical reliability of an assessment is measured in several different ways. It would take a lengthy white paper to cover this topic to my satisfaction, but, in simple terms, a rule of thumb for a behavioral assessment instrument is to achieve reliability of .7 to .8. This range will vary due to the type of assessment that was used. I would encourage you to not only ask about the reliability of any assessment technology, but also the background data that defines how that number was generated.

It is important to remember that reliability is only part of the equation. Without validity, you will not have a full picture of the assessment’s effectiveness. For example, to better understand the actual success of my golf game (or lack thereof), we need to analyze my validity to determine how accurately I can hit the ball in the hole. (At least I am reliable…one out of two isn’t bad.)

Validity—Does the Assessment Work?

Validity answers a very different question: Does it work? In the game of golf, the number of strokes to complete a round provides a validity estimate of a player’s golfing abilities. It is important to understand that one round at one golf course does not provide an accurate representation of one’s golfing ability. Golfers attain different scores depending on the weather, the type and difficulty of the course, the number of holes played, the number of strokes required to make par, and so on. It is not one round, but the body of evidence collected over time, that provides the validity of a player’s golf game.

This concept translates nicely to assessment validity. When evaluating the validity of an assessment technology, you should focus your evaluation efforts on the volume of studies, types of roles, and the sample sizes of the various studies. Generally, assessments should deliver a validity coefficient in the neighborhood of .2 to .4. Like reliability, but even more so, the range of the validity coefficient may vary due to the context of the study, sample sizes, length of study, etc. Dig into the reported validity coefficient as well as the supporting documentation that details the study process.

Collectively, discussions around reliability and validity should provide you with the confidence you need to narrow the choices of possible assessment technologies for your organization.

Principle #2: An assessment technology should be…

The catalyst to continuous workforce improvement.

To stay competitive, every company should desire to see continuous improvement in the workforce. The advantages that an organization gains through the pursuit of continuous improvement are numerous: more productive workers, better process efficiencies, lower overall expenses, and higher revenues, to name a few. The key to that kind of long-lasting improvement lies in bettering the performance of every member of the organization. After all, individuals make up teams, teams make up departments, departments comprise company divisions, and divisions form corporations. Individual performers are the building blocks of the entire structure.

Often the key role that individual performers play in creating a culture of continuous improvement is overlooked. Traditionally, companies are very good at monitoring and tracking performance of the masses at the company, regional, and group levels. However, those same organizations often miss the mark when it comes to tracking and monitoring performance at the individual level. Without solid tracking of individual job performance, companies are unable to evaluate performance on the front lines where it actually occurs: at the individual level.

As part of your evaluation of assessment technologies, look for processes that rely heavily, if not solely, on objective performance metrics to document the effectiveness of individuals in the workforce. Individual performance numbers will not only define “success” in your company and culture, but also serve to link behaviors to performance when a behavioral assessment tool is introduced into the hiring procedure.

This is how your assessment technology can become the catalyst for continuous workforce improvement. If positioned properly, the assessment software will be a crucial collection point of individual behaviors—and related performance metrics—that dictate what great performers look like in specific jobs.

To derive the best results from an assessment technology, it is important to understand performance in terms of data at the individual level. Understanding individual performance will provide you with a clear performance picture surrounding the objectives and desired outcomes for a position. The clearer the performance picture, the more equipped you are to accurately capture the behaviors and skills needed for success.

Once an assessment technology is installed, your organization’s ongoing maintenance will include reevaluating the clarity of performance data on a continual basis in order to improve the behavioral/skill capture. In this process, it is commonplace for companies to focus on higher-quality individual performance metrics to better leverage their assessment technology. This effect will automatically raise the bar in terms of selection, training, development, and employee productivity across any position where an assessment technology is deployed.

In summary, focusing on detailed, objective performance data collection methods will inevitably lead to a better capture of behaviors and skills. A better data capture through an assessment technology leads to the accumulation of workers who are more aligned with desired business performance goals. Eventually, one component improves the other, fueling an ongoing cycle of continuous improvement.

Principle #3: An assessment technology should be…

Focused on fit; more is not always best.

Have you heard the saying, “More is better”? In the game of golf, you have a variety of golf clubs designed for different situations. Some clubs are for driving the ball great distances down the fairway, while other clubs are used for shorter shots such as chipping or putting. Imagine how your golf game would suffer if you believed that the bigger club was always better. On a par three hole, you may overshoot the green with one swing. Even worse, once you make it to the green, you will struggle putting the ball in the hole using your driver. At that point, the bigger club actually hurts your ability to maneuver the ball where you want it to go, which is in the hole. By that logic, more is not always better.

The same concept applies when it comes to using an assessment. Typically, assessments measure a collection of characteristics (referred to as factors, dimensions, etc.). Many people assume—incorrectly—that it is always better to be on the higher side of a characteristic (the More is Better Syndrome).

Let’s consider the implications of this thought process. Is being smarter always better? What about filling a mundane job vacancy? How long would a brilliant person stay in a non-thinking, repetitive job? Is being highly sociable a great characteristic for every job? Consider an isolated role where interaction with others is detrimental to good performance. Would a person who thrives on socializing enjoy, or be driven to success in, this type of role?

Of course, I’m exaggerating these scenarios to drive home the point: it is important to avoid the mistake of assuming more is always better. The key to fully utilizing the power of the assessment is to find just the right amount of each characteristic to predict future success in a specific role.

By fine-tuning the subtle shades of each assessment characteristic to best describe your strongest performers, you will be better equipped to maximize the predictive power of your assessment tool. Again, great caution should be taken if your objective is to only use assessment characteristics in the context of “more is better.” That method of evaluation often leads to selection tactics based on incorrect assumptions. Additionally, you will effectively dismiss a large amount of hidden insight that could increase your power to identify future top performers who will stay in position longer.

Keep in mind that most assessment technologies are built according to the assumption that more is better. Your evaluation of assessment technologies should only include systems that measure a large group of behavioral characteristics; moreover, the system must offer flexibility in specifying the optimal amount of each characteristic an ideal candidate would possess to succeed in the target job.
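The fit idea can be sketched in a few lines: instead of totaling raw trait scores, score each candidate by closeness to a target profile. The trait names, 1–10 scale, and target values below are all hypothetical:

```python
# Hypothetical target profile for an isolated, detail-heavy role (1-10 scale).
target_profile = {"sociability": 4, "detail_focus": 8, "pace": 6}

def fit_score(candidate):
    """Score 0-10 by closeness to the target profile; 10 is a perfect fit."""
    distance = sum(abs(candidate[trait] - ideal)
                   for trait, ideal in target_profile.items())
    # Worst possible total distance on a 1-10 scale, used to normalize.
    max_distance = sum(max(ideal - 1, 10 - ideal)
                       for ideal in target_profile.values())
    return 10 * (1 - distance / max_distance)

quiet_specialist = {"sociability": 4, "detail_focus": 8, "pace": 6}
maxed_out = {"sociability": 10, "detail_focus": 10, "pace": 10}

# The candidate who matches the profile outscores the candidate who
# simply maxes every trait.
print(fit_score(quiet_specialist))  # 10.0
print(fit_score(maxed_out))
```

Under a naive “more is better” sum, the maxed-out candidate would win every time; under a fit model, the quiet specialist does.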

Principle #4: An assessment technology should be…

More than just a score.

When selecting an assessment technology, it is important that the usefulness of the assessment goes far beyond a simple score or rating of the candidate. Overall scores are helpful when sorting and sifting candidates and narrowing the field, but the real value comes when you dig deeper and fully leverage all the rich information gathered from the assessment. Specifically, you should be able to apply the assessment information to areas such as enhancing the interview, on-boarding, determining future career paths, and developing employees over the long term.

Enhanced Interviews

Beyond providing a score, information gained from the assessment should improve your interview process. A quality assessment can effectively produce targeted interview questions designed to facilitate discussion around the specifics of a position. These targeted interview questions also provide a means to ensure consistency in your interviewing process regardless of the size or geography of your organization. Additionally, by using the targeted interview questions, you will maximize your time with the candidate. At a minimum, you will have a better understanding of the strengths and opportunities revealed by the assessment in relation to a specific position.

On-Boarding

On-boarding is the process of getting a new hire officially authorized for his or her first day on the job. This hiring phase includes the completion of various governmental and proprietary forms, plus any other paperwork required by the hiring company. To expedite this procedure, an assessment technology will typically be integrated with the company’s Human Resource Information System (HRIS) to pass on all relevant data previously collected on the candidate. In essence, the assessment platform should “fill in the blanks” required on electronic forms in the HRIS database through a transfer of information from the candidate’s original application. Without this integration (more on integrations in the next section), on-boarding remains a manual process and any potential efficiencies that could be driven from the assessment technology are negated. Direct your evaluation of assessment technologies to only those systems with proven integration success with common HRIS technologies.

Career Pathing

Future career paths are another area where an assessment technology should allow you to go beyond a score. In companies with an eye to the future, the selection strategy is to hire not only for the immediate need, but also to determine each employee’s viability for future positions. For example, if you are tasked with hiring an assistant manager, you may also be interested in a candidate’s potential to be a manager at some point down the road. Your assessment technology should provide you with the insight to understand and evaluate the potential for candidates to move into other positions, and not just the job for which they applied.

Employee Coaching and Development

Companies are often asked to do more work with fewer people on the payroll. Therefore, coaching and employee development programs have become an area of emphasis in most organizations. Consider future coaching tools as an integral part of the assessment technology purchase. The assessment process captures a wealth of data, which should be used throughout the life cycle of an employee. By scientifically examining the relationships between performance data and assessment characteristic scores, the assessment technology provides specific, detailed developmental targets to support continued growth of the assessed individual.

One of the biggest hindrances to creating a quality coaching and development program is finding specific content statistically related to performance on the job. Assessment technology provides the perfect vehicle to supply accurate, job-related content for training in the current position, as well as in future positions.

Principle #5: An assessment technology should be…

A tool that makes your organization better.


Although this principle serves as number five, it fits the old adage, “Last but not least.” Central to any new purchase or program decision is the need to determine how your organization will ultimately define value. A great approach to this question is to ask, “How will this assessment technology make us better?” You will find that value comes in many forms; each organization has a unique focus that is proven to breed success. Three universal ways in which an assessment technology can better an organization are:

• Better processes.
• Better retention.
• Better performance.

Better Process

The primary function of an assessment technology is to address the fundamental challenge of identifying candidates who produce more and stay longer on the job. In fulfilling that primary function, your assessment technology should not hinder your overall HR process, but in fact should streamline the hiring workflow. This is most often accomplished through integrations with existing software systems designed to manage the flow of information as candidates move from their initial applications to their first day on the job.

The advent of applicant tracking software (ATS) allowed companies to manage the data generated during the hiring process. ATS tools—not to be confused with assessment technology—were designed only to collect, organize, and move candidates through the HR process. In other words, they simply manage bits of information. Some applicant tracking tools provide a few features such as pre-screens or light assessment functionality, but the central focus is on organizing information. These features are handy but secondary to the primary objective of hiring the right fit for the job.

To enjoy the functionality of assessment technology and an ATS, one business option is to select an assessment technology that can co-exist side by side with an ATS. However, this arrangement isn’t a requirement. Quality assessment technology now provides features to categorize and sort people, collect résumés, store applications, provide detailed reports, and do many other practical tasks to manage your peopleflow—the path every candidate takes from the “Apply Now” portal to the final hire/no hire decision. The focus must always be on selecting the right candidate for the job, but be aware that an assessment technology may build in enough information management features to ensure that your hiring process is smooth, user friendly, and meets your peopleflow needs.

Assessment + ATS = Integration

If your organization has determined to use, or is currently using, an applicant tracking software, then you want to make sure that the assessment technology has the ability to integrate with that specific ATS. Integration is defined as the process of connecting two or more technology solutions together to create a seamless flow of information from one system to another. The seamless flow should be present for both the applicant and the end-user. The objective of an integration is to simplify and streamline the data collection and delivery process.

Integrations are common in the marketplace today. Many systems such as tax credit, background checks, performance management, applicant tracking, and payroll or human resource information systems (HRIS) are connected through a seamless integration. You should expect an assessment technology to provide you with a history of integrations and examples of current clients already using the assessment technology integrated with another ATS or HRIS.

Better Retention

A business objective that is directly addressed by an effective assessment technology solution is improving employee retention. Excessive employee turnover affects all organizations in the form of both direct and indirect costs. Direct costs include the placement of job postings, plus the labor hours devoted to screening and interviewing candidates. There are many indirect costs to consider as well. A few examples are downtime in the vacant position, lost opportunities, overtime expenses for others to cover job vacancies, not to mention the potential negative effect on company morale.

Regardless of your current retention issues, the stakes are high and worthy of careful consideration. Cash America, an international financial services company that studied its hire-termination trends over a two-year period, conservatively calculated the direct and indirect costs for replacing a store manager at $10,000 each, and around $2,500 for each customer service representative. Whether your numbers are higher or lower, it’s readily apparent that for a company with thousands of employees, significant reductions in employee turnover equate to millions of dollars saved over time.
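The replacement costs quoted above make the arithmetic easy to sketch; the headcounts and the reduction rate below are hypothetical:

```python
# Replacement costs quoted in the Cash America example above.
cost_per_manager = 10_000
cost_per_csr = 2_500

# Hypothetical annual losses for a company with thousands of employees.
managers_lost = 120
csrs_lost = 900

annual_turnover_cost = managers_lost * cost_per_manager + csrs_lost * cost_per_csr
print(annual_turnover_cost)  # 3450000

# Even a 30% reduction in turnover (again, hypothetical) saves over
# a million dollars a year at this scale.
savings = round(annual_turnover_cost * 0.30)
print(savings)  # 1035000
```

Plugging in your own replacement costs and headcounts turns “millions saved over time” from a slogan into a number you can defend.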

A common thread among much of the existing employment research is the fact that candidates who are good behavioral fits for their particular jobs tend to stay longer and turn over less frequently. It is important to recognize that employee retention is a strong indicator of an improvement effect from an assessment technology. Most companies keep detailed records of terminations for payroll purposes, which makes good business sense. No company would willingly continue to pay an individual who is no longer employed. These records may provide important data for a quality hire-termination study. For example, as part of the aforementioned Cash America study consisting of data on 3,248 employees, the hire-termination data documented that the company experienced a 43% turnover reduction in managerial positions after implementing an assessment technology.

Keep in mind that obtaining study-worthy results for all positions in the organization simply may not be possible. Expectations for turnover studies should be appropriate to the scope of the position. Roles with small populations, lack of accurate hire and termination data, or an insufficient amount of time for data collection can affect your ability to conduct a quality study.

Better Performance


I have never met an executive who did not measure success in terms of performance. Companies may evaluate performance in many different ways, but one business rule is undeniable—improved performance comes from improving your incumbents and selecting better people. Because so many companies desire to improve their workforce, assessments are a great way to drive improvement. An assessment technology modeled after actual performance data provides a strong tool to select those who have the greatest potential to perform well in the role.

When evaluating an assessment technology, a very common question is often posed by company executives, included in requests for proposals (RFPs), and/or submitted by committees: “What is your validity coefficient?” By latching on to this statistical term, the organization is really asking, “Does it work?” Or, “Can you prove it has made other companies better in target positions?” Let’s take a moment to dissect the meaning of this question.

As we touched on in Principle #1, it is important to interpret any answer to the validity question in the context of the particular situation. Remember my golf game. If you ask me what I shoot, like any self-respecting person I am going to tell you my best score. You might think I am a decent golfer based on that one score. What I conveniently neglected to tell you was the situation surrounding that score. I left out the part about all the holes being par threes with no water, sand traps, or trees to get in the way. On an average competitive golf course, my performance would be much worse.

Interpreting validity is more than just asking, “What is your validity coefficient?” You should dig into the specifics of the situation. Pay attention to specific items such as sample sizes, types of data being studied, types of positions, or any other particular items of interest. Some studies may not, at face value, seem impressive until you understand the situation and interpret the results in that context.

For example, by deploying an assessment technology, a large call center enterprise hoped to identify job candidates who could reduce the average time spent on incoming phone calls. A study of 704 employees over their first 12 months on the job found that employees hired using the assessment process averaged call times that were 1.14% shorter than calls taken by their non-assessed coworkers. That translates to a savings of approximately four seconds per call, or about the time it took you to read this sentence.

At first glance, are you impressed with a 1.14% improvement? Before you answer, consider this: across the entire corporation consisting of multiple call centers, each second shaved from the average call time is valued at $175,000 over the course of a year. That four-second improvement saves over $700,000 per year company-wide, and the assessment technology has paid for itself many times over.
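The arithmetic behind that figure is worth making explicit:

```python
# Reproducing the call-center arithmetic: roughly four seconds saved per
# call, with each second of average call time valued at $175,000 per year
# company-wide (both figures from the study described above).
value_per_second_per_year = 175_000
seconds_saved_per_call = 4

annual_savings = seconds_saved_per_call * value_per_second_per_year
print(annual_savings)  # 700000
```

A seemingly tiny percentage improvement, multiplied across every call in every center, is where the return on the technology actually lives.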

While there are plenty of success stories, be aware that the reverse can occur. A study may appear very impressive at first glance, but when the situation is exposed to the light, the results may be found lacking due to tiny sample sizes or some other extreme set of conditions.

Breaking down the question, “What is your validity coefficient?” a bit deeper, we find that it is framed in the singular: the person asking wants one number or one value to represent the entire concept of “Does it work?” or “How has this made someone else better?” It is important to realize that a solid, proven assessment technology should be able to show many studies from different companies, positions, and situations. Each study, based on the situation, should show a relationship (in one form or another) between the assessment outcome and the performance metric. The documented volume of evidence should go well beyond one “validity coefficient” and provide a substantial body of ongoing research proving the technology has made, and continues to make, other companies better.

Just as with a hire-termination study, obtaining concrete performance results for all positions may not be possible. Temper your expectations for performance studies according to the scope of the position. Small sample sizes, a lack of objective performance metrics, or an insufficient amount of time for data collection can affect your ability to conduct a quality study.

When evaluating an assessment technology, ask to see multiple client case studies that demonstrate significant performance improvements based on quality sample sizes. Reputable assessment technologies should provide access to a technical manual packed with studies that detail significant improvements in the areas of turnover and performance.

Summary


There you have it…the list of five business principles that should guide your decision on your next purchase, or upgrade, of an assessment technology. To recap, here are the five principles:

• Principle #1: An assessment technology should be proven to predict performance.
• Principle #2: An assessment technology should be the catalyst to continuous workforce improvement.
• Principle #3: An assessment technology should be focused on fit; more is not always best.
• Principle #4: An assessment technology should be more than just a score.
• Principle #5: An assessment technology should be a tool that makes your organization better.


This is by no means an all-inclusive list, but if an assessment falls short on one or more of these principles, keep shopping. Your efforts will deliver great dividends for your company when the right assessment technology is in place.

One tip I recommend to those evaluating different assessment technology tools is to create a wish list of features and functionality. Be sure that the needs of all levels of end-users are included in your wish list. Then categorize the list into groups consisting of the “must haves” and the “like to haves.” This little exercise will help you focus your efforts during the evaluation process to ensure you achieve maximum improvement within the organization.


About the Author

Jason Taylor uses science and technology to design tools for the selection and talent management field. Annually, the tools under Taylor's direction match several million employees to employers. Taylor often speaks on talent management and selection technology at conferences across many industries including HR, retail, hotel, restaurant, real estate, and industrial-organizational psychology. Member: APA and SIOP.