Ultimately, this discussion argues that companies should
- adopt modern hiring processes that focus on outcomes, and
- reject the popular (but ineffective) legacy kabuki interviewing practices that plague the industry.
The concepts within also underpin many of the philosophies behind my Interview Circuit: A Modern Hiring Playbook.
Team and Market are the most important ingredients for success.
Which comes first is hotly debated, but experience (and industry consensus) says these two are the high-order bits. Great teams can win markets, and great teams can create markets. In either case, great teams tend to have great outcomes.
A leader’s priority, therefore, should be obvious.
However… companies—especially high-growth startups—tend to under-index on tooling for building great teams.
People Operations (née Human Resources) is considered a cost center, and CHROs (fractional or full-time) are among the last members hired into the Executive Team. Interviewer training is often non-existent, and interviewers are inconsistent. Recruiting is informal and proximal. As such, teams can find themselves hiring net-negative contributors—contributors who, once removed, increase team output. A fraction of that hidden expense, recast as an upfront recruiting investment, pays for itself by improving hiring outcomes for every new teammate from day zero.
Much of what I’ve synthesized below is an amalgam of self-study,
hard-won failure, big-corporate training, and deep collaboration with certified co-conspirators in People Operations functions… with a heavy application of continuous introspection (kaizen).
This synthesis informs my Hiring Playbook, which receives continual updates as we learn more about what works best.
As an employer, here are my assertions based on experience and data:
- Diverse teams outperform the index.
- High-trust teams outperform teams of objectively higher-skilled individuals with less trust.
- Diverse, high-trust teams are more resilient (and not resistant) to change.
And yet, legacy hiring thinking hews towards:
“Do I like this individual?”
“Do they have the skills?”
“Can they handle the job?”
over the more modern:
“How will this person contribute to the team’s output?”
Companies¹ prefer hiring referrals because doing so can greatly reduce time-to-hire metrics, even though referrals accept offers at only a marginally higher rate than non-referrals. While this advantages those with existing networks/connections, it is often counterproductive towards increasingly-laggard DEIB goals and the long-term benefits of building resiliency.
Companies are also hostile towards job seekers. (I speak from experience.)
I do not think it intentional, but the reality is bleak: applications must be filled out repeatedly on error-prone web forms, bespoke cover letters go unread, and a swift rejection notice is an uncommon kindness against the more typical stony silence.
Should a candidate get the opportunity to interview, the next steps are unclear, the interviews redundant, and the timing unknown.
Talent likes challenges. But the best talent has a far greater appetite for traversing the boundaries of their expertise than for remaining stoic throughout a protracted job search and interview process. While referrals and internal advocates can soften the pain of the job application process, some folks prefer to work with new people or explore new industries.
Given all of the above, a leader’s approach, therefore, should be obvious.
Modern Hiring Processes rely on
- the adoption of Structured Interviews,
- the rejection of high-stakes assessments,
- real-world work product reviews,
- independent evaluation, and
- bidirectional understanding.
Structured Interviews improve hiring outcomes.
Regimented, pre-scripted questions designed to measure specific traits, skills, and abilities will generate better data and evidence for fairly comparing candidates than unscripted, freewheeling interviews².
When organizations don’t employ structured interviews, candidates often find themselves answering the same questions during every single interview… which, unless they’re applying to be a Big Box Store Greeter, is neither fun nor useful for the candidate. Nor is it for the company, which wastes its most valuable resource: productive employee attention and time.
Creating a structured interview process requires the hiring manager to do more work up front than what they are probably used to. But, that’s their job and they should get over it. (I prefer managers to proactively build their teams rather than be reactive. Don’t you?)
I was introduced to Structured Interviews in the late 2010s by then HR-startup Greenhouse. Greenhouse’s Applicant Tracking System (ATS) has become popular within technology organizations because it encourages and enables structured-interview data capture, along with many modern, technology-driven conveniences and integrations.
Despite its quirks, I’m a huge fan, and I’m not alone: many CTOs go rogue from their corporate’s legacy ATS and buy the not-inexpensive, not-always-intuitive Greenhouse for their own purposes.
The Interview Circuit uses Structured Interviews extensively (but not exclusively).
High-Stakes Assessments are not Good Predictors of Performance
Companies want to understand real-world output in real-world situations. Yet many continue to rely on synthetic interactions (i.e., interviews), which alone are a poor predictor of job fit and performance.
Instead, companies should evaluate a candidate’s actual work product and review how they’ve performed in real-world scenarios. (More on this in a moment, but allow me a few words on why high-stakes assessments deserve my ire.)
Most knowledge workers work in an open-book, un-observed, team-based environment where outcome is more important than output, value delivery cycles are long, feedback loops are short, pride of authorship is irrelevant, and few things are truly urgent. Interviews are (generally) closed-book, poker-faced, tightly-observed, single-player experiences in an unfamiliar environment where value creation must be demonstrated immediately.
This is the default behavior for company interviews, and it is counterproductive. It follows the same logic as predicting a successful marriage from a promising speed date.
Unless the role calls for a high degree of poise under pressure with limited information, high-stakes assessments do little to inform how job applicants might perform in normal scenarios. Corollary: those who perform well in high-stakes situations do not necessarily cope well with the day-to-day doldrums and may manufacture drama to cope!
Do these types of interviews demonstrate how a candidate might perform in the day-to-day? Unless it’s an intense job, probably not. (Data suggests that I’m right.³) Will they be communicative? Responsive? Available? Clear? Considerate? Helpful? For most day-to-day knowledge work, these interviews will tell you nothing.
Where high-stakes assessments do make sense are in jobs that require a degree of finesse in high-stakes, high-stress situations. Even so, it’s worth remembering that the existential threat of needing income from a job is a materially different stressor than the threat of doing wrong by your teammates. Be sure that you’re measuring the right thing during the interview!
Important reminder for people managers and executives: a degree of deftness in handling the unexpected is often a marker of strong leadership experience. However, while it may be important in leadership roles, extending that requirement to every role may be unnecessary.
Evaluating Actual Work Product is Best
Instead of high-stakes assessments, assess work product: real work product.
Many companies rely on take-home tests and/or toy problems, asking candidates to complete throwaway assignments for free. Instead, companies could ask for work samples. For software developers, existing work product could be an API they’ve written for personal use, a medium.com post talking through a problem space they’ve explored, or really anything that demonstrates mastery and command of their core skillset.
A quick note: if a candidate does not have anything shareable, it should not be treated as a red flag. Many companies maintain strict confidentiality agreements and it is an unreasonable expectation that people would produce the same type of work in their hobbies as they do in their jobs.
I’ve seen hiring managers, particularly in the technology industry, view a lack of out-of-work code as a lack of passion… this is a mistake: not all software developers write production code just for fun!
Where shareable work product does not exist, companies should consider paying candidates for their time to produce some.
As a technical leader who relies heavily on automation, I have an ever-growing list of small projects that would improve my operation but aren’t urgent. These projects usually fall outside my existing teams’ priority rubrics and core systems, and it’s work that I’d happily pay for.
For individual contributor roles in tech, examples include:
- internal tool creation and feature enhancement
- code fixes / refactors / improved test coverage
- open-source feature contributions that would be helpful to our dependency stack
And so forth.
I’ve done similar challenges for other roles, such as asking candidates to create a marketing campaign, test ad, or design identity. Even when we don’t use the work directly, it often helps guide us down the right path!
For leadership roles, including people managers and prospective board members, companies will sometimes use interviews as a form of free consulting. It’s a dishonest practice, but it exists. I suggest restructuring synthetic assessments as trial periods, which give all parties far more information about working with each other.
Is paying candidates for work product expensive? Yes, of course it is. But it is far cheaper than finding out a candidate isn’t going to work out after you hire them, to say nothing of the knock-on costs of restarting the recruiting process and the lost productivity. Plus, the work (if it is useful) is something that would eventually need to be done anyway. Heck, maybe the company and the candidate discover they’d prefer to engage in a contractual relationship rather than a full-time one.
Hiring managers should ensure their interviews and assessments are congruent, not approximate or synthetic, to the role.
Each interview should be an entirely independent evaluation, both inside and external to the company. (We’ll get to external references shortly.)
For inside evaluations, interviewers should enter interviews with:
- good context on the role,
- limited context on the candidate, and
- zero commentary from either the hiring manager or previous interviewers.
A subtle eye-roll or an “oh, you’re going to like this gal” will unfairly color the experience for the next interviewer. Existing teammates tend to have high-trust, high-bandwidth communications, which is why it is imperative that no one discuss candidates’ performance with each other until the hiring manager reaches the hiring-decision stage. (Colocated smaller teams, where candidates are the obvious odd-person-out, should pay extra mind to this.)
ProTip™: If an interviewer cannot come to a decision whether or not the interviewee passed an interview, this is a fail… but not on the interviewee’s part. The interviewer should reach out to the hiring manager immediately to discuss why the interview was a fail and triage accordingly. If a Structured Interview cannot be counted on to generate an independent evaluation, doing it again simply wastes everyone’s time, money, and reputation.
On that note, most ATSs allow candidate scoring from 1 to 5, with 3 being “indifferent” or “don’t know”. An indifferent note is an interview fail, not a candidate fail. Interviewers should understand the importance of their evaluations, and hiring managers should reinforce that.
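The scoring policy above can be sketched as a small triage rule. This is a hypothetical helper, not any particular ATS’s API; the 1-to-5 scale matches the convention described here, but adapt the outcomes to your own tooling:

```python
def triage_scorecard(score: int) -> str:
    """Map a 1-5 ATS interview score to a triage outcome.

    A 3 ("indifferent" / "don't know") signals that the interview
    itself failed to produce an independent evaluation, so the hiring
    manager should debrief the interviewer and fix the script, rather
    than penalize (or advance) the candidate on no evidence.
    """
    if not 1 <= score <= 5:
        raise ValueError("expected a score from 1 to 5")
    if score == 3:
        return "interview fail: debrief the interviewer, fix the script"
    return "advance candidate" if score >= 4 else "decline candidate"
```

The key design choice is that the middle of the scale routes to process triage, not to the candidate’s record: an inconclusive interview is treated as a defect in the interview, consistent with the ProTip™ above.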
Hiring managers should also take evaluations from outside the company in the form of references. Not reference “checks”, but references.⁴
At the time of candidacy, previous employers and colleagues have far more real-world information about a candidate than hiring managers will have even after the most rigorous of interviews. Work-history interviews provide a rich database of real-world performance samples to draw from, with independent synthesis from a (usually) neutral third party. Insist on them.
Job interviews are as much about ensuring candidates understand what they’re getting themselves into as they are about the company learning about the candidate.
When you hear phrases like, “we just need to get a body in here”, companies are apt to rush past fit. However, a new hire, particularly a strong contributor, will leave in short order if expectations do not match reality. This is an expensive mistake.
Companies can demonstrate their commitment to getting this right by providing as much information about the specific role and team as is feasible and appropriate early on, ideally in the job description itself. This will not only generate higher-quality candidates, but attract ones who can envision this opportunity as their next career stop. Following this, companies can keep candidates engaged by explaining who they’ll be meeting with and why, giving them the best chance at preparing for each interview and demonstrating the requisite skills.
As a bonus, that detailed job description provides a blueprint for the eventual hire’s onboarding plan that meets expectations. Hiring managers might as well get that done up front.
What a Modern Hiring Process does not rely on
A non-exclusive list of often-used pillars of legacy recruiting practices.
Consensus
The hiring manager, often the team/people manager, is responsible for making the call. Not everyone will agree with every hiring decision, but all should commit to it, having had the opportunity to provide feedback prior to the offer.
Likability and Agreeability
People are generally likable and agreeable, but that doesn’t mean the most likable/agreeable candidate is the most desirable hire. Like most behaviors, both are a skillset: decide how important it is in the role (e.g. customer-facing) and move on.
Keep screening out assholes and egos, though!
Interview Performance
The best team members are not always the best interviewers, in the same way that solid teammates are usually more focused on executing as a team than showboating as individuals. Those who interview well often simply have more practice than those who do not. Focus on the strength of the candidate’s work, not the strength of the interview.
Let competitors conflate these two.
Pedigree and Hobbies
Brilliance is evenly distributed; opportunity is not. Life circumstances have outsized (and unfair) effects on career trajectory. Interviews and references will surface whether those effects were causal or merely correlative; do the work and you’ll find fantastic candidates that others overlook.
Strict Skill and Experience Requirements
Many job requirements aren’t actually requirements. Companies will attract more qualified (and more diverse) candidates when hard requirements are truly hard and preferences are expressed as preferences.
Modern hiring processes can be greatly improved with software, but not all software enforces helpful processes.
I’ve successfully implemented modern hiring with Lever, Greenhouse, RecruiterBox, Zoho People, and even Google Sheets. Vended software that enforces process is a nice-to-have, not a need-to-have. (But you really ought to have it, to enforce and train the process.)
I was recently on a call with Greenhouse’s CEO and asked him for advice on countering resistance from some peers and reports when implementing parts of my Interview Circuit, which relies heavily on up-front work from hiring managers. (Greenhouse has its own approach, called “Structured Hiring”, which is compatible with my Playbook.)
With permission, his response:
Quick story for you – about 20 years ago when I first started getting involved in hiring, we had no structure at all - I had never heard of the concept.
I was super confused about the conclusions we were reaching - managers would get in big arguments about a candidate with one swearing that this was the best person he’d ever met, and the other manager saying if we hired this person they would resign in protest. Yikes! What to do?
I realized I had no way to judge between these two competing claims because I had no idea what was going on in that interview room. So I decided to find out.
I went downstairs to the lobby of our building, where there was a Radio Shack (told you this was a long time ago!) and bought a few portable tape recorders.
I had my interview coordinator give a recorder to every interviewer as they went into an interview room, and collect it when they came out. On my commute home I would listen to the interviews.
Oh. My. God.
The questions people asked in those interviews – MY people! Smart, caring, thoughtful people! I could die. Silly questions. Impossible questions. Brain teasers. Repetitive questions. The repetitive questions! I sat there listening to a candidate get asked the same question for the 4th time, my head was ready to explode. And the candidate clearly thought our organization was a disorganized joke.
So, maybe you can ask your skeptical hiring managers to record some interviews, listen to them, then ask “Do we think we could do these better?”.
That’s my idea.
Daniel Chait <email@example.com>
The Interview Circuit: A Modern Hiring Playbook is the result of years of my “trying to do this better” with my companies and teams. I’m publishing it here—for free—as I think, as an industry, we can and should be doing better.
Please steal and improve upon these ideas. (And write me when you do!)
1. Zippia: Employee Referral Statistics - a random site I found, but the numbers pass the sniff test. I was hoping for something more rigorous.
2. Multiple points:
   - A meta-analysis of research on the validity of employment interviews found that structured interviews (where questions are standardized and based on job-related criteria) had a higher validity coefficient (0.51) than unstructured interviews (0.38) in predicting job performance. However, the validity of interviews varied depending on the job being assessed and the type of measure used to assess job performance. (McDaniel, Whetzel, Schmidt, & Maurer, 1994)
   - A study of selection methods used by the UK Civil Service found that job simulations had the highest predictive validity (0.37) in assessing job performance, followed by work sample tests (0.26) and assessment centres (0.21), while unstructured interviews had a validity coefficient of 0.14. (Harris, Schaubroeck, & Van der Vliert, 1999)
   - A study of selection methods used by the US Federal Government found that job knowledge tests and structured interviews were the most accurate predictors of job performance, with validities ranging from 0.41 to 0.54, while unstructured interviews had a validity coefficient of 0.14. (Schmidt, Oh, & Shaffer, 2016)
3. High-stakes interviews don’t correlate with job performance:
   - Schmidt, F. L., Oh, I. S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Journal of Applied Psychology, 101(3), 469–500.
   - Society for Human Resource Management. (2016). Assessing Talent: Past, Present and Future.
4. Who: The A Method for Hiring (Geoff Smart, Randy Street) is a great reference for conducting reference interviews.
Additional Background Material:
- Dynamic Reteaming: The Art and Wisdom of Changing Teams 2nd Edition (Heidi Helfand)
- Structured Interviews (Indeed)
- Westrum Organizational Culture (DORA)
- Don’t Send A Resume (Jeffrey J. Fox)