Why Editorial Hiring Is Different

Editorial roles are among the hardest to hire for accurately. The skills involved — language precision, judgement under pressure, domain knowledge, speed combined with accuracy — do not show up reliably in CVs, cover letters, or interviews. Yet the cost of a bad editorial hire is high: missed errors, damaged reputation, slow output, and the time spent managing underperformance before the inevitable rehire.

After 28 years of working with editorial teams across publishing, media, legal, financial, and corporate sectors, I have seen the same hiring mistakes appear over and over again. Here are the seven that cost organisations the most.

Mistake 1: Relying on the Portfolio Review

Portfolio review is the default editorial hiring method, and it is deeply flawed. A portfolio shows you the best work a candidate has produced over months or years, polished through multiple revisions and often shaped by the editors above them. It tells you almost nothing about how the candidate performs under time pressure on unfamiliar material.

The fix: use a portfolio to establish credibility and career trajectory, then use a timed editorial test to measure real-time performance. The test should always take precedence for shortlisting decisions.

Mistake 2: Treating Grammar and Editing as the Same Thing

Grammar competence is a necessary but not sufficient condition for editorial ability. Many hiring managers use a grammar test as the sole editorial screen and then discover that the candidate who passed it with 90% cannot copy-edit to a professional standard. Grammar tests measure rule-following. Copy-editing tests measure judgement — the ability to decide what needs changing, how to change it, and what to leave alone.

The fix: use grammar tests as a first-stage screener, then use a copy-editing or proofreading test for the shortlist. Never use grammar alone as the editorial assessment for roles where editing is the primary function.

Mistake 3: Ignoring Industry Knowledge

For specialist roles — medical editor, legal editor, financial editor, technical writer — the ability to engage with the domain terminology is as important as general editorial skill. A candidate who is an excellent general editor but has no familiarity with the terminology of their target sector will slow down the team, miss errors that require domain knowledge to catch, and require months of upskilling before becoming fully productive.

The fix: add an industry vocabulary test to the assessment process for any specialist role. A 15–20 minute vocabulary test for the relevant sector costs very little and prevents one of the most common expensive misfires in specialist editorial hiring.

Mistake 4: Skipping Benchmarking

Without benchmark data, test scores are uninterpretable. A score of 82% on your internally designed editing test might mean the candidate is exceptional — or it might mean they are squarely average. You have no way of knowing without a comparison population.

The fix: use an assessment platform that benchmarks scores against a large validated population. A percentile score that positions the candidate relative to all prior test-takers gives you the context to make a meaningful decision. Look for platforms with at least 50,000 benchmarked candidates; the larger the pool, the more stable and reliable the percentile data.
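To make the point concrete, here is a minimal sketch of how a percentile rank is computed against a benchmark pool. The pool values and the `percentile_rank` helper are illustrative, not taken from any particular assessment platform; a real benchmark population would contain tens of thousands of scores.

```python
import bisect

def percentile_rank(score, benchmark_scores):
    """Percentage of benchmark scores strictly below the candidate's score."""
    ranked = sorted(benchmark_scores)
    below = bisect.bisect_left(ranked, score)  # ties do not count as "below"
    return 100.0 * below / len(ranked)

# Toy benchmark pool of raw test scores (percentages).
pool = [55, 60, 62, 68, 70, 72, 75, 78, 80, 82, 85, 88, 90, 92, 95, 97]

# A raw score of 82% looks strong in isolation, but against this pool
# it sits only around the 56th percentile — squarely average.
print(round(percentile_rank(82, pool)))
```

The same raw score would land at a very different percentile against a weaker or stronger pool, which is exactly why an uninterpreted raw score is not a hiring signal.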

Mistake 5: Using Take-Home Tests Without Time Limits

Take-home editing tests are popular because they are easy to administer. They are also easy to game. A candidate given a passage to edit overnight can use grammar checkers, style guides, dictionaries, and — increasingly — AI tools to improve their output far beyond what they could achieve in real working conditions.

The fix: administer timed tests under controlled conditions, ideally on a dedicated assessment platform that logs time-on-task and prevents tab-switching. The test should mirror realistic working conditions: a fixed time window, no external resources, and a task that resembles actual editorial work.

Mistake 6: Testing Too Late in the Process

Many organisations administer editorial tests only after multiple interview rounds — by which point significant HR and hiring manager time has already been invested in candidates who may not meet the editorial standard. A candidate who fails a test after three interview rounds represents a substantial wasted investment.

The fix: move testing earlier. A short grammar screener can be administered immediately after the application stage. A full editorial assessment can be administered before first interviews. This structures the process so that time-intensive interview rounds are reserved for candidates who have already demonstrated a minimum level of editorial competence.

Mistake 7: Not Setting Role-Specific Score Thresholds

Editorial roles vary significantly in their demands. A junior editorial assistant needs to meet a different standard from a senior editor or editorial manager. Using the same pass threshold for all editorial roles means either setting the bar too low for senior positions or unfairly filtering out otherwise strong candidates for junior ones.

The fix: set percentile thresholds by role level. A reasonable framework: 50th percentile minimum for junior and assistant roles, 65th percentile for mid-level editorial roles, 80th percentile for senior editors and editorial managers. Adjust these based on your organisation's existing team benchmarks and the specific demands of each role.
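The framework above amounts to a simple lookup: one minimum percentile per role level, checked against each candidate's benchmarked result. This sketch uses hypothetical level names and the thresholds suggested above; adjust both to your own role definitions.

```python
# Hypothetical minimum percentile thresholds by role level,
# matching the framework suggested above.
THRESHOLDS = {
    "junior": 50,   # junior and assistant roles
    "mid": 65,      # mid-level editorial roles
    "senior": 80,   # senior editors and editorial managers
}

def meets_threshold(role_level: str, percentile: float) -> bool:
    """Return True if the candidate's percentile clears the bar for the role."""
    return percentile >= THRESHOLDS[role_level]

# The same 72nd-percentile result passes for a mid-level role
# but falls short of the senior bar.
print(meets_threshold("mid", 72))     # True
print(meets_threshold("senior", 72))  # False
```

Keeping the thresholds in one table rather than scattered across job specs makes it easy to review and recalibrate them against your existing team's benchmark scores.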

Building a Better Editorial Hiring Process

The best editorial hiring processes are structured, evidence-based, and efficient. They use automated assessment at the top of the funnel to filter quickly, reserve human time for candidates who have already cleared an objective threshold, and produce comparable data on every candidate who advances to interview. EditingTests.com was designed to support exactly this kind of process — with seven test types, instant automated scoring, and benchmarking against 130,000+ prior candidates.