What Is an Editorial Benchmarking Standard?

An editorial benchmarking standard is a defined, measurable level of editorial competence that an organisation uses as a reference point for hiring, promotion, and performance management decisions. Rather than making hiring decisions based on subjective impressions — this candidate seems strong, that one seems weaker — a benchmarking standard gives HR teams and hiring managers an objective, consistent criterion.

The standard might be expressed as a minimum percentile score on a validated editorial assessment (candidates must reach the 65th percentile or above on the proofreading test), or as a minimum raw score threshold, or as a combination of scores across multiple assessment types. The key feature is that it is explicit, documented, defensible, and applied consistently.
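
As a minimal sketch, a standard like this can be captured as structured data so it is applied mechanically rather than by interpretation. The assessment names and percentile values below are hypothetical illustrations, not recommendations:

```python
# A hypothetical benchmarking standard: each assessment has a minimum
# percentile a candidate must reach. Names and numbers are illustrative.
STANDARD = {
    "proofreading": {"min_percentile": 65},
    "copy_editing": {"min_percentile": 60},
}

def meets_standard(scores: dict[str, int]) -> bool:
    """True only if every required assessment meets its threshold."""
    return all(
        scores.get(name, 0) >= rule["min_percentile"]
        for name, rule in STANDARD.items()
    )

print(meets_standard({"proofreading": 70, "copy_editing": 62}))  # True
print(meets_standard({"proofreading": 70}))  # False: assessment missing
```

Because the rule is explicit data, it is also trivially documented and auditable, which matters for the defensibility point above.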

Why Most Organisations Do Not Have One

Editorial benchmarking standards are rare — not because organisations do not recognise their value, but because building one seems complicated. In practice, it is less complicated than it appears, and the return on investment is substantial.

Most organisations that do not have a benchmarking standard hire editorially by gut feel and portfolio review, advance candidates based on interview performance, and discover post-hire that there is significant variance in actual editorial quality across the team. They have no data to help them understand why some editorial hires perform well and others do not, because they never established a baseline.

Step 1: Test Your Existing Team

The foundation of any editorial benchmarking standard is data from your current team. Before you can set a meaningful hiring threshold, you need to know where your existing editorial team sits on the same scale you will use for candidates.

Administer the same assessments to your current team that you intend to use for hiring. This achieves several things at once:

  • It tells you the actual range of editorial competence in your current team, which is often surprising — there is typically more variance than managers expect.
  • It gives you an internal baseline against which to calibrate hiring thresholds. If your current team averages the 70th percentile on the proofreading test, a hiring threshold of 65th percentile ensures new hires meet or approach the existing team standard.
  • It surfaces any training or development needs in the current team before you add new members to it.
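
Once the current team has taken the assessments, the baseline itself is simple arithmetic. A sketch, using hypothetical percentile scores for an eight-person team:

```python
import statistics

# Hypothetical percentile scores for the current team on one assessment.
team_percentiles = [42, 55, 61, 68, 70, 74, 81, 88]

baseline_median = statistics.median(team_percentiles)   # internal baseline
baseline_mean = statistics.mean(team_percentiles)
spread = max(team_percentiles) - min(team_percentiles)  # range of competence

print(f"median={baseline_median}, mean={baseline_mean:.1f}, spread={spread}")
```

The spread figure is the number that tends to surprise managers: even a small team often spans forty or more percentile points.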

Step 2: Define Your Minimum Hiring Threshold

Using your team baseline and the broader benchmark population data from your assessment platform, define a minimum threshold for each assessment type you use. Consider:

  • Role level: Junior roles can have lower thresholds than senior roles. A tiered standard — 50th percentile for editorial assistants, 65th for editors, 80th for senior editors — is more nuanced and useful than a single threshold applied to all levels.
  • Role type: A proofreading-heavy role should weight proofreading scores more heavily. A role involving specialist content should make the industry vocabulary test a hard threshold rather than an advisory one.
  • Your team baseline: Hiring consistently below your current team's average pulls that average down over time. The standard should ensure that most new hires sit at or above the existing team median.
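
A tiered standard of the kind described above can be sketched as a simple lookup. The role names and thresholds here mirror the illustrative figures in the text and are not prescriptions:

```python
# Hypothetical tiered thresholds per role level: a graduated standard
# rather than a single cut-off applied to every role.
TIERED_THRESHOLDS = {
    "editorial_assistant": 50,
    "editor": 65,
    "senior_editor": 80,
}

def passes(role: str, percentile: int) -> bool:
    """Check a candidate's percentile against the threshold for the role."""
    return percentile >= TIERED_THRESHOLDS[role]

print(passes("editor", 68))         # True
print(passes("senior_editor", 68))  # False
```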

Step 3: Document the Standard

Write the benchmarking standard down. It should be explicit enough that any member of the HR team or any hiring manager can apply it consistently without interpretation. A documented standard also provides a defensible basis for hiring decisions if those decisions are ever challenged — it demonstrates that the decision was based on objective, consistently applied criteria rather than subjective preference.

The document should specify: which assessments are used at which stages, what the pass threshold is for each assessment at each role level, how scores are weighted when multiple assessments are used, and what the process is for borderline cases.
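
Those four elements — assessments, thresholds, weights, and a borderline process — can be sketched as one self-contained decision rule. All weights, thresholds, and assessment names below are hypothetical, assumed for illustration:

```python
# A hypothetical documented standard: per-assessment weights plus an
# explicit borderline band, so any hiring manager applies the same rule.
WEIGHTS = {"proofreading": 0.5, "copy_editing": 0.3, "vocabulary": 0.2}
PASS_THRESHOLD = 65.0     # weighted percentile required to advance
BORDERLINE_MARGIN = 5.0   # scores within this band go to manual review

def decide(scores: dict[str, float]) -> str:
    """Combine assessment percentiles into a documented hiring decision."""
    weighted = sum(scores[name] * w for name, w in WEIGHTS.items())
    if weighted >= PASS_THRESHOLD:
        return "advance"
    if weighted >= PASS_THRESHOLD - BORDERLINE_MARGIN:
        return "review"   # the documented process for borderline cases
    return "reject"

print(decide({"proofreading": 70, "copy_editing": 60, "vocabulary": 65}))
```

Making the borderline band part of the written rule, rather than leaving it to discretion, is what keeps "borderline" from becoming a loophole.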

Step 4: Apply It Consistently and Review Annually

A benchmarking standard only delivers value if it is applied consistently. The most common failure mode is making exceptions — advancing candidates who fall below threshold because they interviewed well, or adjusting the threshold ad hoc to fill a vacancy quickly. Both moves undermine the standard and reintroduce the subjectivity the standard was designed to remove.

Review the standard annually. As your team grows and your benchmark data accumulates, you will have better data on which score thresholds actually predict performance in your specific context. The standard should evolve as your data matures.

The Long-Term Value

Organisations with established editorial benchmarking standards consistently report better hire quality, lower editorial error rates, faster onboarding, and less time spent managing underperformance. The standard is not a guarantee — it is a systematic improvement to a process that is otherwise governed by intuition. Over multiple hiring cycles, the compound effect of consistently better hiring decisions is significant.

EditingTests.com provides the assessment infrastructure to build and maintain an editorial benchmarking standard: seven validated test types, instant automated scoring, and percentile benchmarking against 130,000+ candidates. The data to build your standard is available from the first test you run.