What Is Track-Changes Editing?

Track-changes editing is the standard professional method for copy-editing documents in a collaborative workflow. Rather than editing text directly and overwriting the original, the editor makes changes that are visually marked up in a different colour — insertions underlined, deletions struck through — so that the author or senior editor can review each change, accept it, reject it, or query it before the document is finalised.

The track-changes function was pioneered in word processing software and remains the dominant workflow in book publishing, academic publishing, legal document production, corporate communications, and most other professional editorial contexts. An editor who cannot work fluently with track changes is functionally limited in almost every professional editorial environment.

What a Track-Changes Editing Test Measures

A track-changes editing test assesses a candidate in a simulated version of this real editorial workflow. The candidate is presented with a document containing deliberate errors — grammatical, stylistic, factual inconsistencies, and structural problems — and asked to edit it using track-changes markup within a set time limit.

The test measures several dimensions simultaneously:

  • Error detection rate: How many of the deliberate errors does the candidate identify and correct?
  • Precision: Does the candidate make unnecessary changes — altering things that did not need to be changed? Over-editing is as revealing as under-editing.
  • Editorial judgement: Does the candidate distinguish between genuine errors and stylistic choices that should be preserved? Does the candidate respect the author's voice while correcting what needs to be corrected?
  • Speed: Does the candidate complete the task within the time limit at an acceptable accuracy level?
  • Track-changes fluency: Does the candidate use the markup correctly, or do their changes suggest unfamiliarity with the tool?
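The first two dimensions above can be made concrete with a small scoring sketch. This is a hypothetical illustration, not EditingTests.com's actual scoring logic: it assumes each planted error and each candidate edit can be identified by a location key, and treats any edit outside the answer key as over-editing.

```python
# Hypothetical scoring sketch for a track-changes test.
# Assumes planted errors and candidate edits share location keys;
# none of these names come from EditingTests.com itself.

def score_edit_test(planted_errors: set[str], candidate_edits: set[str]) -> dict[str, float]:
    """Score detection rate and precision from an answer key and a candidate's edits."""
    caught = planted_errors & candidate_edits       # genuine errors the candidate fixed
    # detection rate: share of planted errors caught
    # precision: share of the candidate's edits that were actually needed
    return {
        "detection_rate": len(caught) / len(planted_errors),
        "precision": len(caught) / len(candidate_edits) if candidate_edits else 1.0,
    }

# Example: 10 planted errors; the candidate fixes 8 and makes 2 needless changes.
planted = {f"err{i}" for i in range(10)}
edits = {f"err{i}" for i in range(8)} | {"needless1", "needless2"}
scores = score_edit_test(planted, edits)
# detection_rate = 0.8, precision = 0.8
```

Note that the two numbers move independently: a cautious candidate who makes few edits can have high precision with a low detection rate, while an aggressive over-editor shows the reverse — which is exactly why the test reports them separately.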

Why Track-Changes Testing Has Higher Ecological Validity

Ecological validity refers to how closely a test resembles the real conditions of the job whose performance it is meant to predict. A grammar quiz in a multiple-choice format has low ecological validity for editorial roles — it tests knowledge of rules in isolation, not the ability to apply them in context.

A track-changes editing test has high ecological validity because it replicates the actual task. The candidate works in an environment that looks and behaves like the tool they will use on the job. The errors they are asked to catch are embedded in realistic prose rather than obviously constructed test sentences. The time pressure reflects the deadline conditions of professional editorial work.

High ecological validity means the test is more predictive of actual job performance. Candidates who perform well on a track-changes editing test are more likely to perform well in the role than candidates who perform well only on grammar quizzes.

How EditingTests.com Delivers Track-Changes Testing

EditingTests.com built a browser-based track-changes editing environment specifically for its copy-editing assessments. The interface simulates the track-changes functionality of a professional word processor — insertions, deletions, and comments — within the browser, with no software installation required on the candidate side.

The Full Editing Assessment (FEA) uses this environment for a 45-minute comprehensive test. The Editing Test (ET) applies the same environment to a shorter five-question format. Both tests are available as no-login demos, so HR teams can experience the candidate interface before deploying tests to real candidates.

No other editorial assessment platform at scale offers a simulated track-changes environment. It is a genuinely unique capability that reflects 28 years of focus on editorial assessment specifically.

Using Track-Changes Test Results

The most useful output from a track-changes editing test is not just the overall score, but the pattern of what was caught and what was missed. Candidates who catch grammar and punctuation errors reliably but miss structural problems have a different profile from candidates who spot structural issues but let surface errors through. These patterns inform both hiring decisions and onboarding priorities.

When benchmarked against a large population — EditingTests.com has results from over 130,000 candidates — even granular patterns become interpretable. A candidate who catches 90% of grammar errors but only 60% of stylistic inconsistencies is clearly strong on rule-following and weaker on editorial judgement: a profile that suits some roles better than others.
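The kind of per-category profile described above can be sketched as follows. The category names and figures here are illustrative only, chosen to mirror the 90%/60% example in the text; this is an assumed data shape, not the platform's real reporting format.

```python
# Hypothetical per-category catch-rate profile for a track-changes test.
# Category labels and error ids are illustrative, not EditingTests.com's.
from collections import Counter

def category_profile(planted: list[tuple[str, str]], caught: set[str]) -> dict[str, float]:
    """Catch rate per error category.

    `planted` pairs each error id with its category; `caught` holds the
    ids of errors the candidate actually corrected.
    """
    totals: Counter[str] = Counter()
    hits: Counter[str] = Counter()
    for err_id, category in planted:
        totals[category] += 1
        if err_id in caught:
            hits[category] += 1
    return {cat: hits[cat] / totals[cat] for cat in totals}

# Example: strong on grammar, weaker on stylistic consistency.
planted = [
    ("g1", "grammar"), ("g2", "grammar"),
    ("s1", "style"), ("s2", "style"),
]
profile = category_profile(planted, caught={"g1", "g2", "s1"})
# {"grammar": 1.0, "style": 0.5}
```

Against a large benchmark population, a profile like this is what lets a reviewer say "strong on rule-following, weaker on editorial judgement" rather than just reporting a single overall score.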