Review Center Guide
September 3, 2024
For the most recent version of this document, visit our documentation website.
Table of Contents
1 Review Center 5
1.1 Review Center overview 5
1.2 Review Center workflow 5
1.3 Understanding the integrative learning classifier 6
1.3.1 Language support in Review Center 6
1.4 Using Review Center versus batching 6
2 Creating a Review Center queue 8
2.1 Installing Review Center 8
2.2 Choosing a queue type 8
2.2.1 Saved search queues 8
2.2.2 Prioritized review queues 8
2.3 How document assignment works 9
2.3.1 Keeping document families together 10
2.4 Setting up the reviewer group 10
2.5 Creating required queue fields 10
2.6 Creating a queue template 11
2.7 Creating a new queue from a template 13
3 Monitoring a Review Center queue 15
3.1 Review Center dashboard 15
3.1.1 Queue tab strip 15
3.1.2 Queue Summary section 16
3.1.3 Review Progress section 20
3.2 Charts and tables 22
3.2.1 General charts and tables 22
3.2.2 Prioritized review charts 24
3.2.3 Reviewed Documents table 24
3.3 Deleting a queue 25
3.4 Fixing a misconfigured queue 25
3.5 Understanding document ranks 25
3.6 Tracking reviewer decisions 26
3.6.1 Using the Documents tab 26
3.6.2 Using the Field Tree 27
3.6.3 Using the Track Document Field Edits by Reviewer application 27
3.7 Moving Review Center templates and queues 27
4 Reviewing documents using Review Center 28
4.1 Reviewing documents in the queue 28
4.2 Finding previously viewed documents 28
4.3 Queue card statistics 29
4.4 Viewing the dashboard 30
4.5 Best practices for Review Center review 30
4.5.1 Coding according to the "four corners" rule 30
4.5.2 Factors that affect Review Center's predictions 31
5 Review validation 32
5.1 Key definitions 32
5.2 Determining when to validate a Prioritized Review queue 32
5.3 Starting a validation queue 32
5.3.1 Choosing the validation settings 33
5.3.2 Inherited settings 34
5.4 Coding in a validation queue 34
5.5 Monitoring a validation queue 35
5.5.1 Editing a validation queue 35
5.5.2 Releasing unreviewed documents 35
5.5.3 Tracking sampled documents 36
5.6 Accepting or rejecting validation results 36
5.6.1 Manually rejecting validation results 37
5.7 Reviewing validation results 37
5.7.1 Recalculating validation results 38
5.7.2 Viewing results for previous validation queues 38
5.8 How adding or changing documents affects validation 39
5.8.1 Scenarios that require recalculation 39
5.8.2 Scenarios that require a new validation queue 39
6 Review validation statistics 40
6.1 Defining elusion, recall, richness, and precision 40
6.2 Groups used to calculate validation metrics 40
6.3 How setting a cutoff affects validation statistics 41
6.3.1 High versus low cutoff 42
6.4 Validation metric calculations 42
6.4.1 Elusion rate 42
6.4.2 Recall 43
6.4.3 Richness 43
6.4.4 Precision 44
6.5 How the validation queue works 44
6.6 How validation handles skipped and neutral documents 44
7 Review Center security permissions 46
7.1 Creating a Review Center template or queue 46
7.2 Editing and controlling Review Center queues 46
7.3 Deleting a Review Center template or queue 46
7.4 Viewing the Review Center dashboard 47
7.5 Tracking reviewer decisions from the Documents tab 47
7.6 Reviewer permissions 47
1 Review Center
Review Center is a review management tool that helps you build custom queues, use AI to prioritize
relevant documents, and leverage a rich reporting dashboard to understand the state of your data and track
productivity. With streamlined administrative features and flexible AI algorithms, you can tailor the review
process to your needs.
Some of Review Center's key features include:
- Templatization—set up best-practice structures ahead of time for easy re-use.
- Customizable queues—replace batch administration with queues based on saved searches.
- Powerful AI classifier—Review Center uses a new integrative learning classifier that provides even greater efficiency than previous AI classifiers.
- Clear progress reporting—a rich dashboard features timeline-based visualizations that show relevance rates and progress.
1.1 Review Center overview
Review Center enables administrators to build review queues from any saved search and choose the order
in which the documents will be served up to reviewers. These queues can be ordered using either AI-powered relevance predictions or custom sort conditions chosen by the admin. After the admin starts the
queue, reviewers check out documents from a simple interface. The admin manages all queues, reporting,
and progress charts from a central dashboard.
For a guided video showing how to use Review Center, watch the Review Center: Getting Started on-demand training on the RelativityOne documentation site.
1.2 Review Center workflow
The basic steps to set up Review Center are:
1. Install the application.
2. Create a saved search containing the documents for review.
3. Create any necessary fields.
4. Create or customize a review queue template.
After setup, create and manage the Review Center queue:
1. Create a new queue from the template.
2. Assign the reviewer group.
3. Start the queue.
4. Review documents.
5. Monitor the queue using the Review Center dashboard.
After the admin enables the queue, reviewers log into a simple screen showing the queues assigned to
them. For more detail on the reviewer's experience, see Reviewing documents using Review Center on
page28.
For detailed instructions on setting up Review Center, see Creating a Review Center queue on page8.
1.3 Understanding the integrative learning classifier
The integrative learning classifier used by Review Center's AI-powered queues is a scalable, secure, and
efficient classification service that can support a variety of use cases and documents. It makes connections
among concepts and decisions to serve up relevant documents to reviewers as early as possible.
You do not need to create an Analytics index for Review Center queues. Instead, when you prepare or start
an AI-driven queue, the classifier automatically runs in the background to manage documents.
1.3.1 Language support in Review Center
Because the integrative learning classifier is language-agnostic, you can use Review Center for documents
written in any language. However, the methods Review Center uses to tokenize text, or break it up into
individual words, are primarily based on English, Chinese, and Japanese. If Review Center detects Chinese
or Japanese, it uses the tokenization method for those languages. For any other text, it uses the English-
based tokenization method that relies on spacing and punctuation. This means that languages with similar
spacing and punctuation to English typically have good results.
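To make the tokenization distinction concrete, here is a minimal, hypothetical Python sketch; it is not Review Center's actual implementation, only an approximation of the idea that English-style tokenization splits on spacing and punctuation, while Chinese and Japanese text is broken into shorter character-based units.

```python
import re

def tokenize_english_style(text):
    # Split on whitespace and punctuation, the way languages with
    # English-like spacing can be broken into individual words.
    return re.findall(r"[^\s\W_]+", text)

def tokenize_cjk_style(text):
    # Rough stand-in for a character-based approach: treat each
    # character as its own token because there are no spaces.
    return [ch for ch in text if not ch.isspace()]

print(tokenize_english_style("Please review the attached contract."))
# ['Please', 'review', 'the', 'attached', 'contract']
print(tokenize_cjk_style("契約書を確認してください"))
# ['契', '約', '書', 'を', '確', '認', 'し', 'て', 'く', 'だ', 'さ', 'い']
```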
1.4 Using Review Center versus batching
Review Center offers many benefits over batched reviews, including:
- Built-in administrative reporting—track progress and manage reviews in a single spot.
- Time-saving templates—shorten queue creation to a few clicks by creating templates for common workflows.
- Streamlined assignment—permissions are simplified, and documents are checked out automatically as each reviewer advances.
- Simplified entry screen for reviewers—reviewers enter queues with one click and have fewer distractions than on the standard Documents tab.
- Easy to change—you can update queues at any time, whether to add documents or to try out AI-powered review. None of these changes interrupt reviewer access.
If your organization uses custom reporting that requires a specific workflow, you may prefer to continue
using batching for now. For other scenarios, though, users often find significant benefit in switching from
batching to Review Center.
For more information on the traditional batching workflow, see Batches in the Admin guide.
2 Creating a Review Center queue
Review Center queues are flexible, customizable, and can be used for any stage of review. You can also
create templates for common workflows, which shortens the setup time for a new queue to only a few clicks.
These queue templates can be saved as part of workspace templates, making it easy to re-use them for
other cases.
Even after creating a queue, you can still edit the settings or add new documents without interrupting
reviewers.
2.1 Installing Review Center
Review Center is available as a secured application from the Application Library.
To install it:
1. Navigate to the Relativity Applications tab in your workspace.
2. Select Install from application library.
3. Select the Review Center application.
4. Click Install.
After installation completes, the following tabs will appear in your workspace:
- Review Library—create and manage queue templates.
- Review Center—create and manage queues and view the dashboard.
- Review Queues—review documents using queues.
For more information on installing applications, see Relativity Applications in the Admin guide.
2.2 Choosing a queue type
Review Center offers two types of review queues. Based on the needs of your project, you can set up review
queues that either focus on custom-sorted sets of documents, or focus on documents that the AI classifier
predicts as relevant.
2.2.1 Saved search queues
Saved search queues tie your queue to a saved search. You can use saved searches to group documents
based on nearly any criteria, including documents from any existing Active Learning project or other Review
Center queue. With this queue type, documents are served up to reviewers based on the sort method used
for the saved search. If the saved search does not have a sort method selected, documents will be served
up based on Artifact ID.
2.2.2 Prioritized review queues
Prioritized review queues are also based on a saved search, but instead of serving up documents based on
their sort order, they use the AI classifier to serve up documents that it predicts as relevant. These relevance
rankings are stored in the Rank Output field, and the ranks automatically update every time the queue
refreshes.
The AI classifier uses the extracted text of documents to make its predictions. Even if other fields are returned in the saved search, they do not affect the results.
If you choose a prioritized review queue, we recommend coding at least two non-empty documents in your
data source before preparing or starting the queue: one with the positive choice on your review field, and
one with the negative choice. This gives the AI classifier the information it needs to start making its
predictions. The more documents are coded, the more accurate the classifier’s predictions will be.
If you have not completed any coding, you can still start the prioritized review queue. The classifier model won't build until at least 50 documents have been coded, with at least one coded positive and one coded negative. After you reach 50 coded documents, your ranks will update upon the next auto-refresh or manual refresh. If you need the model to build sooner, you can manually trigger a queue refresh at any point after at least one document has been coded positive and one has been coded negative.
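As a rough illustration of these thresholds (hypothetical Python, not Relativity code; it only restates the rule described above), the first model build can be thought of as gated on the coded-document counts:

```python
def can_build_first_model(total_coded, positives, negatives, manual_refresh=False):
    """Approximate the conditions described above for when the classifier
    model first builds in a prioritized review queue."""
    if manual_refresh:
        # A manual refresh can build the model as soon as there is at least
        # one positive and one negative coding decision.
        return positives >= 1 and negatives >= 1
    # Otherwise the model waits for 50 coded documents, including at least
    # one positive and one negative.
    return total_coded >= 50 and positives >= 1 and negatives >= 1

print(can_build_first_model(total_coded=30, positives=10, negatives=20))                       # False
print(can_build_first_model(total_coded=30, positives=10, negatives=20, manual_refresh=True))  # True
```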
2.2.2.1 Including random documents in the queue
When you set up a prioritized review queue, you have the option to serve up randomly chosen documents
alongside documents that are predicted relevant. This gives the AI classifier a broader variety of coding
decisions to learn from, which improves its predictions in the early stages of a review. Having reviewers
code a selection of random documents helps the classifier identify a wider range of relevant topics and
prevents it from focusing on a limited subject area.
Under the queue setting Include Random Items, you can choose to include random documents as up to
20% of the total documents served to reviewers. You can change this setting at any time. We recommend including a high percentage of random items during the early stages of review.
2.2.2.2 Using Coverage Mode
When Coverage Mode is turned on for a prioritized review queue, the queue switches away from serving up
the highest-ranking documents. Instead, it serves up documents that are better for training the model.
These are documents with scores near 50, which usually have different content and topics from documents
that the model has previously seen. Labeling these helps the model learn from a wider variety of documents
and become more confident quickly.
When in Coverage Mode, the AI classifier sorts all documents by their scores’ distance from 50, but limits
and spreads out the number of exactly 50-ranked documents. This intermixing diversifies the group of
documents and lowers the chance of duplicates. The classifier then serves up these sorted documents to
reviewers until the next refresh. After each refresh in Coverage Mode, it re-sorts the documents. Coverage
Mode also overrides the Include Random Items setting.
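The following hypothetical Python sketch restates the Coverage Mode ordering described above—documents served in order of how close their rank is to 50—purely as an illustration; the real service also limits and intermixes documents ranked exactly 50, which is not modeled here.

```python
def coverage_mode_order(documents):
    """Serve documents whose ranks are closest to 50 first.

    `documents` is assumed to be a list of (doc_id, rank) pairs, where rank
    is the 0-100 score stored in the Rank Output field.
    """
    return sorted(documents, key=lambda doc: abs(doc[1] - 50))

docs = [("DOC-001", 92.3), ("DOC-002", 51.0), ("DOC-003", 12.7), ("DOC-004", 49.4)]
print(coverage_mode_order(docs))
# [('DOC-004', 49.4), ('DOC-002', 51.0), ('DOC-003', 12.7), ('DOC-001', 92.3)]
```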
You can turn the Coverage Mode setting on or off at any time during a review. For instructions, see Turning Coverage Mode on and off on page 19.
Note: Whenever you turn Coverage Mode on or off, manually refresh the queue. This updates the document sorting for reviewers. For more information, see Turning Coverage Mode on and off on page 19.
2.3 How document assignment works
By default, five documents are checked out to each active reviewer at a time. As the reviewer saves their
progress on those documents, more are checked out as needed.
For example, documents 1 through 5 are assigned to the first reviewer who starts review. If a second
reviewer logs in immediately after, documents 6 through 10 are assigned to the second reviewer. As the first
reviewer completes their work, documents 11 through 15 are assigned to them, and so on.
If a relational field is set for the queue, then the entire relational group for a document will also be checked
out to that document's reviewer. For more information, see Keeping document families together below.
2.3.1 Keeping document families together
All Review Center queues have the option of setting a relational field. If this is set, the whole relational group
of documents present in the queue will be checked out to the same reviewer. This keeps families, email
threads, or other relational groupings together in one queue.
When a relational field is set, it takes priority over the sort method and document rank. For example, if you
sort a saved search queue by size and set the relational field to Family Group, then the entire family of the
largest document will be checked out to the first reviewer, even if it contains small documents. Likewise, if
you set the relational field to Family Group for a prioritized review queue, the entire family of the highest
ranked document will be checked out to the first reviewer, even if it contains low-ranked documents. Within
that family, documents will be served up based on the sort specified in the relational view.
If you plan to code families in the related items pane as part of the reviewer workflow, we recommend that
you do not include families in your queue. Otherwise, as you code documents in the related items pane, the
coded family documents will still be served to reviewers.
Note: If you set a relational field on a template or queue, set the same field in the Related Items drop-down menu of the saved search Conditions tab. Only relational group members returned by the saved
search will be included in the queue. For more information, see Creating or editing a saved search in the
Searching guide.
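As a purely illustrative sketch of the grouping behavior described above (hypothetical Python, not Relativity's implementation), checkout with a relational field can be pictured as grouping documents by the relational value and ordering the groups by their best-ranked member, so an entire family checks out together; the real within-family order follows the relational view's sort, which is not modeled here.

```python
from collections import defaultdict

def checkout_order_with_relational_field(documents):
    """Group documents by a relational value (e.g., family group) and order
    the groups by the highest rank in each group.

    `documents` is assumed to be a list of dicts with 'id', 'rank', and
    'family' keys.
    """
    families = defaultdict(list)
    for doc in documents:
        families[doc["family"]].append(doc)
    # Order families by their best-ranked member, highest first.
    ordered_families = sorted(families.values(),
                              key=lambda fam: max(d["rank"] for d in fam),
                              reverse=True)
    return [doc["id"] for fam in ordered_families for doc in fam]

docs = [
    {"id": "DOC-010", "rank": 95.0, "family": "FAM-A"},
    {"id": "DOC-011", "rank": 12.0, "family": "FAM-A"},
    {"id": "DOC-020", "rank": 80.0, "family": "FAM-B"},
]
print(checkout_order_with_relational_field(docs))
# ['DOC-010', 'DOC-011', 'DOC-020']
```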
2.4 Setting up the reviewer group
To give reviewers access to a queue, set up a reviewer group. You can either create a brand new group, or
modify the permissions for an existing user group. You can assign multiple user groups to the same queue.
To set up a reviewer group:
1. Decide which user group or groups should contain the reviewers for the queue. For information on creating and editing groups, see Groups in the Admin guide.
2. Add each group to the workspace.
3. Assign each reviewer group the following permissions:
   - Object Security: Document - View, Edit; Review Center Queue - View
   - Tab Visibility: Review Queues
4. Add the reviewers to each group.
For more information about permissions, see Review Center security permissions on page 46.
2.5 Creating required queue fields
Before creating a prioritized review queue, create the following fields:
- Review field—a single-choice field that serves as the coding field for your queue. This field should have at least one positive choice and one negative choice. Any other choices will be considered neutral.
- Rank Output—a decimal field that will hold the document ranks. Each prioritized review queue needs a separate Rank Output field on the Document object.
  Note: If a field has two colons (::) in the name, this is called a reflected field. Reflected fields typically link two objects, and they cannot be used as the Rank Output field.
If you are creating a saved search queue, you do not need a Rank Output field, and the review field is
optional.
For more information about creating new fields, see Fields in the Admin guide.
2.6 Creating a queue template
Templates are unassigned queues that can be used as the basis for building other queues quickly. The Is
Template field should always be toggled to On for templates.
The Review Center application comes with several premade queue templates to choose from. However, we
recommend tailoring them or creating your own to best suit your needs. These can also be saved as part of
your workspace template.
Most fields that are required for queues, such as the Review Field, are not required for a template. This enables you to create generalized templates ahead of time and leave those decisions to the queue creator.
To create a queue template:
1. Navigate to the Review Library tab.
2. Click the New Review Center Queue button.
3. Enter the following information:
   1. Name—the queue name reviewers will see.
   2. Is Template—toggle this to On.
      Note: This field exists for all queues. If you toggle the Is Template setting to On for a regular queue, it disappears from the dashboard and becomes usable as a template for other queues. Toggling it off again returns the queue to the dashboard. The queue keeps all of its statistics and coding decisions, but the queue state resets to Not Started.
   3. Template Description—enter notes about the template such as its intended use, comments about field settings, etc.
   4. Queue Label—create and choose organizational labels that will apply to queues created from this template. Some label ideas include First Level Review, Second Level Review, or Quality Control. For more information, see Filtering the queue tab strip on page 16.
   5. Reviewer Groups—this is not recommended for templates.
   6. Queue Type—choose either Saved Search or Prioritized Review.
   7. Data Source—select the saved search that contains the documents for your queue.
      Note: If you are using a prioritized review queue:
      - We recommend a maximum of 5 million documents in the data source.
      - The classifier ignores documents with an extracted text field larger than 600 KB. We recommend leaving these documents out of the data source.
   8. Rank Output (Prioritized Review only)—select the decimal field you created to hold the document rank scores.
   9. Review Field—select the single choice field you created for review. This field must have two or more choices.
      1. Positive Choice—select the choice that represents the positive or responsive designation.
      2. Negative Choice—select the choice that represents the negative or non-responsive designation.
      Note: Any remaining choices are considered neutral.
   10. Positive Cutoff—on a scale of 0 to 100, enter the document rank that will be the dividing line between positive and negative documents. All documents ranked at or above this number will be predicted positive, and all documents ranked below it will be predicted negative. By default, the cutoff is set at 50.
   11. Relational Field—select a relational field for grouping documents in the queue. This ensures reviewers receive related documents together, such as members of the same document family.
      Note: If you set a relational field on a template or queue, set the same field in the Related Items drop-down of the saved search Conditions tab. Only relational group members returned by the saved search will be included in the queue. For more information, see Creating or editing a saved search in the Searching guide.
   12. Allow Coded in Review (Saved Search only)—controls whether documents coded outside of the queue will still be served up in the queue.
      - Toggle this On to allow outside-coded documents to be served up.
      - Toggle this Off to exclude outside-coded documents from being served up. These are found and removed during queue refreshes and every time a reviewer checks out a document.
      Note: Prioritized review queues use outside-coded documents to train their predictions, but they only show them to reviewers if the Relational Field is set. For example, if the relational field is set to Family Group and some members of a document family are already coded, those will be served up to reviewers along with their family.
   13. Queue Display Options—select which statistics you want reviewers to see on the queue card in the Review Queues tab.
   14. Include Random Items (Prioritized Review only)—select what percentage of random documents to serve to reviewers. For more information, see Including random documents in the queue on page 9.
   15. Coverage Mode (Prioritized Review only)—controls whether the queue serves up the highest-ranking documents first, or documents with middle ranks. For more information, see Using Coverage Mode on page 9.
      - Toggle this On to serve up mid-rank documents first. This is useful for training the model as quickly as possible. You can turn this mode off at any time.
      - Toggle this Off to serve up highest-rank documents first. This is the default setting for finding relevant documents quickly.
   16. Queue Refresh—controls whether the queue automatically refreshes after coding activity in any queue. This refresh includes re-running the saved search and checking for outside-coded documents. For prioritized review queues, this also re-trains the classifier with the latest coding and re-ranks documents in order of predicted relevance. For more information, see Auto-refreshing the queue on page 17.
      - Toggle this On to refresh the queue automatically.
      - Toggle this Off to prevent automatic refreshes. You will still be able to manually trigger refreshes using the dashboard.
   17. Reviewer Document View—select a view to control which fields reviewers see in the Documents panel of the Viewer. If you do not choose a view, this defaults to the lowest ordered view the reviewer has permission to access.
      - This panel shows reviewers a list of documents they previously reviewed in their queue. For more information, see Finding previously viewed documents on page 28.
      - If there are any conditions applied to the view, those conditions will also limit which documents appear in the panel.
   18. Reviewer Layout—select the coding layout that you want reviewers to see by default when they enter the queue. If you do not choose a layout, this defaults to the lowest ordered layout the reviewer has permission to access.
   19. Email Notification Recipients—enter email addresses to receive notifications about the queue status. These emails tell users when a manually triggered queue preparation completes, a queue is empty, or a queue encounters an error while populating. To enter multiple email addresses, separate them with a semicolon and no space (for example, jsmith@example.com;mlee@example.com).
4. Click Save.
The template now appears in the Review Library list.
2.7 Creating a new queue from a template
To create a new queue using a queue template, use the Add Queue button on the Review Center
dashboard.
To create a new queue from a template using the dashboard:
1. Navigate to the Review Center tab.
2. Click the Add Queue button.
3. Select the template you want to use, then click Next.
4. Under Reviewer Groups, choose one or more reviewer groups.
5. In the other fields, check the default values filled in by the template. Change any values that should be different for this queue.
6. Click Save.
The new queue appears as a tab in the banner at the top of the dashboard.
All queue settings can also be edited after creating the queue.
Note: After a queue has been created from a template, the two of them are no longer connected. You can
edit the template without affecting the queue.
For information on starting, managing, and deleting queues, see Monitoring a Review Center queue on the
next page.
3 Monitoring a Review Center queue
The Review Center dashboard provides a centralized location to track, manage, and edit all Review Center
queues. In addition, you can track reviewer coding decisions through a variety of methods.
3.1 Review Center dashboard
After creating a queue, navigate to the Review Center tab. This tab contains a dashboard showing all
queues, their statistics, and controls related to queue progress.
The Review Center dashboard contains the following sections.
3.1.1 Queue tab strip
The queue tab strip contains a tab for each queue that has been created. To make the dashboard show
details for a queue, click on its name in the tab strip.
Below the queue name, each queue shows its status. The possible statuses are:
- Not Started—the queue has not been prepared or started.
- Preparing—the queue is refreshing the saved search for the first time. If it is a prioritized review queue, this also trains the classifier.
- Prepared—the queue has finished preparing for the first time, but it has not been started. It may or may not have a reviewer group assigned.
- Starting—the admin has started the queue, and the queue is becoming active for reviewers. During this phase, the queue also refreshes the saved search and retrains the classifier if needed.
- Active—the queue has started, and reviewers can start reviewing.
- Paused—the admin has paused the queue.
- Canceling—the admin has canceled the process of starting or refreshing the queue.
- Complete—the admin has marked the queue as complete, and it is no longer available to reviewers.
- Errored—an error occurred. When this happens, the error details will appear in a banner at the top of the dashboard.
- Ready for Validation—a linked validation queue has been created, but not started.
- Validation Pending—all documents in the validation queue have been reviewed, and it's ready for you to accept or reject the results.
In addition, if any of the statuses have the word "Validation" added to them (such as "Validation Paused"), this means the status applies to a linked validation queue. For more information, see Review validation on page 32.
At the right of the strip, the Add Queue button lets you quickly create new queues. For instructions, see
Creating a new queue from a template on page13.
3.1.1.1 Filtering the queue tab strip
If you have a large number of queues, you can filter them according to their assigned labels in the Queue
Label field.
To filter the queue tab strip:
1. Click into the search bar above the queue tab strip. A drop-down list of labels appears.
2. Select the labels you want to filter by. You can also type in the search bar to narrow the list, then press Enter to select or deselect a label.
3. Close the list using the arrow at the right end of the bar.
The queue tab strip now only shows queues whose labels are listed in the search bar. If several labels are listed, queues that match any one of them will appear.
The queue tab filters only apply to the tab strip. They do not affect any of the charts or statistics on the rest of
the page.
3.1.2 Queue Summary section
The Queue Summary section shows the reviewer group, saved search, coding fields, and controls for
actions such as pausing or refreshing the queue. The "<X> Active" statistic under the reviewer group shows
how many reviewers currently have documents checked out to them. Additionally, clicking on the saved
search name or the coding field name takes you to that saved search or field.
To view all settings for the current queue, click on the arrow symbol on the left side. This expands the Queue
Summary panel and shows the detailed setting list.
3.1.2.1 Preparing or refreshing the queue
In order for a queue to function, Review Center has to run the saved search, check for any outside-coded
documents, and perform other actions. If it is a prioritized review queue, it also needs to periodically retrain
the classifier. This collection of actions is referred to as refreshing the queue.
Depending on your settings, the refresh button may say several things:
- Prepare Only—appears when the queue has not been started. This runs the saved search and trains the classifier for the first time, but it does not start the queue. Alternately, you can click Prepare and Start to perform both actions together.
  Note: Preparing a new queue in advance makes the Start Review action take only a few seconds. This can be helpful if your data source is very large or if you have a complicated saved search. For example, you might prepare a new queue overnight, then start it in the morning.
- Refresh Queue—appears during a review that does not use auto-refresh. Clicking this refreshes the queue.
- Auto Refresh—appears during a review that uses auto-refresh. Clicking this starts an immediate refresh of the queue. For more information, see Auto-refreshing the queue below.
After you click Confirm, a Cancel option appears. For prioritized review queues, you may also see a
confirmation modal with the option to refresh the cache. For more information, see Caching extracted text in
prioritized review queues on the next page.
If you edit a queue's settings when the queue is partway through refreshing, the refresh will automatically
cancel. Any edits that affect the queue refresh will take effect during the next refresh.
Auto-refreshing the queue
If Queue Refresh is set toOnin the queue settings, the queue will automatically refresh at specific intervals.
The interval length depends on the queue type and the coding activity.
Saved search queues refresh every 15 minutes if there is coding activity within the queue.
Prioritized review queues refresh when 20% of documents in the queue have had positive or negative
coding changes since the last queue refresh. The queue will also auto-refresh if there is coding activity and
it has been 8 hours since the last refresh, regardless of whether 20% of documents have been coded.
These refreshes only happen after the queue has been started, and you can change this setting at any time.
For example, if 1000 documents were coded positive or negative at the last refresh, coding another 200
would trigger the next auto-refresh. If another 10 were coded, the queue would also auto-refresh after 8
hours. However, if the queue were to sit completely inactive for 8 hours, with no reviewer coding, the queue
would not auto-refresh.
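Restated as a hypothetical Python check (an illustration only, following the worked example above rather than product code), the auto-refresh decision for a prioritized review queue looks roughly like this:

```python
def should_auto_refresh(coded_at_last_refresh, newly_coded, hours_since_refresh):
    """Approximate auto-refresh trigger for a prioritized review queue.

    coded_at_last_refresh - documents coded positive/negative as of the last refresh
    newly_coded           - documents coded positive/negative since that refresh
    hours_since_refresh   - hours elapsed since the last refresh
    """
    if newly_coded == 0:
        # No coding activity at all: the queue does not auto-refresh.
        return False
    if newly_coded >= 0.20 * coded_at_last_refresh:
        # Coding has grown by 20% of the previously coded count.
        return True
    # Any coding activity plus 8 hours since the last refresh also triggers it.
    return hours_since_refresh >= 8

print(should_auto_refresh(1000, 200, 2))   # True  - hit the 20% threshold
print(should_auto_refresh(1000, 10, 9))    # True  - some coding and more than 8 hours
print(should_auto_refresh(1000, 0, 12))    # False - no coding activity
```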
For prioritized review queues, the Auto Refresh button shows an estimate of how many documents must be coded to trigger the next auto-refresh. When that many documents have been coded positive or negative, the next auto-refresh will start within about five minutes.
If you need to trigger an immediate refresh, click on the words Auto Refresh to trigger an additional manual
refresh. For example, if new documents have been added to the saved search, you can click this to add
them to the queue quickly instead of waiting until the next auto-refresh.
While the queue is auto-refreshing, a Cancel option appears. If you cancel the current auto-refresh, the
queue will still try to auto-refresh again later.
Note: Canceling the queue preparation can take some time. If you need to remove reviewer access
immediately while canceling, edit the queue and remove the reviewer group.
Reviewer access during refreshes
Reviewers can still review documents in an active queue while it refreshes. Clicking the refresh button,
running an auto-refresh, or canceling a refresh makes no difference to reviewer access.
Similarly, if the queue was paused before the refresh, it will stay unavailable. Active queues stay active, and
paused queues stay paused.
Auto-refreshing in Coverage Mode
If your prioritized review queue has automatic refreshes enabled and Coverage Mode turned on, the
refreshes trigger at a different time. The queue will automatically refresh each time 100 documents are
coded, or when 5% of the documents have been coded, whichever occurs first. The "Next refresh"
document count reflects this change whenever you turn on Coverage Mode.
Note: Whenever you turn Coverage Mode on or off, manually refresh the queue. This updates the
document sorting for reviewers. For more information, see Turning Coverage Mode on and off on the next
page.
Caching extracted text in prioritized review queues
The first time you prepare a prioritized review queue, Review Center caches the extracted text of the
documents in the queue and stores the documents' data at the workspace level. This significantly speeds up
later refreshes, because Review Center references the cache instead of re-analyzing the text. This also
speeds up the creation of any other queues in the workspace with the same documents.
When you click to manually refresh the queue, a modal appears with an option to refresh the cache:
- If the extracted text of documents in the queue's data source has not changed, leave the box unchecked. This makes the refresh process much faster.
  Note: You do not need to refresh the cache if you are simply adding or removing documents from the queue.
- If the extracted text of documents in the queue's data source has changed, check the box. This tells Review Center to re-cache the extracted text from all documents in the queue. Choosing to re-cache the text may add significant time to the queue refresh.
3.1.2.2 Starting the queue
The Start Review button makes the queue available for review. If the queue has never been prepared
before, it will say Prepare and Start. This also runs the saved search and trains the classifier for the first
time.
After the queue has finished starting, the symbol beside this option changes to a pause button. Clicking this
pauses the queue and stops reviewers from checking out more documents.
Before starting a queue, you must have a reviewer group assigned.
3.1.2.3 Editing queues and other actions
To edit the queue or perform other less-frequent actions, click on the three-dot menu on the right.
The menu options are:
- Edit—opens a modal to edit any of the queue settings.
  - For information on general edits, see Editing recommendations below.
  - For information on Coverage Mode, see Turning Coverage Mode on and off below.
- Release Documents—releases all documents that are checked out by reviewers. If a reviewer falls inactive and does not review the last few documents in a queue, this frees up those documents for reassignment.
  - To see the number of currently checked out documents, look at the main ribbon for the Queue Summary section.
  - If you release documents while a reviewer is actively reviewing, that person will be able to finish coding, but their documents may get checked out by another reviewer at the same time. To prevent this, ask any active reviewers to exit and re-enter the queue after you click the link.
- Set up Validation (prioritized review queue only)—opens the options to create a review validation queue. For more information, see Review validation on page 32.
- Mark as Complete—sets the queue's status to Complete and moves it to the far right of the queue tab strip. This also removes the queue from the Review Queues tab, and reviewers can no longer access it. After the queue has been marked Complete, this option changes to Re-enable. Clicking this sets the queue's status to Not Started and returns it to the Review Queues tab.
Editing recommendations
Many edits are minor, and you can make them without pausing the queue. However, if you make a major
change such as changing the data source, we recommend:
1. Pause the queue before editing.
2. Release any checked out documents.
3. Edit the queue.
4. Refresh the queue.
5. Restart the queue.
For descriptions of the editable fields, see Creating a Review Center queue on page 8.
Turning Coverage Mode on and off
When you turn Coverage Mode on or off for a prioritized review queue, this changes the order in which the
documents will be served up. Before the new order will take effect, though, you must refresh the queue.
To turn Coverage Mode on or off:
1. On the right side of the Queue Summary section, click the three-dot menu.
2. Select Edit.
3. Click the Coverage Mode toggle to enable or disable it.
4. Click Save.
5. Manually refresh the queue. For more information, see Preparing or refreshing the queue on page 16.
On the right of the Queue Summary section, the start or pause button reflects whether Coverage Mode is
turned on. If it is turned On, it will refer to the queue as "Coverage Review."
If Coverage Mode is turned Off, the button will refer to the queue as "Prioritized Review."
On the Queue History table, you can also see whether your queue was in Coverage Mode during each queue refresh. For more information, see Queue History on page 23.
3.1.3 Review Progress section
The Review Progress section shows statistics for the current queue's progress.
By default, the section shows a set of statistics that are calculated for all documents in the queue. By
clicking the triangle next to the section name, you can select another view.
3.1.3.1 Review Progress view
The default Review Progress view shows statistics for all documents in the queue's data source.
The Review Progress statistics are:
- Total Docs—the total number of documents currently in the queue's data source. To be counted, the queue must have been prepared or refreshed after the documents were added or removed. The "100%" in smaller print underneath it indicates that this is the total document set.
- Docs Coded—the number of documents in the data source that either have a value in the review field or have been skipped. This includes documents coded outside the queue. The smaller percentage underneath it shows the percentage of Docs Coded divided by Total Docs.
- <Positive Choice>—the number of documents coded with the positive choice on the review field. This includes documents coded outside the queue. The smaller percentage underneath it shows the percentage of <Positive Choice> divided by Docs Coded.
- <Negative Choice>—the number of documents coded with the negative choice on the review field. This includes documents coded outside the queue. The smaller percentage underneath it shows the percentage of <Negative Choice> divided by Docs Coded.
- Neutral—the number of documents coded with a neutral choice on the review field. This includes documents coded outside the queue. The smaller percentage underneath it shows the percentage of Neutral documents divided by all Docs Coded.
- Relevance Rate—the total percentage of documents coded positive. This is calculated by counting the number of documents coded positive, then dividing it by the total number of coded, non-skipped documents (see the sketch after this list). The bold percentage shows the relevance rate including documents coded either inside or outside of the queue, while the smaller percentage underneath it shows the relevance rate only for documents coded inside the queue.
- Uncoded—the number of documents in the data source with no value in the review field. This includes documents that were skipped or had their coding decision removed. The smaller percentage underneath it shows the percentage of Uncoded documents divided by Total Docs.
- Skipped—the number of documents that were skipped within the queue. The smaller percentage underneath it shows the percentage of Skipped documents divided by all Uncoded documents.
- Predicted <Positive Choice> (Prioritized Review only)—the number of documents in the data source with no review field value and a relevance rank greater than or equal to 50.00. The smaller percentage underneath it shows the percentage of Predicted <Positive Choice> documents divided by all Uncoded documents.
- Predicted <Negative Choice> (Prioritized Review only)—the number of documents in the data source with no review field value and a relevance rank less than 50.00. The smaller percentage underneath it shows the percentage of Predicted <Negative Choice> documents divided by all Uncoded documents.
Note: The Predicted <Positive Choice> and Predicted <Negative Choice> fields only show their predictions after 50 or more documents have been coded.
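The relevance rate calculation referenced above amounts to a simple ratio. The following hypothetical Python snippet only restates that definition, assuming neutral-coded documents count among the coded, non-skipped documents:

```python
def relevance_rate(positive, negative, neutral):
    """Relevance rate = documents coded positive divided by all coded,
    non-skipped documents (positive + negative + neutral)."""
    coded = positive + negative + neutral
    if coded == 0:
        return 0.0
    return 100.0 * positive / coded

# Example: 300 positive, 650 negative, 50 neutral coded documents.
print(f"{relevance_rate(300, 650, 50):.1f}%")  # 30.0%
```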
3.1.3.2 Documents Coded Outside Queue view
If you select Documents Coded Outside Queue from the Review Progress drop-down, this shows an
alternate view. These statistics count documents that are part of the queue's saved search, but that were
coded through some means other than the selected Review Center queue.
The Documents Coded Outside Queue statistics are:
- Docs Coded—the number of documents in the data source that were coded outside of the queue. The smaller percentage underneath it shows the percentage of documents coded outside the queue divided by all documents coded.
- <Positive Choice>—the number of documents that were coded positive outside of the queue. The smaller percentage underneath it shows the percentage of documents coded positive outside the queue divided by all documents coded.
- <Negative Choice>—the number of documents that were coded negative outside of the queue. The smaller percentage underneath it shows the percentage of documents coded negative outside the queue divided by all documents coded.
- Neutral—the number of documents that were coded with a neutral choice outside of the queue. The smaller percentage underneath it shows the percentage of documents coded neutral outside the queue divided by all documents coded.
3.2 Charts and tables
The dashboard includes two visualization panels. Both panels have the same options for charts and tables
to show, which lets you choose which visualization to show on which panel, in any order.
To navigate the visualization panel:
- To select a different visualization, click the blue arrow next to the visualization's name. This opens a drop-down menu with all other visualizations.
- To switch from the chart view to the table view, click the Chart drop-down in the upper right corner and select Table. This shows a table with the same information as the selected chart.
- To zoom in or out on a chart, hover the cursor over it and scroll. All charts reset to their default zoom when you reload the page.
- To download the panel contents, click the download symbol on the upper right. Charts download as .png images, and tables download as .csv files.
Note: If any documents were coded by reviewers who are not part of this Relativity instance, those reviewers will be listed as Unknown User 1, Unknown User 2, and so on. This can happen if a reviewer was removed from the workspace or if the workspace has been archived and restored into a different instance.
3.2.1 General charts and tables
Some charts and tables are available for any type of queue. These include:
3.2.1.1 Coding Progress
The Coding Progress tab shows the count of documents that have been coded in the queue over time.
Coding data is reported in 15-minute increments.
The numbers for Est. Total Docs and Est. Docs Remaining are updated every time the queue refreshes.
Because they update at a different time than the coding data, these numbers are estimates.
3.2.1.2 Relevance Rate
The Relevance Rate tab shows the relevance rate over time. This can be shown overall or by user.
Each solid data point represents 100 documents, and a hollow data point represents any remainder. For
example, if 201 documents have been coded, there will be 3 points: 2 solid points for each set of 100, and 1
hollow point for the final document.
Other details about the data points include:
- If there is more than one data point in a 15-minute increment, the chart shows them as two points on a vertical line. This can happen if many reviewers are coding quickly.
- The Date field for a data point is the date the last document in the set of 100 was logged.
For prioritized review queues, the relevance rate usually declines over time. However, the relevance rate may spike if many new documents are added to the queue or if the definition of relevance changes during review. For saved search queues, the shape of the relevance rate graph varies depending on the saved search being used.
3.2.1.3 Review Speed
The Review Speed tab shows the number of documents coded per hour. Data is reported in 15-minute
increments.
The Review Speed data can be shown overall or by user. When it's set to show all reviewers, the line chart
shows a weighted average of the review speeds of the reviewers. It does not report their aggregate review
speed.
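The difference between a weighted average and an aggregate speed is easier to see numerically. This hypothetical Python sketch (not product code) shows one way a document-weighted average differs from simply summing the reviewers' speeds; the exact weighting the chart uses is an assumption here.

```python
def weighted_average_speed(reviewer_stats):
    """Average the reviewers' speeds, weighting each reviewer by the number
    of documents they coded in the interval.

    `reviewer_stats` is assumed to be a list of (docs_coded, docs_per_hour) pairs.
    """
    total_docs = sum(docs for docs, _ in reviewer_stats)
    if total_docs == 0:
        return 0.0
    return sum(docs * speed for docs, speed in reviewer_stats) / total_docs

stats = [(90, 60.0), (10, 20.0)]          # one fast reviewer, one slow reviewer
print(weighted_average_speed(stats))      # 56.0 docs/hour (weighted average)
print(sum(speed for _, speed in stats))   # 80.0 docs/hour (aggregate, not what the chart shows)
```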
3.2.1.4 Queue History
The Queue History tab shows the state of the queue at every previous refresh. This is shown only as a table,
not a chart.
The columns vary depending on the queue type. For saved search queues, it also depends on whether
positive and negative choices are selected for the review field.
Possible columns include:
- Refresh Start Time
- Refresh End Time
- Total Items—the total number of documents in the data source.
- Refresh Type—this can be either Auto or Manual.
- Coded <Positive Choice> (optional for saved search queues)
- Coded <Negative Choice> (optional for saved search queues)
- Uncoded Predicted <Positive Choice> (prioritized review queues only)
- Uncoded Predicted <Negative Choice> (prioritized review queues only)
- Coverage Mode (prioritized review queues only)—whether the queue was in Coverage Mode during the refresh.
All document counts show the number of documents in that category at the Refresh End Time.
3.2.2 Prioritized review charts
The Rank Distribution chart is available for prioritized review queues. This chart helps you compare the model's predictions to reviewers' actual coding decisions. It shows the number of documents at each rank, from 0 to 100, color-coded by the reviewers' coding decisions on those documents.
A low relevance rank means that the model predicts that the document is more likely to be coded negative,
and a high relevance rank means that the model predicts the document is more likely to be coded positive.
If you zoom out on the Rank Distribution chart, you may see documents with ranks below zero. These are
documents that could not be classified. For more information, see Understanding document ranks on the
next page.
3.2.3 Reviewed Documents table
The Reviewed Documents table shows which reviewer coded each document, how long the reviewer took,
and how it was coded.
For saved search queues, the columns depend on whether a review field is set, as well as if positive and
negative choices are selected.
Possible columns include:
- Control Number—the control number of the document.
- Reviewer—the assigned reviewer's name.
- Coded Time—the check-in time for the document. If the document is still checked out, this is blank.
- Coding Duration—how much time passed between the document being checked out to the reviewer and checked back in. This is reported in hours, minutes, and seconds (HH:MM:SS).
- Queue Coding Decision (optional for saved search queues)—how the document was coded when the reviewer checked it back in. If the document was skipped, this is blank.
- <Review Field Name> (optional for saved search queues)—the current coding designation of the document.
3.3 Deleting a queue
Queues can be edited or deleted from the Review Library tab.
To delete a queue:
1. Navigate to the Review Library tab.
2. Click on the queue you want to delete.
3. Click Delete. A confirmation pop-up will appear.
4. Click Delete again.
After the process completes, you will return to the main Review Library tab.
Deleting a queue does not remove any of the coding decisions or rank values that have been assigned to
the documents.
Note: If you delete a main queue that has a validation queue linked to it, the validation queue is also deleted. For more information on validation queues, see Review validation on page 32.
3.4 Fixing a misconfigured queue
If a required field or object that a queue relies on is deleted or moved, this puts the queue into a warning
state. Any queue preparation or auto-refresh stops, and a message appears at the top of the Review Center
tab directing you to the field or object that needs to be fixed. Your reviewers also see a warning at the top of
the Review Queue page telling them which queue is misconfigured and that they should alert their
administrator.
When this happens, we recommend pausing the queue and checking its settings. For example, if the saved
search was deleted, you may need to link the queue to a new saved search. If a required field was deleted,
you may need to create a new one.
If you have checked the queue's settings and still see warnings, contact Product Support.
3.5 Understanding document ranks
During prioritized review, the AI classifier assigns a rank to each document. These ranks are stored in the
Rank Output field, and they determine the order in which reviewers will see documents.
Most document ranks range from 0 to 100. The higher the score, the stronger the prediction that the
document will be coded on the positive choice. The AI classifier recalculates ranks every time the queue
refreshes, and the highest-ranking documents are served up to reviewers.
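As a minimal illustration of how ranks map to predictions (hypothetical Python, using the default cutoff of 50 described in the queue settings; not the classifier itself):

```python
def predicted_label(rank, positive_cutoff=50.0):
    """Map a document rank to a predicted designation.

    Ranks at or above the cutoff are predicted positive; ranks below it are
    predicted negative. Negative ranks (-1, -2, -3) mean the classifier
    could not classify the document.
    """
    if rank < 0:
        return "unclassified"
    return "predicted positive" if rank >= positive_cutoff else "predicted negative"

for rank in (87.5, 49.9, -2):
    print(rank, predicted_label(rank))
# 87.5 predicted positive
# 49.9 predicted negative
# -2 unclassified
```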
Notes:
- Active Learning and Review Center use similar ranking systems, but the classifiers are not the same. If you use both tools to classify the same document, it will receive separate scores. These scores can be very different depending on circumstances.
- In order to improve efficiency and performance, Relativity reserves the right to update the prioritized review queue's AI classifier during software upgrades. Although we work hard to minimize disruptions, these upgrades can cause minor differences in document ranking. As a result, administrators may occasionally see minor variations in document ranks after a queue is refreshed, even without any new document coding.
If the classifier cannot classify a document, it will assign the document a value below zero. These values
are:
Negative rank | Document error
-1 | An error occurred while processing the data through the classifier.
-2 | The extracted text field is empty. If you see this rank, consider making a saved search queue to review these documents separately.
-3 | The document's extracted text field is larger than the limit of 600 KB. If you see this rank, we recommend filtering out large documents from your saved search to improve the performance of the classifier.
3.6 Tracking reviewer decisions
You can view coding decisions made by each reviewer in the Reviewed Documents table. For more information, see Reviewed Documents table on page 24.
Alternatively, you can also use the following methods.
3.6.1 Using the Documents tab
The Review Center Coding fields track the original reviewer names, decisions, and dates. You can add
these to views and saved searches from the Documents tab.
The field names are:
- Review Center Coding::Reviewed On—the date of the original coding decision. Dates are based on the UTC time zone.
- Review Center Coding::Reviewed By—the name of the reviewer who made the original coding decision.
- Review Center Coding::Field Name—the name of the Review Field for the queue.
- Review Center Coding::Queue—the name of the Review Center queue that contains the document.
- Review Center Coding::Value—the reviewer's coding decision.
For more information on creating views and saved searches, see Creating a view and Creating or editing a
saved search on the RelativityOne documentation site.
3.6.2 Using the Field Tree
The Field Tree helps you get a quick overview of document coding decisions. It does not show which reviewer made each decision.
To view coding decisions using the Field Tree:
1. Navigate to the Documents tab.
2. In the browser panel, click on the tag symbol to open the Field Tree.
3. Scroll to the folder labeled Review Center and expand it.
4. Click on your queue's name. This shows all documents currently in the queue, plus any documents that were coded in the queue but later removed.
Depending on your queue's history, there may also be other tags nested underneath it:
- <queue name> Validation <#>—this lists documents in an attached Validation queue. If the queue has several attached Validation queues, each one will have its own tag.
- Removed—this lists any documents that were coded in the queue, but later removed from the data source.
If you rename or delete a queue, this renames or deletes the matching Field Tree tags also.
3.6.3 Using the Track Document Field Edits by Reviewer application
The Track Document Field Edits by Reviewer application lets you see which reviewer made each coding
decision. You can set up the application individually for each of your queues.
Install the application using the instructions from Track document field edits by reviewer on the
RelativityOne documentation site.
When configuring the application:
1. Put your Reviewed On and Reviewed By fields into a saved search or view for monitoring.
2. Set your queue's review field as the Field to Monitor.
3. For most use cases, set Track Initial Change Only? to Yes. This sets it to track the first reviewer of the document, instead of overwriting the Reviewed On and Reviewed By fields every time a user edits the document.
If you set up the application after starting your queue, you can still see previous coding decisions by
following the steps under Populating Historical Records.
3.7 Moving Review Center templates and queues
Review Center templates and queues are Relativity Dynamic Objects (RDOs), which typically can be moved
across workspaces or instances with Relativity Integration Points and Relativity Desktop Client. However,
because of the complexity of an active queue, we do not support moving active queues. Doing so could
damage your Review Center environment.
We do support moving queue templates across workspaces or instances using Relativity Integration Points
and Relativity Desktop Client. This process is safe for your environment.
4 Reviewing documents using Review Center
The Review Queues tab is the starting point for reviewers. Every Review Center queue that a reviewer is
assigned to shows up here.
This topic provides step-by-step instructions for accessing a queue and reviewing documents.
4.1 Reviewing documents in the queue
To review documents in a queue:
1. Navigate to the Review Queues tab.
2. Each queue you are assigned to has a separate card. Locate the card with the same name as the queue you want.
3. Click Start Review. This opens the document viewer.
4. Review the document as specified by your admin, then enter your coding choice.
5. Click Save and Next. The next document will appear for review.
If you do not see a Start Review button, either the queue is paused, or the admin has not started the queue.
Talk to your administrator to find out when the queue will be ready.
For more information on using the document viewer, see Viewer in the Admin guide.
4.2 Finding previously viewed documents
As you work through the queue, you can see documents you already reviewed in the queue by clicking on
Documents in the left-hand navigation bar. This opens the Documents panel.
To view a document, click on its control number in the panel.
To return to your current document, click on the Navigate Last button in the upper right corner of the
document viewer.
Note: When you filter columns in the Documents panel, the filters only apply to documents on the current
page of the panel. For a comprehensive list of results, filter within the Documents tab of Relativity or run a
search from the saved search browser or field tree.
4.3 Queue card statistics
If your admin has enabled it, you may see some statistics displayed on the queue cards.
The statistics you may see are:
- Total docs in queue—the total number of documents in this queue, across all reviewers.
- Total remaining uncoded docs—the total number of uncoded documents in this queue, across all reviewers.
- My docs reviewed total—how many documents you have reviewed total in this queue.
- My docs reviewed today—how many documents you have reviewed today in this queue. These are counted starting at midnight in your local time.
4.4 Viewing the dashboard
Your admin may give you access to the Review Center dashboard. The dashboard shows how the review is
progressing, including statistics and visualizations.
For more information on the Review Center dashboard, see Monitoring a Review Center queue on page 15.
4.5 Best practices for Review Center review
When reviewing documents in a Review Center queue, we recommend the following guidelines:
- Double check—always check the extracted text of a document to be sure it matches the content in other views. Whenever possible, review from the Extracted Text viewer.
- Stay consistent—check with fellow reviewers to make sure your team has a consistent definition of relevance. The AI classifier can handle occasional inconsistencies, but you’ll get the best results with coordinated, consistent coding.
- When in doubt, ask—if something confuses you, don't guess. Ask a system admin or project manager about the right course of action.
4.5.1 Coding according to the "four corners" rule
Review Center's AI classifier predicts which documents will be responsive or non-responsive based on the
contents of the document itself. Family members, date range, custodian identity, and other outside factors
do not affect the rankings. Because of this, the AI classifier learns best when documents are coded based
only on text within the four corners of the document.
When you code documents as positive or negative in a Review Center queue, you are both coding the
document and teaching the AI classifier what a responsive document looks like. Therefore, your positive or
negative coding decisions should follow the "four corners" rule: code only based on text within the body of the document, not based on surrounding factors.
Having one or two documents that fail this rule will not harm the overall accuracy of Review Center's
predictions. However, if you want to track large numbers of documents that are responsive for reasons
outside of the four corners, we recommend talking to the project manager about either setting up a third,
neutral choice on the coding field for those, or adding a secondary coding field. Neutral choices and other
coding fields are not used to train the AI classifier.
4.5.1.1 Common scenarios that fail the "four corners" rule
The following scenarios violate the "four corners" rule and do not train the AI classifier well:
- The document is a family member of another document which is responsive.
- The document comes from a custodian whose documents are presumptively responsive.
- The document was created within a date range which is presumptively responsive.
- The document comes from a location or repository where documents are typically responsive.
For example, the following email has a responsive attachment. However, the body of the email—the text
within the four corners—is only a signature and a privacy disclaimer. Because the body of this email is not
responsive, this document does not pass the "four corners" rule.
4.5.2 Factors that affect Review Center's predictions
Not all responsive documents inform Review Center equally. The following factors affect how much the AI
classifier learns from each document:
- Sufficient text—if there are only a few words or short phrases in a document, the engine will not learn very much from it.
- Images—text contained only in images, such as a photograph of a contract, cannot be read by Review Center. The system works only with the extracted text of a document.
- Numbers—numbers are not considered by Review Center.
5 Review validation
Review validation evaluates the accuracy of a Review Center queue. The goal of validation is to estimate
the accuracy and completeness of your relevant document set if you were to stop the queue immediately
and not produce any unreviewed documents. The primary statistic, elusion rate, estimates how many
uncoded documents are actually relevant documents that you would leave behind if you stopped the queue.
The other statistics give further information about the state of the queue.
For a detailed explanation of how the validation statistics are calculated, see Review validation statistics on page 40.
Note: Review validation does not check for human error. We recommend that you conduct your own
quality checks to make sure reviewers are coding consistently.
5.1 Key definitions
The following definitions are useful for understanding review validation:
- Discard pile—the set of documents that are uncoded, skipped, or coded as neutral. This also includes documents that were being reviewed when validation started but whose coding had not yet been saved.
- Already-coded documents—documents that have already been coded as either positive or negative. These are counted as part of the validation process, but they will not be served up to reviewers a second time. Neutral-coded documents are considered part of the discard pile instead, and those may be served up a second time.
5.2 Determining when to validate a Prioritized Review queue
When a Prioritized Review queue is nearing completion, it can become more difficult to find additional
relevant documents. As you monitor your queue, the following dashboard charts can help you determine
when the queue is ready for validation:
- Rank Distribution—look for few or no unreviewed documents with a rank of 50 or higher.
- Relevance Rate—you should see a decline in the relevance rate progress line, indicating that very few responsive documents are being found.
When you believe you have found most of the relevant documents, run validation to estimate the accuracy
and completeness of your relevant document set.
For more information on the dashboard charts, see Charts and tables on page 22.
5.3 Starting a validation queue
When you are ready to validate your progress in a Review Center queue, you can start a linked validation
queue that samples documents from the discard pile and serves them to reviewers.
To set up the validation queue:
1. From the Review Center tab, click on the queue you want to validate.
2. Pause the queue.
   - If auto-refresh is turned on, turn it off.
   - If the queue is in the middle of refreshing, wait until the refresh has finished before starting validation.
   - If any documents are currently checked out to reviewers, release them. For more information, see Editing queues and other actions on page 18.
3. On the right side of the Queue Summary section, click on the three-dot menu and select Set up Validation.
   An options modal appears.
4. In the options modal, set the following:
   1. Validation Reviewer Groups—the user groups you want reviewing the queue.
   2. Cutoff—enable this to set a cutoff for the validation queue. For more information, see How setting a cutoff affects validation statistics on page 41.
   3. Positive Cutoff—if Cutoff is enabled, enter a custom cutoff value between 0 and 100 in this field. This rank will be used as the dividing line between documents predicted positive and documents predicted negative. Setting this value also adds a Precision statistic to the validation results.
   4. Set the sample size using three interconnected fields:
      1. Sample Size—this sets a fixed number of documents for the sample size. By default, this field is set to 1000 documents. The sample size must be larger than 5 and smaller than the size of the discard pile.
      2. Margin of Error Estimate (Elusion)—this calculates a size for the sample based on how accurate the Elusion statistic will be.
      3. Margin of Error Estimate (Recall)—this calculates a size for the sample based on how accurate the Recall statistic will be.
      Note: Each of these fields affects the others. For an explanation of how they work, see Choosing the validation settings below.
5. Click Save.
5.3.1 Choosing the validation settings
Validation always samples a specific number of documents, but there are three ways to choose the sample
size:
1. You can specify exactly how many documents you want to sample. Review Center automatically calculates the estimated margins of error for both Elusion and Recall based on the sample size you select.
   Note: This is equivalent to choosing the “fixed” option when configuring an Elusion with Recall test in Active Learning. In contrast with an Active Learning Elusion with Recall test, Review Center only samples the discard pile. This means that the sample size is also the number of documents that will need to be coded.
2. You can specify the desired margin of error for the elusion estimate and let Review Center calculate an appropriate sample size. Review Center also automatically calculates the corresponding recall margin of error.
   Note: This is equivalent to the “statistical” option when configuring an Elusion with Recall test in Active Learning.
3. You can specify the desired margin of error for the recall estimate and let Review Center calculate an appropriate sample size. Review Center also automatically calculates the corresponding elusion margin of error.
The final margin of error estimates may be slightly different from the ones chosen at setup, depending on
the documents found during validation. All validation statistics aim for a 95% confidence interval alongside
the margin of error.
The estimated elusion margin of error depends only on the sample size, and vice versa. Their relationship to
the estimated recall margin of error depends on the number of relevant documents that have already been
coded and the current size of the discard pile. It may vary among different validation samples, even within
the same review.
For more information on how validation statistics are calculated, see Review validation statistics on page 40.
5.3.2 Inherited settings
Each validation queue inherits these settings from the main queue:
- Queue Display Options
- Reviewer Document View
- Reviewer Layout
- Email Notification Recipients
To change them, edit the validation queue after creating it. For more information, see Editing a validation
queue on the next page.
5.4 Coding in a validation queue
Reviewers access the validation queue from the Review Queues tab like all other queues. Have reviewers
code documents from the sample until all documents have been served up.
For best results, we strongly recommend coding every document in the validation queue as positive or
negative. Avoid skipping documents or coding them as neutral. For more information, see How validation
handles skipped and neutral documents on page44.
5.5 Monitoring a validation queue
Validation statistics are reported on the Review Center dashboard in the same way as for any other queue. You can cancel validation from the three-dot menu, and you can pause validation by clicking the Pause button. All data in the charts and tables reflects the validation queue.
During validation, the Review Progress section changes to become a Validation Progress section, which
shows the progress of the validation queue. To view validation statistics instead, click the arrow next to the
section name, then select Validation Stats.
For more information on the validation statistics, see Reviewing validation results on page 37.
5.5.1 Editing a validation queue
You can change some of the queue settings at any time during validation.
To edit the validation queue:
1. On the right side of the Queue Summary section, click on the three-dot menu and select Edit.
2. Edit any of the following settings:
   - Reviewer Groups
   - Queue Display Options
   - Reviewer Document View
   - Reviewer Layout
   - Email Notification Recipients
3. Click Save.
For descriptions of the queue settings, see Creating a Review Center queue on page 8.
5.5.2 Releasing unreviewed documents
If a reviewer falls inactive and does not review the last few documents in a validation queue, you can release
those documents through the Queue Summary section of the dashboard. For more information, see Editing
queues and other actions on page18.
To see which documents are checked out to a reviewer, filter the Reviewed Documents table by the
reviewer's name. Any documents that are still checked out will show the Coded Time as blank. For more
information, see Reviewed Documents table on page24.
5.5.3 Tracking sampled documents
If you want to run your own calculations or view documents in the validation sample, you can track the
sampled documents from the Document list page. This process is optional.
To view sampled documents:
1. From the Documents tab, click on the Field Tree icon.
2. Expand the Review Center folder.
3. Expand the folder for the queue you're validating.
   Several subfolders appear.
4. Expand the folder titled [Queue Name] - Validation [Current Round Number]. If you have only run validation one time, the round number will be 1.
Each validation folder contains the documents selected for the sample. It also holds two sub-choices: one
for documents coded positive or negative, and one for skipped or neutral documents. As documents are
coded, they populate under these choices.
5.6 Accepting or rejecting validation results
After all documents in the validation queue have been reviewed, a ribbon appears underneath the Queue
Summary section. This ribbon has two buttons: one to accept the validation results, and one to reject them.
If you click Accept:
- The queue status changes to Validation Complete.
- The model remains frozen. Any future coding decisions will no longer be used to train the model, and the Review Progress statistics will not reflect any new coding.
- The Validation Progress strip on the dashboard displays the final validation statistics.
If you click Reject:
- The validation queue status changes to Rejected, and the main review queue changes to Paused.
- The main review queue re-opens for normal coding, and you can build the model again at any time. Any documents coded since validation began, including those from the validation queue itself, will be included in the model build.
- The Coding Progress strip on the dashboard displays the main queue's statistics.
You can run validation on the queue again at any later time, and you can reject validation rounds as many
times as needed. Even if you reject the results, Review Center keeps a record of them. For more
information, see Viewing results for previous validation queues on the next page.
5.6.1 Manually rejecting validation results
If you change your mind after accepting the validation results, you can still reject them manually.
To reject the results after accepting them:
1. On the right side of the Queue Summary section, click on the three-dot menu and select Reject Validation.
2. Click Reject.
After you have rejected the validation results, you can resume normal reviews in the main queue.
5.7 Reviewing validation results
After reviewers code all documents in the sample, the queue status changes to Complete. All validation
results appear in the Validation Progress strip on the Review Center dashboard.
The results include:
- Relevance Rate—the percentage of sampled documents that were coded relevant by reviewers, out of all coded documents in the sample. If any documents were coded as neutral, this statistic also counts them as relevant.
- Elusion Rate—the percentage of unreviewed documents that are predicted as non-relevant, but that are actually relevant. The range listed below it applies the margin of error to the sample elusion rate, which is an estimate of the discard pile elusion rate.
  Notes:
  - If you do not set a cutoff for your validation queue, this is calculated as the percentage of all unreviewed documents that are actually relevant.
  - Documents that are skipped or coded neutral in the validation queue are treated as relevant documents when calculating Elusion Rate. Therefore, coding all documents in the elusion sample as positive or negative guarantees the statistical validity of the calculated elusion rate as an estimate of the entire discard-pile elusion rate.
- Eluded Documents—the estimated number of relevant documents that have not been found. This is calculated by multiplying the sample elusion rate by the number of documents in the discard pile; see the worked example after this list. The range listed below it applies the margin of error to the document count.
- Precision—the estimated percentage of documents that would be responsive in a production that includes all documents coded positive, plus all documents at or above the Positive Cutoff. This statistic is only calculated if you set a cutoff when creating the validation queue.
- Recall—the percentage of documents that were coded relevant out of the total number of relevant documents, both coded and uncoded. The range listed below it applies the margin of error to the percentage.
- Richness—the percentage of relevant documents across the entire review queue. The range listed below it applies the margin of error to the percentage.
For more information about how these statistics are calculated, see Review validation statistics on page 40.
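As a purely hypothetical illustration of the Eluded Documents arithmetic (the counts below are invented for the example and are not product defaults):

```python
# Hypothetical validation sample drawn from a 50,000-document discard pile.
sample_size = 2000          # documents in the validation sample
sample_relevant = 80        # sample documents coded relevant by reviewers
discard_pile_size = 50000   # uncoded, skipped, or neutral documents

sample_elusion_rate = sample_relevant / sample_size           # 0.04, or 4%
eluded_documents = sample_elusion_rate * discard_pile_size    # about 2,000 documents
print(f"{sample_elusion_rate:.1%} elusion, ~{eluded_documents:.0f} eluded documents")
```

The range reported alongside Eluded Documents applies the elusion margin of error to this same multiplication.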
5.7.1 Recalculating validation results
If you have re-coded any documents from the validation sample, you can recalculate the results without
having to re-run validation. For example, if reviewers had initially skipped documents in the sample or coded
them as neutral, you can re-code those documents outside the queue, then recalculate the validation results
to include the new coding decisions.
To recalculate validation results:
1. On the right side of the Queue Summary section, click on the three-dot menu and select Recalculate Validation.
2. Click Recalculate.
5.7.2 Viewing results for previous validation queues
After you have run validation on a queue, you can switch back and forth between viewing the statistics for
the main queue and any linked validation queues that were completed or rejected. Viewing the statistics for
linked queues does not affect which queue is active or interrupt reviewers.
To view linked queues:
1. Click the triangle symbol near the right side of the Queue Summary section.
   A drop-down menu listing all linked queues appears.
2. Select the queue whose stats you want to view.
When you're done viewing the linked queue's stats, you can use the same drop-down menu to select the
main queue or other linked queues.
5.8 How adding or changing documents affects validation
Typically, review validation is linear: The administrator sets up the validation sample, the reviewers code the
sample, and the results are calculated from those documents. However, if documents are added or
removed, coded documents are re-coded, or other things happen to change the queue being validated, this
can affect the validity of the results.
5.8.1 Scenarios that require recalculation
The following scenarios can be fixed by recalculating statistics:
- Changing coding decisions on documents within the validation sample
- Changing already-coded documents outside the sample from positive to negative or negative to positive
- Adding already-coded documents to the queue after validation starts
In these cases, the sample itself is still valid, but the numbers have changed. For these situations,
recalculate the validation results to see accurate statistics.
For instructions on how to recalculate results, see Recalculating validation results on the previous page.
5.8.2 Scenarios that require a new validation queue
The following scenarios cannot be fixed by recalculation:
- Adding uncoded or neutral documents to the queue after validation starts
- Changing positive- or negative-coded documents outside the sample to skipped or neutral
In both of these cases, the validation sample is no longer a random sample of all uncoded or neutral documents. For these situations, we recommend starting a new validation queue.
6 Review validation statistics
Review Center provides several metrics for evaluating your review coverage: elusion, richness, recall, and
precision. Together, these metrics can help you determine the state of your Review Center project.
Once you have insight into the accuracy and completeness of your relevant document set, you can make an
educated decision about whether to stop the Review Center workflow or continue review.
For instructions on how to run Project Validation, see Review validation on page 32.
6.1 Defining elusion, recall, richness, and precision
Validation centers on the following statistics. For each of these, validation also reports a confidence interval:
- Elusion rate—the percentage of unreviewed documents that are predicted as non-relevant, but that are actually relevant. The rate is rounded to the nearest tenth of a percent.
- Recall—the percentage of truly positive documents that were found by the review.
- Richness—the percentage of relevant documents across the entire review.
- Precision—the percentage of found documents that were truly positive. This statistic is only calculated if you set a cutoff when creating the validation queue.
The calculations for elusion and recall change depending on whether the validation queue uses a cutoff. For
more information, see How setting a cutoff affects validation statistics on the next page.
For each of these metrics, the validation queue assumes that you trust the human coding decisions over
machine predictions, and that the prior coding decisions are correct. It does not second-guess human
decisions.
Note: Validation does not check for human error. We recommend that you conduct your own quality
checks to make sure reviewers are coding consistently.
6.2 Groups used to calculate validation metrics
Validation divides the documents into groups based on two distinctions:
- Whether or not the document has been coded.
- Whether or not the document is relevant.
Together, these distinctions yield four buckets:
1. Coded, not relevant—documents that have been coded but are not relevant.
2. Coded, relevant—documents that have been coded and are relevant.
3. Uncoded, predicted not relevant—documents that have not been coded and are predicted not relevant.
4. Uncoded, predicted relevant—documents that have not been coded and are predicted relevant.
At the start of validation, the system knows exactly how many documents are in buckets 1 and 2.
- Coded documents—have a positive or negative coding label.
- Uncoded documents—have not received a positive or negative coding label. This includes any documents that:
  - have not been reviewed yet.
  - are being reviewed at the moment the validation starts, but their coding has not been saved yet.
  - were skipped.
  - received a neutral coding label.
The system also knows how many documents are in buckets 3 and 4 altogether, but not the precise
breakdown between the two buckets.
You could find out by exhaustively coding the uncoded documents, but that’s time-consuming. Instead,
review validation uses statistical estimation to find out approximately how many are in each bucket. This
means that any statistics involving bucket 3 or 4 will include a confidence interval that indicates the degree
of uncertainty about how close the estimate might be to the true value.
Notes:
- These buckets are determined by a document's status at the start of Project Validation. For the purpose of these calculations, documents do not "switch buckets" during the course of validation.
- If you choose not to set a cutoff for your queue, buckets 3 and 4 are combined, and all uncoded documents are predicted not relevant. For more information, see How setting a cutoff affects validation statistics below.
6.3 How setting a cutoff affects validation statistics
When you set up a validation queue, you have the choice of setting a cutoff. This cutoff rank is used as the
dividing line between documents that are predicted positive or predicted negative. Setting a cutoff also
enables the Precision statistic.
Depending on your choice, the calculations change as follows:
- If you set a cutoff, Review Center assumes that you still expect some of your uncoded documents to be positive. All statistics are calculated with the assumption that you plan to produce all positive-coded documents, plus all uncoded documents at or above the Positive Cutoff rank (buckets 2 and 4). These documents are considered "found."
- If you do not set a cutoff, Review Center assumes that you do not expect any uncoded documents to be positive. All statistics are calculated with the assumption that you plan to produce only the positive-coded documents (bucket 2). Only bucket 2 is considered "found." Without a cutoff, there is no bucket 4; all uncoded documents are treated as bucket 3.
Whether or not you should use a cutoff depends on your review style. TAR1-style reviews, which focus on
having humans code all positive documents, usually do not use a cutoff. TAR2-style reviews, which focus
on refining the model until it can be trusted to predict some of the positive documents, usually use a cutoff.
6.3.1 High versus low cutoff
If you choose a high cutoff, this generally increases the precision, but lowers recall. If you choose a low
cutoff, this generally increases the recall, but lowers precision.
In other words, choosing a high cutoff makes it likely that a high percentage of the documents you produce
will be positive. However, it also increases the likelihood that some positive documents will be mistakenly
left out. Conversely, if you choose a low cutoff, it's more likely that you may produce a few negative
documents. However, you have better odds of finding ("recalling") all of the positive documents.
6.4 Validation metric calculations
6.4.1 Elusion rate
This is the percentage of uncoded, predicted non-relevant documents that are relevant.
Elusion = (Relevant documents in bucket 3) / (All documents in bucket 3)
Elusion measures the "error rate of the discard pile," that is, the rate of relevant documents in bucket 3.
Documents that were coded relevant before starting project validation are not included in the calculation,
regardless of their rank score. Documents coded outside of the queue during validation count as "eluded"
documents.
If you do not set a cutoff for your validation queue, this is calculated as the number of documents in the
validation sample that were coded relevant, divided by the entire size of the sample.
6.4.2 Recall
Recall is the number of documents that were either previously coded or correctly predicted relevant, divided
by the total number of documents coded relevant in any group.
Recall = (Bucket 2 + relevant documents in bucket 4) / (Bucket 2 + relevant documents in buckets 3 and 4)
Recall measures the percentage of truly positive documents that were found by the review. Recall shares a
numerator with the precision metric, but the denominators are different. In recall, the denominator is "what is
truly relevant;" in precision, the denominator is "what we are producing." Documents coded outside of the
queue during validation count against recall.
If you do not set a cutoff for your validation queue, this is calculated as the number of documents that were
previously coded relevant, divided by the total number of documents coded relevant in any group.
6.4.3 Richness
This is the percentage of documents in the review that are relevant.
Richness = (Bucket 2 + any relevant documents found in buckets 3 and 4) / (All buckets)
Similar to recall, review validation estimates the number of relevant documents in bucket 4 by multiplying
the estimated elusion rate by the number of uncoded documents. This is only done for the top half of the
formula. For the bottom half, review validation only needs to know the size of the review.
6.4.4 Precision
Precision is the percentage of truly positive documents out of all documents that were expected to be
positive. Documents that were predicted positive, but coded negative, lower the precision percentage.
Precision = (Bucket 2 + relevant documents in bucket 4) / (All documents in buckets 2 and 4)
The precision statistic is only calculated if you set a cutoff when creating the validation queue. It assumes
that you plan to produce all documents coded positive, plus all documents at or above the Positive Cutoff.
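To tie the four formulas above together, here is a minimal sketch that computes each statistic from the bucket counts described in Groups used to calculate validation metrics above. The counts and the function name are hypothetical, and in practice the relevant counts for buckets 3 and 4 are statistical estimates derived from the validation sample rather than exact values.

```python
def validation_metrics(coded_not_relevant, coded_relevant,
                       uncoded_pred_not_relevant, relevant_in_bucket3,
                       uncoded_pred_relevant, relevant_in_bucket4):
    """Compute elusion, recall, richness, and precision from bucket counts.

    Buckets follow the four groups above:
      bucket 1 = coded, not relevant        bucket 2 = coded, relevant
      bucket 3 = uncoded, predicted not relevant
      bucket 4 = uncoded, predicted relevant (empty if no cutoff is set)
    """
    total = (coded_not_relevant + coded_relevant +
             uncoded_pred_not_relevant + uncoded_pred_relevant)

    elusion = relevant_in_bucket3 / uncoded_pred_not_relevant
    recall = (coded_relevant + relevant_in_bucket4) / (
        coded_relevant + relevant_in_bucket3 + relevant_in_bucket4)
    richness = (coded_relevant + relevant_in_bucket3 + relevant_in_bucket4) / total
    precision = (coded_relevant + relevant_in_bucket4) / (
        coded_relevant + uncoded_pred_relevant)
    return elusion, recall, richness, precision

# Hypothetical counts: 30,000 coded not relevant, 20,000 coded relevant,
# 45,000 predicted not relevant (est. 900 relevant), 5,000 predicted
# relevant (est. 4,000 relevant).
print(validation_metrics(30000, 20000, 45000, 900, 5000, 4000))
# -> elusion 2.0%, recall ~96.4%, richness ~24.9%, precision 96.0%
```

When no cutoff is set, bucket 4 is empty: recall reduces to the previously coded relevant documents divided by all relevant documents, and precision is not reported, matching the descriptions above.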
6.5 How the validation queue works
When you start validation, the system puts all sampled documents from buckets 3 and 4 into the queue for
reviewers to code.
Documents coded during project validation do not switch buckets during the validation process. Documents
that started in buckets 3 and 4 are still considered part of 3 and 4 until validation is complete. This allows the
system to keep track of correct or incorrect predictions when calculating metrics, instead of lumping all
coded documents in with those which were previously coded.
Review Center reports statistics after all documents in the sample are reviewed. A document is considered
reviewed if a reviewer has viewed the document in the Viewer and has clicked Save or Save and Next.
6.6 How validation handles skipped and neutral documents
We strongly recommend coding every document in the validation queue as relevant or non-relevant.
Skipping documents or coding them neutral reduces the randomness of the sample, which introduces bias into the validation statistics. To counter this, Review Center gives conservative estimates.
Each validation statistic counts a skipped or neutral document as an unwanted result.
The following table shows how skipped or neutral documents negatively affect each statistic.
Low-ranking document (validation with cutoff), or any document in a validation without a cutoff:
- Effect on Elusion: increases the elusion rate (counts as relevant)
- Effect on Recall: lowers the recall rate (counts as relevant)
- Effect on Richness: raises the richness estimate (counts as relevant)
- Effect on Precision: no effect

High-ranking document (validation with cutoff):
- Effect on Elusion: no effect
- Effect on Recall: lowers the recall rate slightly (counts as if it weren't present)
- Effect on Richness: raises the richness estimate (counts as relevant)
- Effect on Precision: lowers the precision rate (counts as non-relevant)
7 Review Center security permissions
This page contains information on the security permissions required for creating and interacting with the
Review Center application.
7.1 Creating a Review Center template or queue
To create a Review Center template or queue, you need the following permissions:
Object Security:
- Queue Refresh Trigger - View, Edit, Add
- Review Center Queue - View, Edit, Add
- Workspace - Edit Security

Tab Visibility:
- Review Library
- Review Center
7.2 Editing and controlling Review Center queues
To edit an existing Review Center queue and use dashboard controls such as Prepare or Start, you need
the following permissions:
Object Security:
- Queue Refresh Trigger - View, Edit, Add
- Review Center Queue - View, Edit
- Workspace - Edit Security

Tab Visibility:
- Review Center
Note: The Workspace - Edit Security permission is only required to edit the assigned reviewer group.
7.3 Deleting a Review Center template or queue
To delete a Review Center template or queue, you need the following permissions:
Object Security:
- Queue Refresh Trigger - View, Edit, Add
- Review Center Queue - View, Edit, Delete

Tab Visibility:
- Review Library

Mass Operation:
- Delete
7.4 Viewing the Review Center dashboard
To view the Review Center dashboard, you need the following permissions:
If you want a user group to only see specific queues on the dashboard, you can restrict a queue using item-
level security on the Review Library tab. For more information, see Security and permissions in the Admin
guide.
7.5 Tracking reviewer decisions from the Documents tab
To track reviewer coding decisions using the Documents tab or the Field Tree, you need the following
permissions:
Object Security:
- Review Center Coding - View

Tab Visibility:
- Documents

Browsers:
- Field Tree
Users with access to the Review Center dashboard can also track reviewer decisions using the Reviewed
Documents table. For more information, see Tracking reviewer decisions on page 26.
7.6 Reviewer permissions
A reviewer group accessing a Review Center queue and coding documents must have the following
permissions:
Object Security:
- Document - View, Edit
- Review Center Queue - View

Tab Visibility:
- Review Queues
For more information on assigning reviewer groups to a queue, see:
- Setting up the reviewer group on page 10
- Creating a new queue from a template on page 13
Proprietary Rights
This documentation (“Documentation”) and the software to which it relates (“Software”) belongs to
Relativity ODA LLC and/or Relativity’s third party software vendors. Relativity grants written license
agreements which contain restrictions. All parties accessing the Documentation or Software must: respect
proprietary rights of Relativity and third parties; comply with your organization’s license agreement,
including but not limited to license restrictions on use, copying, modifications, reverse engineering, and
derivative products; and refrain from any misuse or misappropriation of this Documentation or Software in
whole or in part. The Software and Documentation is protected by the Copyright Act of 1976, as amended,
and the Software code is protected by the Illinois Trade Secrets Act. Violations can involve substantial
civil liabilities, exemplary damages, and criminal penalties, including fines and possible imprisonment.
©2024. Relativity ODALLC. All rights reserved. Relativity® is a registered trademark of Relativity
ODA LLC.