
Blog: Scale annotation quality


One of the challenges of working with annotations at scale is training and evaluating human annotators. Here we cover some of the ways you can do that with Diffgram.

Is this guide right for me?

If any of these apply:

  • You have a single Subject Matter Expert (SME) doing annotations, who is not familiar with deep learning concepts and wants step-by-step instructions.
  • You have more than one SME doing annotations.
  • You are outsourcing or crowdsourcing annotations, or are an outsourcing firm.

Naturally this is a more complex feature, and this guide is generally aimed at folks operating medium to large scale projects who want an in-depth walkthrough.

Primary flow of Diffgram

The primary flow of using Diffgram is a cycle of importing data, training models, and updating those models, primarily by changing the data. Making use of the deep learning system, and collecting feedback to channel back to Diffgram, is handled in your system. More on this here.

Benefits of scaling your annotation efforts through Diffgram Jobs

  • Same abstraction for many workflows, including in-house, crowd-sourcing, or outsourcing to a specific organization.
  • Analytics: insight into SME performance.
  • All your files are in one place. No need to “send” files to annotation firms, or worry about complex integrations. If you are building brains with Diffgram too, you can set up a complete workflow, from raw data to business value, all inside Diffgram.

When working with larger teams, Jobs manages the task graph creation for you based on settings you control, and:

  • By default, an SME can only access the task they are on. This provides an effective way to have a large team working on a job without exposing all the data.
  • Embed complex guides, including different guides for reviewers.
  • Issue training and Exams. Exams are repeatable jobs that can grant credentials. This training step can be valuable to scale your annotation efforts and onboard new SMEs. Jobs may require credentials, so you can automatically restrict work.

High level overview of jobs

The Jobs feature is concerned with upgrading data. One way to think of a job is as a batch of work.

The high level process is:

  • Create a new job. This is usually done by a member of the software or business team. Tasks are automatically created by the system based on the job setup.
  • Annotation Tasks are done in the job. The work is done by the SME, i.e. a doctor, engineer, lawyer, or any professional who is an expert in the visual area, such as policing, property assessment, or cooking.
  • The job outputs completed Files to be used by your software system, to create a new deep learning system, or both. This includes merging data from Tasks.

This is a cycle. Once the new system is created, it will create new data to be upgraded, or there is a continuous stream of data coming in.

Creating a job: what’s at play

  • The job object itself. This relates to settings like share type, naming, and launching.
  • Guide(s). This is what the SME sees when they go to do the task. A guide has a name and a markdown description.
  • File List. The data to be worked on. These are the files (e.g. an image or video) to be completed.
  • Credentials. This is part of quality control: determining who works on the job and what comes out of it.

A job is created within the scope of a project, and by default inherits the labels from the project.

Understanding task vs file

A task represents a single unit of work for an SME, whereas a file is the data being upgraded. The system creates the tasks for you; they are available after the job is launched.

One of the central challenges here is quality control. Having multiple SMEs look at the same data, and having separate draw and review tasks and guides, is part of the solution.

To see why this is relevant, imagine we have one file that we wish to have 3 SMEs complete. We want every 3rd draw task to be reviewed, but not by the SME who did the original draw task. Afterwards, we want the 3 files to be merged into a single output file. Here’s an example of what that looks like:

This graph gets generated by simply setting passes_per_file and review_frequency in the job setup.
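As a rough sketch from the SDK side, that could look like the following. Treating passes_per_file and review_frequency as job.new() keyword arguments is an assumption here; check the SDK reference for the exact names.

# Sketch only: assumes passes_per_file and review_frequency are accepted
# as job.new() keyword arguments, matching the job setup settings above.
job = project.job.new(
    name = "traffic lights, batch 7",
    file_list = file_list,    # the files to be annotated
    passes_per_file = 3,      # 3 SMEs draw on each file
    review_frequency = 3,     # every 3rd draw task gets a review task
    launch = True)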

While a job could have a single file, we encourage each job to be a meaningful batch of work. For example, 64, 256, or 512 may all be reasonable batch sizes depending on your product scope.


Create a new job

From the Diffgram SDK

Quickstart

Job quickstart on GitHub

Imagine you have a single file that you want to get annotated.

We first call from_local() to get the Diffgram file.

You can create a new job with job.new(), name it, and pass the file. This creates a new job in our project.

file = project.file.from_local(path)

job = project.job.new(
    name = "my job from SDK",
    file_list = [file],
    launch = True)

That’s it! Now normally we would want a guide, and to send multiple files at a time, so let’s look at that:

Normal

Job Normal on GitHub

import glob

directory_path = "../images_test/*"
path_list = glob.glob(directory_path)

file_list = []
for path in path_list:
    file = project.file.from_local(path)
    file_list.append(file)

Here we are using glob to get a list of paths from a directory. Then we are iterating through and creating a Diffgram file from each path. By appending them to a list we can easily call:

job.new(file_list = file_list)
# or, if the job has already been created
job.file_update(file_list = file_list)

As you can see this is flexible: we can do it all in one command, or create the job and then attach many files.

For example, if we had a long-running process, we could call file_update() many times before launching the job, as in the sketch below. file_update() can also be used to remove files.
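As a minimal sketch of that pattern, assuming the job can be created unlaunched and launched later (the directory path is a placeholder):

import glob

# Sketch: create the job unlaunched, then attach files in batches
# as they arrive from a long-running process.
job = project.job.new(
    name = "rolling batch",
    launch = False)   # assumes job.new() accepts launch = False with no files yet

for path in glob.glob("../incoming/*"):   # placeholder directory
    file = project.file.from_local(path)
    job.file_update(file_list = [file])

# Launch once the batch is complete, e.g. from the UI.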

This process can also be done via the UI. You can view the files uploaded through the SDK too.


Guide

Now normally we would want to attach a guide. This can be done from code, or in the UI.

guide = project.guide.new(
    name = "Traffic lights",
    description_markdown = "my description")

# Then you may pass guide to job.new()
# or call job.guide_update(guide = guide)

A guide has a name and a markdown description.

A job can have multiple types of guides. Most commonly a “Draw” guide and a “Review” guide.
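As a rough sketch of attaching both kinds, note that the kind argument below is hypothetical, used only to illustrate the draw/review split; check the SDK reference for how a guide is actually marked as Draw or Review.

draw_guide = project.guide.new(
    name = "Traffic lights - draw",
    description_markdown = "Box every traffic light, including partially occluded ones.")

review_guide = project.guide.new(
    name = "Traffic lights - review",
    description_markdown = "Check each box is tight and correctly labeled.")

# kind = ... is an assumed parameter, shown for illustration only
job.guide_update(guide = draw_guide, kind = "draw")
job.guide_update(guide = review_guide, kind = "review")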

While a guide could be as simple as “Annotate all the parked cars”, a guide that brief is unlikely to get good results.

A good example of a guide is the nuScenes Annotator Instructions, which is 380 lines long.

If you aren’t familiar with markdown, it’s a lightweight text format that’s quick to write and read. It allows embedding of headers, emphasis, images, videos, etc.
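For example, a short guide description in markdown might look like this (the image URL is a placeholder):

description_markdown = """
# Parked cars

Draw a polygon around **every** parked car.

- Include cars that are partially occluded.
- Skip cars that are in motion.

![Example annotation](https://example.com/parked_car_example.jpg)
"""

This string can then be passed to project.guide.new() as shown above.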

Guides are saved to your project for reuse. You can have many guides and improve them over time.

You can create a guide visually in Diffgram in Job / Guide / New. It has a markdown renderer so you can preview exactly what it will look like.

Here’s how to create a guide in the UI

The equivalent SDK code is:

guide = project.guide.new(
    name = "Traffic lights",
    description_markdown = "# Markdown supported <3"
)

The guide then shows up for the annotator right in their annotation menu.

Share types

  • Market, visible to all secure users
  • Project, visible to all project users
  • Organization (Org), visible to your org

This offers complete flexibility for your annotation work.

Project

If you want to keep your work strictly in-house, you can use the project visibility. This is the least permissive option: only those with project access can view the job.

Organization

Share the job across an organization. Some examples:

  • You want to share the job with an external annotation firm
  • You want to share the job internally, but don’t want someone to have Editor level access to the project.

Market

Most permissive. This will alert annotation firms on Diffgram that you would like your job completed at the bid you requested. Any organization may apply to work on your job.
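If share type can also be set from the SDK at creation time, it might look something like the sketch below. The share keyword and its values are assumptions; this guide only shows share types being chosen in the UI.

# Hypothetical: share = ... is an assumed keyword, not a confirmed SDK
# parameter; "project" / "org" / "market" mirror the UI options above.
job = project.job.new(
    name = "in-house labeling",
    file_list = file_list,
    share = "project",
    launch = True)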


Credentials

Credentials give finer-grained control over who does your annotation work. Different types of annotation require different skill sets. Credential examples:

Polygon annotator

A general exam verifying that the person is familiar with the polygon interface.

Traffic light expert

An exam of 10 traffic light files. Must get over 80% to pass.

Offline credential — Medical Doctor

Credential verified offline.


Exams

Exams are designed to be practice for new SMEs in general, to act as a gatekeeper for arm’s-length SMEs, and to organize access for larger projects.

Creating an Exam

An Exam takes your existing ground truth data and uses it as a “gold standard” against which new annotations are measured. Exams can also grant credentials, which can then be used by jobs to automatically restrict work.

Diffgram automatically uses your existing annotation data as ground truth. Exams can also award digital credentials, which are valuable to Trainers and can be a requirement for normal work.

How to create a new Exam Template:

Exams are a type of job.

Create a new job

Set the Type to “Exam”.
Exams always create a 1:1 ratio of draw : review tasks.

If you are a Trainer Org, setting it visible to your Org is likely your best option to get started.

Attach files for the exam. 
Existing instances will automatically be used to create the gold standard for reviews.

Select credentials to award.
These will optionally be awarded after the Trainer passes the Exam.

A single exam may grant many credentials.

Select guides

Now you have created an Exam! See how to complete an Exam for the user’s perspective.

Launch your exam

View your Exam in the job list:

Now Trainers in your org (or your team if you share the project) can click “Apply” to launch their own instance of your Exam.

Exam reviews

Exam review tasks have 2 key parts:

  • Scoring existing instances
  • Detecting missed instances

After completion, save the file.

Saved tasks will immediately show up under Exam results.

Scoring existing instances

Select a star rating. Click the “Focus” button to show only the selected instance.

Select missing instances

Gold standard instances are rendered in gold.

  • Find instances that don’t match any reviewed instances
  • Click “Missing”. These are now flagged, and the word “Missing” will render on the instance.
  • If needed, click undo to reset as “not missing”.

Exam results — scoring an exam and granting credentials

The exam results screen shows a list of reviewed tasks from the exam.

Each task has:

  • Average star rating
  • Count of missed instances

Passing the exam

If the Trainer has met your criteria for passing the exam, click the pass button.

This will grant the credentials earned in the exam.


Thanks for reading!

