Why Should You Outsource Data Labeling?

Nov 22, 2020

Polygon Labeling Sample of Galata Tower in Istanbul (prepared with Ango Hub)

Data labeling is a fundamental necessity for successful AI and machine learning applications. It is also, however, the step that consumes the most time and resources: Cognilytica reports that data preparation takes up 80% of the time dedicated to ML projects. The cost of annotating data keeps rising with the scale and complexity of the problems, and as ML advances, ever more annotated data is required not only for training but for validation too. As a result, data labeling is not only the most crucial step in any AI project, it's also the most expensive one.

So is it better to build data labeling tools in-house, or to purchase a solution from a third party? There is no single answer to this question: it depends on factors such as the complexity of the labeling task, the scale of the data, and so on. For this reason, we list some of the main costs of labeling data below, and leave the answer to you.

What are the Costs of Data Labeling?

Data Labeling Tool Development

Before starting to annotate your data, you'll need a labeling environment. For small and straightforward tasks, you can use various open-source tools such as Computer Vision Annotation Tool (CVAT) or Doccano. For more complex problems and larger datasets, however, professional tools designed around the problem at hand must be developed. For large-scale datasets, the labeling tool must also be able to distribute tasks to a large number of annotators. For example, in an autonomous driving project, huge amounts of data coming from different cameras and sensors must be merged inside the labeling tool to reconstruct a 3D map of the environment.
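To make that last point concrete, here is a minimal sketch of the kind of sensor fusion such a tool has to perform under the hood: projecting LiDAR points into a camera image with a standard pinhole camera model. The intrinsic matrix K and the LiDAR-to-camera transform are assumed to come from calibration; all names are illustrative, not the API of any particular tool.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_from_lidar):
    """Project LiDAR points (N, 3) into pixel coordinates.

    K:                3x3 camera intrinsic matrix (assumed known).
    T_cam_from_lidar: 4x4 rigid transform from LiDAR to camera frame.
    Returns the (M, 2) pixel coordinates of points in front of the
    camera, plus the boolean mask selecting those points.
    """
    # Homogenize and move the points into the camera coordinate frame.
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])      # (N, 4)
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]       # (N, 3)

    # Keep only points in front of the camera (positive depth).
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]

    # Pinhole projection: apply intrinsics, divide by depth.
    uv = (K @ pts_cam.T).T                                # (M, 3)
    uv = uv[:, :2] / uv[:, 2:3]                           # (M, 2) pixels
    return uv, in_front
```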

Ango Hub: 3D Point Cloud Data Labeling Tool

Besides its core functionality, a labeling tool must also be user-friendly, so that annotators can perform their tasks with as little friction as possible. This is especially crucial when the dataset is extremely large: a single extra second spent drawing each bounding box adds up to more than eleven days of work over a million boxes.

Recruiting the Human Workforce

After developing the labeling tool, you'll need to hire annotators. For most companies, recruiting even a single person is a heavy burden, and doing this at scale only makes the issue worse. Beyond the burden, there is also a cost factor associated with recruiting a human workforce. According to another Cognilytica report, performing data labeling in-house is five times more expensive than outsourcing it, mainly because the cost of an underutilized workforce is much higher internally.

Motivation is another issue entirely. Say you have recruited your workforce: keeping their motivation high is a challenge that, while not apparent initially, compounds in the longer run. It hinders not only the annotators' speed, but also the quality of their work and their job satisfaction.

Reaching Domain Experts

Many problems in AI/ML require advanced domain expertise. Tasks such as medical image diagnosis, and applications in banking, finance, law, insurance, and natural language processing, all need professionals from the respective field. Reaching out to these experts and employing their services adds an extra cost for companies annotating data.

Image by Anna Shvets.

Preparing Comprehensive Labeling Instructions

During the labeling process, you'll need to make sure the annotators are aligned with one another, so that the labels they produce are mutually consistent. For this reason, every detail must be explained comprehensively. This, however, brings an additional challenge in projects with large amounts of data: the number of edge cases is practically unbounded, which makes writing a truly comprehensive set of instructions up front unfeasible.
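One way to make instructions enforceable rather than merely aspirational is to encode the label ontology and the known edge-case rules in a machine-readable form that the labeling tool can surface to annotators. A hypothetical sketch (class names and rules are purely illustrative):

```python
# Illustrative label guide encoded as data, so every annotator resolves
# ambiguous cases the same way. All names and values are hypothetical.
LABEL_GUIDE = {
    "classes": ["car", "truck", "pedestrian", "cyclist"],
    "tool": "bounding_box",
    "rules": {
        "min_box_pixels": 10,           # skip objects smaller than 10 px
        "occlusion": "label_visible_part_only",
        "reflections": "do_not_label",  # e.g. cars mirrored in windows
        "groups": "label_individually", # no single box around crowds
    },
}
```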

Distributing Tasks to Annotators

After developing the labeling tool and recruiting annotators, you must distribute the data to the workforce to start the labeling process. As easy as this sounds, it is rarely straightforward. The abilities of annotators differ, so assigning the right task to the right annotator can become crucial. Data distribution also brings less obvious issues, such as sorting the data by label uncertainty and preventing the labeling of near-duplicate data instances.
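The "sorting by label uncertainty" mentioned above is often done active-learning style: a model trained on the data labeled so far scores the remaining items, and the most uncertain ones are queued for annotators first. A minimal sketch, using predictive entropy as the uncertainty measure:

```python
import numpy as np

def rank_by_uncertainty(probs):
    """Order unlabeled samples so the most uncertain come first.

    probs: (N, C) array of per-class probabilities produced by any
           model trained on the data labeled so far.
    Returns indices sorted by descending predictive entropy.
    """
    eps = 1e-12                                    # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)

# Example: item 1 (near-uniform prediction) is queued before item 0.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.34, 0.33, 0.33]])
print(rank_by_uncertainty(probs))  # -> [1 0]
```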

Evaluating the Performance of Annotators

Another considerable part of data labeling is evaluating the performance of the annotators. Evaluation makes it possible to assign the right tasks to the right annotators according to their skills, which reduces errors. It also matters for bias: human bias can seep into the data during annotation and remains a crucial issue in AI projects, and measuring annotator performance plays an important role in alleviating it.
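One standard, if simple, way to quantify this is inter-annotator agreement. Cohen's kappa, for instance, measures how much two annotators agree beyond what chance alone would produce; scikit-learn ships an implementation. The labels below are toy data for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Labels two annotators assigned to the same ten items (toy data).
annotator_a = ["car", "car", "truck", "car", "pedestrian",
               "car", "truck", "car", "car", "pedestrian"]
annotator_b = ["car", "truck", "truck", "car", "pedestrian",
               "car", "truck", "car", "truck", "pedestrian"]

# 1.0 means perfect agreement; 0.0 means no better than chance.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```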

Exploring Smarter Ways to Perform Data Labeling

Advances in AI and ML are coming at an unrelenting pace, and so are the opportunities to explore novel, smarter ways to annotate data. Taking advantage of them can significantly decrease the duration and cost of annotating large-scale data.

Quality

The quality of the training data is a crucial factor for highly accurate AI solutions. There are a couple of methods to make sure the labeling is of high quality. One is to have each data point (e.g. a single image, document, or video) annotated by multiple annotators, with the final decision made through a voting mechanism. Another is to have a separate set of annotators manually verify the labels after annotation. While these methods increase the burden of data labeling, they are among the most effective ways to ensure data quality.
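Here is a minimal sketch of the voting mechanism just described, assuming each item has been labeled by several annotators. The agreement threshold is an illustrative choice, not a fixed rule:

```python
from collections import Counter

def majority_vote(labels, min_agreement=2):
    """Resolve one item's label from several annotators' answers.

    Returns the winning label, or None if no label reaches
    `min_agreement` votes (the item then goes to manual review).
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

print(majority_vote(["cat", "cat", "dog"]))   # -> "cat"
print(majority_vote(["cat", "dog", "bird"]))  # -> None (escalate)
```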

There are other smarter and faster ways to control the quality of the data labeling. For example:

  • Identifying outlier cases by visualizing the data together with their labels
  • Applying anomaly detection techniques to fix mislabeled data points
  • Adding AI models as an additional voter in the voting mechanism

These are among the best techniques to increase the quality of data labeling in a “smart” way.
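As one concrete (and deliberately simple) version of the anomaly-detection idea above: train a model on the current labels, flag the items where the model contradicts the assigned label with high confidence, and route them back for review. A minimal sketch, with the confidence threshold as an illustrative parameter:

```python
import numpy as np

def flag_suspect_labels(probs, given_labels, threshold=0.9):
    """Flag items whose model prediction contradicts the assigned label.

    probs:        (N, C) predicted class probabilities.
    given_labels: (N,) integer labels from the annotators.
    Returns indices of items worth a second look.
    """
    predicted = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    suspect = (predicted != given_labels) & (confidence >= threshold)
    return np.where(suspect)[0]

probs = np.array([[0.97, 0.03],   # model is very sure this is class 0...
                  [0.55, 0.45]])
labels = np.array([1, 1])         # ...but the annotator said class 1
print(flag_suspect_labels(probs, labels))  # -> [0]
```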

Ango Hub: Built-In QA Pipeline

So, What Can We Do for You?

In this post, we've shown you some of the implications of performing data labeling in-house. Taken together, we believe the burden they place on a company is one of the things holding AI back.

Everyone should be able to benefit from AI. Deploying machine learning models should be as easy as deploying anything else. Yet, as we have seen, annotating training data is still an expensive, long, and tedious task. Our goal is to take it off your hands, so that you can focus on the things that matter.

If you need reliable, affordable data for your project, you can contact us for a demo, or just to say hi.

And if you want to be a part of this change, check out our careers page. We might have an open position for you.

Want to try it out?

Click on the link below to request a free, quick, and hassle-free demo of our data labeling service.


Join the List

Join our email list to hear about us and get access to exclusive content on AI development.