Instance Segmentation

How to train YOLOv8 segmentation on a custom dataset

February 25, 2023
10 min read

YOLOv8 is the latest installment in the YOLO family, developed by Ultralytics, the team behind the YOLOv5 architecture. They not only set new highs on the COCO object detection benchmark but also streamlined their instance segmentation pipeline and really hit a home run. With its intuitive CLI, it is one of the most user-friendly instance segmentation implementations we have come across.

In this tutorial, we are going to train a YOLOv8 instance segmentation model on a custom dataset using the trainYOLO platform. As an example, we will develop a nucleus (instance) segmentation model, which can be used to count and analyze nuclei in microscopy images. Before you start, make sure you have a trainYOLO account. If not, you can create a free account here.

Create a new project

We start by creating a new project, which will hold our images and trained models, by clicking on the green ‘plus’ icon. We set the project visibility to Public, name it nuclei segmentation, choose Instance Segmentation as the annotation type, and add a single category, nucleus. If you have multiple categories, you can add more.

Upload your images

To upload the images to our project, we go to the data tab and click on the green ‘plus’ icon. We select the images we’d like to upload and click upload. In this example, we will upload 197 microscopy images from the BBBC039 dataset.

Label your images

Once we have uploaded the images, we start labeling. Clicking on an image opens the labeling editor. For labeling instances, we provide two tools: the paintbrush and the polygon tool. Depending on the use case, one may be preferred over the other. In the example below, you can see both the paintbrush and the polygon tool in action. You can adjust the paintbrush size by scrolling up or down. To confirm an instance, press the SPACE bar. Note that if you have multiple object categories, you need to select the right category in the drop-down list at the top before confirming. Once you have finished labeling an image, use the arrow keys at the top to go to the next image, or select a specific one in the carousel at the bottom.

Once we have labeled around 50 images, we train the first YOLOv8 segmentation model. This way, we can accelerate the labeling of the remaining images using model-assisted labeling.

Train YOLOv8 model 

Once you have labeled enough images, you can start training your YOLOv8 model. While you can train either locally or with cloud providers like AWS or GCP, we will use our preconfigured Google Colab notebooks. We go to the models tab and choose the YOLOv8 notebook by clicking on the green ‘plus’ icon.

This will open our preconfigured notebook for YOLOv8 instance segmentation. We fill in our API key (which you get by clicking on your avatar next to your username) and our project name. We leave the other parameters at their default values and start the training session by clicking “Runtime” => “Run all”. This will download your dataset, train a model, and upload the trained model to our platform.
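For context, the notebook expects the dataset in the standard YOLO segmentation layout, which Ultralytics describes with a small YAML file. It looks roughly like this (the paths and root folder here are illustrative, not the exact trainYOLO export):

```yaml
# Illustrative data.yaml for a one-class segmentation dataset.
# Actual paths are set by the dataset download in the notebook.
path: datasets/nuclei    # dataset root (hypothetical)
train: images/train      # training images (labels expected in labels/train)
val: images/val          # validation images
names:
  0: nucleus
```

Each label file then lists one instance per line: a class index followed by the normalized polygon coordinates of the mask.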

After training, you can see the first version added to the models tab. It has a mask mAP@.5 of 0.921 and a mask mAP@.5-.95 of 0.62.
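For reference, mask mAP@.5 counts a predicted mask as a true positive when its IoU with a ground-truth mask is at least 0.5, while mask mAP@.5-.95 averages precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05. A minimal mask-IoU computation looks like this (a NumPy sketch for intuition, not the Ultralytics evaluation code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

# Toy 4x4 example: masks share 2 pixels, union covers 4 pixels -> IoU = 0.5,
# so this prediction would just count as a match at the 0.5 threshold.
pred = np.zeros((4, 4), dtype=bool); pred[0, 0:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[0, 1:4] = True
print(mask_iou(pred, gt))
```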

Model-assisted labeling

We can now use our first trained model as a labeling assistant, using so-called model-assisted labeling. Each time you add a new model (version) to your project, this latest version will be used as the labeling assistant. Instead of labeling an image from scratch, you now only need to verify or correct the model predictions. This dramatically speeds up the labeling process. 

To use the labeling assistant, click on the magic paintbrush icon in the left menu. This will load the model predictions. Next, adapt or remove unwanted instances using the paintbrush or polygon tool. Note that you can also set the brush (and polygon) to erase mode.
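If you experiment with prediction-based pre-annotation outside the platform, a useful first step is dropping low-confidence predictions so annotators mostly confirm instances rather than delete spurious ones. A small sketch (the prediction dict format below is illustrative, not the trainYOLO or Ultralytics API):

```python
# Keep only confident predicted instances for annotator review.
# The dict format is a stand-in for whatever your inference code returns.
def filter_predictions(preds, conf_threshold=0.5):
    """Drop predictions below the confidence threshold, best first."""
    kept = [p for p in preds if p["conf"] >= conf_threshold]
    return sorted(kept, key=lambda p: p["conf"], reverse=True)

preds = [
    {"label": "nucleus", "conf": 0.34},  # likely spurious -> dropped
    {"label": "nucleus", "conf": 0.91},
    {"label": "nucleus", "conf": 0.78},
]
print(filter_predictions(preds))  # two instances, highest confidence first
```

Tuning the threshold trades correction work (too many false positives) against drawing work (too many missed instances).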

After labeling more images using the model-assisted labeling approach, we trained another version, this time on all 197 images. As can be seen below, the mask mAP@.5 increased from 0.921 to 0.939, and the mask mAP@.5-.95 increased from 0.62 to 0.65.

Next steps

Once you have labeled more images, you can train further versions and tune hyperparameters such as the image size and model type to optimize your model. Have fun training models!
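To keep such experiments organized, it helps to enumerate the combinations you want to try and launch one training run per combination. A minimal sketch (the model names are real Ultralytics segmentation checkpoints; the sweep itself is just an illustration, not a platform feature):

```python
from itertools import product

# Two of the knobs mentioned above: input image size and model variant.
image_sizes = [640, 1024]
models = ["yolov8n-seg.pt", "yolov8s-seg.pt"]  # nano and small checkpoints

runs = [{"imgsz": s, "model": m} for s, m in product(image_sizes, models)]
for run in runs:
    # Launch one training per combination, e.g. via the notebook or the
    # Ultralytics CLI: yolo segment train model=... imgsz=... (not run here)
    print(run)
```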
