Best way to create a readable image segmentation dataset using NumPy

I want to segment a bunch of images and therefore have to train a classifier. For this I need to create the ground truth for my images. So far I've done it like this:

  • Segment the image in GIMP with the Intelligent Scissors tool
  • Fill the three different segments of the image with pure green / red / blue
  • Load the image as an array with NumPy
  • Since there is a color transition at the edges between two classes, I use clustering to assign the pixels to the three classes (see the sketch after this list)
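A minimal sketch of the last two steps, assuming the GIMP mask is saved as an RGB image and the three classes are coded as pure red, green and blue (the file name, the exact reference colors and the function name are my assumptions). Instead of a full clustering step, each pixel is simply assigned to the nearest reference color, which handles the anti-aliased transition pixels at the edges:

    # Map an RGB-coded mask to integer class labels by nearest reference color.
    import numpy as np
    from PIL import Image

    # Assumed reference colors for the three classes
    CLASS_COLORS = np.array([
        [255, 0, 0],   # class 0: red
        [0, 255, 0],   # class 1: green
        [0, 0, 255],   # class 2: blue
    ], dtype=np.float64)

    def mask_to_labels(path):
        """Load a color-coded mask and assign each pixel to the nearest class color."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)  # (H, W, 3)
        # Squared distance of every pixel to every reference color -> (H, W, n_classes)
        dist = ((rgb[..., None, :] - CLASS_COLORS[None, None, :, :]) ** 2).sum(axis=-1)
        # Edge pixels that blend two colors get the closest of the three classes
        return dist.argmin(axis=-1).astype(np.uint8)

    labels = mask_to_labels("mask.png")          # hypothetical file name
    print(np.bincount(labels.ravel(), minlength=3))  # pixel count per class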

Is there a better / more direct way to do this? (Note: I don't want to use a ready-made dataset, I want to create my own.)
