William Grimes
2016-02-23 16:04:38 UTC
Hi,
I'm trying to use Python and the Mahotas library for image processing;
specifically, I am porting a script written for ImageJ to Python.
A critical component of this involves image segmentation by Bernsen local
thresholding. The ImageJ documentation is here:
http://fiji.sc/Auto_Local_Threshold
and the ImageJ macro call is:
run("Auto Local Threshold", "method=Bernsen radius=5 parameter_1=15
parameter_2=0 white"); // stack
I want to implement exactly the same thing in Python, and came across the
corresponding function in Mahotas. I use a radius of 5 and a contrast
threshold of 15 in Python.
I apply the same local thresholding in mahotas as below (note that the
braces around 128 in the docstring just mark the default value, so the
argument should be passed as a plain integer):
binary = mahotas.thresholding.bernsen(image, 5, 15, gthresh=128)
However, the results differ; the difference is easiest to see in the
segmentation contour images below.
What I think is happening in the thresholding: for a pixel that passes the
test, mahotas marks every pixel in its neighbourhood as foreground, whereas
ImageJ marks only that centre pixel and then moves on to the next pixel's
neighbourhood. If that makes sense.
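For what it's worth, here is a sketch of the per-pixel behaviour I believe ImageJ has, written with numpy and scipy.ndimage. The function name and the low-contrast fallback to a global threshold are my reading of the Bernsen method as described in the Fiji docs, not something I have verified against the Java source:

```python
import numpy as np
from scipy import ndimage

def bernsen_per_pixel(image, radius=5, contrast_threshold=15, gthresh=128):
    """Per-pixel Bernsen thresholding: each output pixel is decided
    only from its own circular neighbourhood (my reading of the
    ImageJ/Fiji behaviour; assumption, not verified)."""
    # Circular structuring element of the given radius.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    footprint = x * x + y * y <= radius * radius

    local_max = ndimage.maximum_filter(image, footprint=footprint)
    local_min = ndimage.minimum_filter(image, footprint=footprint)
    contrast = local_max.astype(np.int32) - local_min
    midgray = (local_max.astype(np.int32) + local_min) // 2

    # High-contrast pixels: threshold at the local mid-gray.
    # Low-contrast pixels: fall back to the global threshold gthresh.
    return np.where(contrast >= contrast_threshold,
                    image >= midgray,
                    midgray >= gthresh)
```

Because only the centre pixel of each window is written, this should avoid the dilated-looking contours I am seeing from the mahotas result.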
Any help replicating the exact ImageJ method would be much appreciated; I
don't want the bloated contours I am getting from Mahotas.
Many Thanks,
Will
ImageJ
[image attachment]
Mahotas
[image attachment]
--
You received this message because you are subscribed to the Google Groups "pythonvision" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pythonvision+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.