When measuring distances and dimensions, a machine vision system essentially looks for regions of high brightness contrast. Such a region in the image is an edge – a place whose pixel brightness differs from that of the surrounding pixels. An edge appears as a more or less steep rise or fall in the value of the image function of the digital image. At the point of steepest change, the first derivative of the function reaches its maximum and the second derivative passes through zero. The slope of the change is described by its gradient, a vector that gives the direction of the greatest growth of the function and the steepness of that growth.
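The behaviour of the two derivatives can be seen on a one-dimensional brightness profile. The sketch below (the profile values are purely illustrative) computes finite differences with numpy: the first difference peaks at the steepest step, and the second difference changes sign there.

```python
import numpy as np

# A smooth 1D edge profile: brightness ramps from dark (10) to bright (200).
profile = np.array([10, 10, 12, 30, 110, 180, 198, 200, 200], dtype=float)

# First derivative (finite differences): largest where the change is steepest.
d1 = np.diff(profile)
# Second derivative: changes sign (passes through zero) at the same place.
d2 = np.diff(profile, n=2)

edge_pos = int(np.argmax(np.abs(d1)))   # index of the steepest step
sign_change = d2[edge_pos - 1] > 0 and d2[edge_pos] < 0
```

Here `edge_pos` lands on the steepest part of the ramp, and `sign_change` confirms the zero crossing of the second derivative.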
Software tools with edge detection capability rely on several principles. The first is the use of convolution kernels approximating the first derivative of the image function. These kernels take many forms, from a simple 2×2 matrix to larger but more effective 3×3 operators such as Sobel’s or Robinson’s, which highlight the edges more robustly. More elaborate detectors, such as Canny’s, combine such gradient kernels with smoothing and further processing stages.
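The kernel-based approach can be sketched with the Sobel operator, one of the 3×3 kernels mentioned above. This minimal numpy version (convolution written out by hand, "valid" mode, no third-party image library assumed) computes the gradient magnitude of a small synthetic image with one vertical edge.

```python
import numpy as np

# Sobel kernels approximating the horizontal and vertical first derivative.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def convolve2d(img, kernel):
    """Valid-mode 2D convolution written out explicitly (no padding)."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_magnitude(img):
    gx = convolve2d(img, KX)              # response to vertical edges
    gy = convolve2d(img, KY)              # response to horizontal edges
    return np.hypot(gx, gy)               # gradient magnitude per pixel

# Illustrative test image: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)
```

The magnitude map is zero in the flat regions and responds only in the two columns straddling the brightness step, which is exactly the localisation behaviour the kernels are designed for.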
A common problem in edge detection is scale – the size of the convolution kernel should match the size of the details in the image. Noise sensitivity is related to this scale, and thus to the kernel size: small kernels respond more strongly to noise.
The last step in edge detection is always thresholding, which decides how strong a response counts as an edge. Set the threshold too low and both edges and noise are marked; set it too high and some important edges are removed. Usually so-called hysteresis thresholding is used, with two thresholds. Strong edge pixels are first found using the high threshold, and from these we continue labelling neighbouring pixels whose detector response exceeds the low threshold. This suppresses noise while keeping the edges continuous.
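The two-threshold scheme can be sketched as a flood fill from the strong pixels (the response values below are illustrative; 8-connectivity is assumed):

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Keep pixels above `high`, plus any pixels above `low` that are
    8-connected (directly or transitively) to a high-threshold pixel."""
    strong = mag >= high
    weak = mag >= low
    keep = strong.copy()
    q = deque(zip(*np.nonzero(strong)))   # seed the search with strong pixels
    h, w = mag.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not keep[ny, nx]:
                    keep[ny, nx] = True
                    q.append((ny, nx))
    return keep

# A strong seed (90) extends through weak neighbours (30, 25),
# while an isolated weak pixel (30, top right) is dropped as noise.
mag = np.array([[ 0, 30, 25,  0,  0],
                [ 0, 90,  0,  0, 30],
                [ 0,  0,  0,  0,  0]], dtype=float)
edges = hysteresis(mag, low=20, high=80)
```

Only the connected chain survives, which is how hysteresis keeps edges continuous while rejecting isolated weak responses.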
Edge detection based on gradient thresholding
As indicated by DZOptics, a simpler and faster procedure is to approximate the first derivative directly by computing the difference between each pixel and its neighbours. The found edges are then placed by thresholding the obtained differences. The disadvantage of this detector is that it repeatedly finds the same edge on adjacent pixels.
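A minimal sketch of such a difference detector follows, assuming forward differences to the right and lower neighbour and a single threshold (the image and threshold values are illustrative; this is not DZOptics’ implementation). On a gradual brightness ramp it also demonstrates the stated disadvantage: one physical edge is marked on two adjacent columns.

```python
import numpy as np

def difference_edges(img, threshold):
    """Mark a pixel as an edge where the absolute difference to its right
    or lower neighbour exceeds the threshold (a crude first derivative)."""
    dx = np.abs(np.diff(img, axis=1))     # horizontal differences
    dy = np.abs(np.diff(img, axis=0))     # vertical differences
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] |= dx > threshold       # right-neighbour response
    edges[:-1, :] |= dy > threshold       # lower-neighbour response
    return edges

# A two-step brightness ramp: 0 -> 100 -> 200 across one physical edge.
img = np.tile([0, 0, 100, 200, 200], (3, 1)).astype(float)
edges = difference_edges(img, threshold=50)
```

Each row ends up with two adjacent edge pixels for the single transition, the duplicated response the text warns about.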
Morphological edge detection
Morphological edge detection works on a completely different principle, based on morphological transformations of binary images. The steepness of the change of the image function is not evaluated here; instead, the method relies on a “mere” brightness threshold. For some cases, however, this way of detecting the contours of objects may be the best solution.
We can detect the inner or outer edges of thresholded objects, and the principle is very simple. First we need a well-thresholded binary image; here it is the thresholding algorithm that decides where, within the brightness gradients, the edges of the objects lie. Then, depending on whether we want the inner or the outer contours of the objects, we perform erosion or, conversely, dilation of the image. Finally, we apply a logical XOR between this modified image and the original. In this way we obtain exactly one-pixel-wide, continuous edge outlines of the thresholded objects.
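The erode/dilate-then-XOR procedure can be sketched in numpy alone, with a 3×3 square structuring element assumed for both operations (a filled square serves as the illustrative thresholded object):

```python
import numpy as np

def _shifted_windows(binary):
    """Yield the nine 3×3-neighbourhood shifts of a zero-padded image."""
    padded = np.pad(binary, 1, mode='constant', constant_values=False)
    h, w = binary.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            yield padded[dy:dy + h, dx:dx + w]

def erode(binary):
    """Erosion with a 3×3 square element: logical AND over the shifts."""
    out = np.ones_like(binary, dtype=bool)
    for win in _shifted_windows(binary):
        out &= win
    return out

def dilate(binary):
    """Dilation with a 3×3 square element: logical OR over the shifts."""
    out = np.zeros_like(binary, dtype=bool)
    for win in _shifted_windows(binary):
        out |= win
    return out

def inner_edge(binary):
    return binary ^ erode(binary)    # XOR of original with eroded image

def outer_edge(binary):
    return binary ^ dilate(binary)   # XOR of original with dilated image

# A 5×5 filled square inside a 7×7 binary image.
img = np.zeros((7, 7), dtype=bool)
img[1:6, 1:6] = True
```

`inner_edge(img)` yields the one-pixel outline just inside the square and `outer_edge(img)` the outline just outside it, matching the description above.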