Introduction
How does image resolution affect the statistical results of an image processing application?
On the left you can observe the same object (an ellipsoid, AR = 2) at different resolutions: the low-resolution version has a major axis of 12 pixels, while the high-resolution object is about 80 pixels. It seems evident (see the previous posts analysing synthetic 'rice' kernels at low and high resolution) that low resolution will generate problems such as overlapping and outliers, whereas high resolution will not. At this point we might think of powerful (and expensive) image acquisition hardware (typically HD cameras) to improve resolution and overcome this kind of problem, but… what is the minimum resolution that does not affect the results? This question is closely related to the idea of minimizing resources and, therefore, the amount of money to be invested in our laboratory.
The exercise proposed here addresses the image-resolution problem stated above. The steps to follow are:
- High-resolution image generation by means of Matlab's synthetic microstructure generator. You can use a high volume fraction of particles (up to 15%) in order to encourage outlier formation in the later low-resolution images (a sketch of this step follows the list)
- Experimental image database creation by downsampling the original image at several levels (using, for example, GIMP; a scripted alternative is sketched below)
- Object segmentation and feature extraction with ImageJ (a MATLAB stand-in is sketched below)
- Starting from the data files generated in step 3, perform outlier detection on each image in the database (steps 4 and 5 are sketched together below)
- Statistical analysis and display of the results (for example, a graph of resolution vs. number of outliers)
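As a sketch of step 1, the snippet below draws random ellipses with aspect ratio 2 and an 80-pixel major axis (matching the high-resolution object above) until a target area fraction is reached. This is not the generator from the previous posts: the canvas size, the semi-axes and the variable names are illustrative assumptions.

```matlab
% Sketch of step 1 (illustrative, not the original generator):
% accumulate random ellipses until the target area fraction is met.
imgSize        = 2000;   % high-resolution canvas (assumed)
targetFraction = 0.15;   % up to 15% volume (area) fraction
a = 40; b = 20;          % semi-axes in pixels: AR = 2, major axis = 80 px

img = false(imgSize);
[X, Y] = meshgrid(1:imgSize);

while nnz(img) / numel(img) < targetFraction
    cx = rand * imgSize;  cy = rand * imgSize;   % random centre
    th = rand * pi;                              % random orientation
    % rotated-ellipse equation evaluated over the whole grid
    Xr =  (X - cx) * cos(th) + (Y - cy) * sin(th);
    Yr = -(X - cx) * sin(th) + (Y - cy) * cos(th);
    img = img | ((Xr / a).^2 + (Yr / b).^2 <= 1);
end

imwrite(img, 'microstructure_hires.png');
```

Note that overlaps are deliberately not prevented: at a 15% fraction, touching particles are precisely what will later produce outliers.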
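Step 2 can be done interactively in GIMP, as suggested, or scripted; a minimal MATLAB sketch using imresize (Image Processing Toolbox) follows. The scale factors and file-naming scheme are assumptions; 0.15 roughly reproduces the 12-pixel major axis mentioned in the introduction (0.15 × 80 ≈ 12).

```matlab
% Sketch of step 2: build the image database by downsampling the
% high-resolution image at several assumed scale factors.
hires  = imread('microstructure_hires.png');
scales = [1 0.75 0.5 0.25 0.15 0.10];   % 0.15 -> ~12 px major axis

for k = 1:numel(scales)
    low = imresize(double(hires), scales(k), 'bilinear') > 0.5;
    imwrite(low, sprintf('microstructure_%03d.png', round(100 * scales(k))));
end
```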
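The exercise names ImageJ (e.g. its Analyze Particles command) for step 3. To keep all examples in one language, the sketch below shows a rough MATLAB stand-in based on regionprops; the chosen features and the output file name are assumptions, not part of the exercise.

```matlab
% Sketch of step 3 (MATLAB stand-in for ImageJ): segment the connected
% components of one database image and extract shape features.
bw    = imread('microstructure_015.png') > 0;
stats = regionprops(bw, 'Area', 'MajorAxisLength', ...
                        'MinorAxisLength', 'Eccentricity');

% one row per segmented object, one column per feature
features = [[stats.Area]' [stats.MajorAxisLength]' ...
            [stats.MinorAxisLength]' [stats.Eccentricity]'];
writematrix(features, 'features_015.csv');   % data file for step 4
```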
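For steps 4 and 5 the exercise does not prescribe an outlier criterion, so the sketch below applies a standard 1.5 × IQR rule to the measured aspect ratio of each object and plots the outlier count against the scale factor. The criterion and the feature used are assumed choices; quantile requires the Statistics and Machine Learning Toolbox, and the variables come from the sketches above.

```matlab
% Sketch of steps 4-5: count aspect-ratio outliers (1.5*IQR rule,
% an assumed criterion) per database image, then plot the trend.
nOutliers = zeros(size(scales));
for k = 1:numel(scales)
    bw = imread(sprintf('microstructure_%03d.png', ...
                        round(100 * scales(k)))) > 0;
    s  = regionprops(bw, 'MajorAxisLength', 'MinorAxisLength');
    ar = [s.MajorAxisLength] ./ [s.MinorAxisLength];   % aspect ratios
    q  = quantile(ar, [0.25 0.75]);
    w  = q(2) - q(1);                                  % interquartile range
    nOutliers(k) = nnz(ar < q(1) - 1.5 * w | ar > q(2) + 1.5 * w);
end

plot(scales, nOutliers, 'o-');
xlabel('Scale factor (relative resolution)');
ylabel('Number of outliers');
```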
Expected results
Through this experiment, the student should determine the lowest resolution level that still ensures a good feature extraction process; in terms of outlier generation, this means that no outliers have been created by the downsampling itself.
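As a worked example of this criterion, and assuming the variables from the sketches above, the answer can be read directly from the outlier counts:

```matlab
% Smallest scale factor whose outlier count matches the full-resolution
% baseline, i.e. the downsampling itself created no new outliers.
lowestSafe = min(scales(nOutliers == nOutliers(1)));
```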