A Color Moment CBIR with a Local Binary Pattern and an Oriented Gradient Histogram
In response to the demand for quick retrieval from huge image collections, Content-Based Image Retrieval (CBIR) has grown in prominence as a field of study. A CBIR technique that combines color, texture, and shape features is proposed in this paper. Color features are extracted by segmenting images into regions and computing color moments for each region. Gray-Level Co-occurrence Matrices (GLCMs) are used to assess texture. Five Fourier Descriptors are used to represent shape features. The 1000 images in 10 categories of the Corel-1k database are used to test the system, which is implemented in MATLAB. Performance is assessed using the precision and recall metrics. The results demonstrate improved retrieval precision in comparison with existing techniques for all ten image classes. Additional texture and color features may be investigated in future research depending on the application.
Introduction
Content-based image retrieval (CBIR) has gained prominence as a way to address the need for quick retrieval from large image databases [1]. This paper presents a CBIR method that combines color, texture, and shape features [1]. Color features are derived by segmenting images into regions and calculating color moments for each [1]. Texture is analyzed using Gray-Level Co-occurrence Matrices (GLCMs). Shape features are represented using Fourier Descriptors [3], [5]. The increasing bandwidth available for internet access will soon enable users to search for and navigate through video and image databases located at remote sites, so quick retrieval of images from extensive databases has become a critical research issue. The ideal properties for CBIR systems are low computational complexity and high retrieval efficiency. Images in traditional image databases carry text annotations and are retrieved through keyword searches.
This strategy has a few drawbacks: first, keyword-based image retrieval is inadequate because the content of an image cannot be described by a fixed set of words; second, keyword annotation is highly subjective. Content-based image retrieval (CBIR) is an alternative to manual annotation, in which images are organized by their visual features (color, texture, shape, etc.) and the required images are selected from a large database based on those characteristics.
A significant amount of research has been conducted to evaluate distance metrics, extract these low-level visual features, and find effective search strategies. Most CBIR systems operate similarly: a feature vector is computed for every image stored in the database, and the collection of feature vectors is indexed. At query time, a feature vector is extracted from the query image and compared to the feature vectors stored in the index. The key distinctions between systems lie in the features they extract and the techniques they use to compare feature vectors.
Literature Survey
The abundance of digital image data has made content-based image retrieval (CBIR) a popular area of study.
1. Rather than using keywords to index images, CBIR systems use visual qualities like color, texture, and shape.
2. Techniques like color histograms, color coherence vectors, and color moments are frequently used to extract color information.
3. Methods such as gray-level co-occurrence matrices (GLCMs) and filter banks are frequently utilized for texture analysis. Moment-based techniques and Fourier descriptors are two methods that can be used to represent shape characteristics. To enhance retrieval performance, several CBIR systems integrate several elements. The use of local binary patterns (LBP) and its variations for texture analysis has been investigated recently; the center-symmetric LBP (CS-LBP) has shown encouraging results.
4. Typically, metrics such as precision and recall are used to assess CBIR systems.
In a CBIR system, image retrieval depends on the extraction of visual features such as color, texture, and shape [1]. Different CBIR systems use quite different approaches: some methods use global color and texture features [2]–[4], while other strategies use local color and texture features. Compared with grayscale images, color information is richer, more discriminative, and more invariant to many natural variations [5]. A number of methods are available to extract color features from images.
The most common techniques for extracting color features are the color histogram, the color coherence vector, and the color correlogram. Color histograms are mostly used to capture the color distribution in images, but histogram techniques fail when two different images have the same histogram [5]. Color moments readily distinguish such color distributions. Because color moments also capture spatial information about the pixels, they are more valuable than histograms [6]. Color moments describe the distribution of color in an image through its first three orders: mean, standard deviation, and skewness.
The visual pattern used to identify an image's content is referred to as texture. By dividing the image into sub-blocks, Deepak, Tharani and others derived local texture features from images using the GLCM [7]. To accurately capture an image's textural qualities, texture-analysis computations employ filter banks or Gray-Level Co-occurrence Matrices (GLCMs), which can take several scales and directions into account. Finally, shape is one of the most important features in a CBIR system.
There are various shape representations and retrieval techniques available. Fuzzy C-means clustering, with Hu moments and LBP as alternatives, was developed by Neelima and Sreenivasa Reddy in [8] for image segmentation. The feature vectors used in this analytic work are based on the Fourier Descriptors of [9]; Chenyang Xu and J. L. Prince [10] proposed Gradient Vector Flow (GVF) fields for processing edge maps.
To enhance image-retrieval performance with better accuracy than the other available alternatives, in this work we employ color moments as an MPEG-7 color descriptor, the GLCM as a texture feature, and Fourier Descriptors [9] as shape features.
Numerous image datasets are available. For CBIR systems, 1,000 photographs are manually chosen from the Corel photo collection to build the Corel-1k database, also known as the WANG database, with 10 classes of 100 images each. When a query image is provided, the 99 remaining same-class images are regarded as relevant and the rest as irrelevant, since it is assumed that the user is attempting to retrieve same-class images. The database is available at the following URL: http://wang.ist.psu.edu/docs/related/.
MATLAB R2015A is used to implement the image-retrieval techniques outlined here. The 384 × 256 images in the Corel database are categorized as African people, beaches, buildings, buses, elephants, dinosaurs, flowers, mountains, horses, and food. Images of other sizes are resized to 384 × 256. Experiments retrieve the top 15 images, i.e., the 15 images most similar to the query are displayed to the user, as described in Fig. 1.
Fig. 1. Sample images for Corel-1k data.
Implementation
RGB and indexed images require extra computation time due to their high dimensionality. Therefore, as a preprocessing step, the 3-D RGB components must be transformed [11] into a 2-D representation with values between 0 and 255. Fig. 2 illustrates the method.
Fig. 2. (a) RGB to gray image with (b) histogram of the image.
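The preprocessing step above can be sketched as follows. This is a minimal Python/NumPy illustration (the paper's implementation is in MATLAB); the function name `rgb_to_gray` is illustrative, and the ITU-R BT.601 luminance weights used here are the standard ones applied by MATLAB's `rgb2gray`:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Collapse an H x W x 3 RGB array into a single 2-D grayscale
    channel with values in [0, 255], using ITU-R BT.601 weights."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    gray = np.rint(rgb.astype(np.float64) @ weights)
    return np.clip(gray, 0, 255).astype(np.uint8)
```

The result is a 2-D array on which the texture and shape descriptors below can operate directly.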
CS-LBP
Center-Symmetric Local Binary Pattern (CS-LBP) is a highly effective texture descriptor used in Content-Based Image Retrieval (CBIR) systems [4]. Key aspects of CS-LBP include:
1. Modification of LBP: CS-LBP is a variant of the standard Local Binary Pattern operator, designed to be more efficient and discriminative [4].
2. Reduced Feature Dimensionality: CS-LBP produces a more compact feature representation than standard LBP, making it computationally efficient [4].
3. Performance: Experimental results have shown that CS-LBP's performance is comparable to the popular SIFT descriptor while being 2 to 3 times quicker [4].
4. Applications: Because of its effectiveness and efficiency, CS-LBP has been widely employed in various applications, including image retrieval, pedestrian detection, and video copy detection.
5. Adaptability: CS-LBP has been extended into variants like pyramid CS-LBP for specific applications such as pedestrian detection.
CS-LBP incorporates the strengths of LBP and SIFT, making it a powerful tool for texture analysis in CBIR systems. The Local Binary Pattern (LBP) has drawn a lot of interest because of its theoretical and computational simplicity, its efficiency, and its suitability for texture analysis, gender categorization, face recognition, and image retrieval. The original version of the LBP operator considers only a pixel's eight neighbors; it was later extended into numerous variants. Heikkilä introduced the CS-LBP operator to adapt LBP for use as a region descriptor. Because it incorporates the beneficial aspects of both SIFT and LBP, the CS-LBP descriptor works well as a region descriptor.
In his comparison of SIFT with the CS-LBP descriptor for image matching, Heikkilä found that CS-LBP's performance is nearly as promising as that of the well-known SIFT descriptor while being two to three times faster. Owing to its effectiveness and efficiency, CS-LBP has been applied extensively in numerous fields. The CS-LBP descriptor has been utilized for the image retrieval task; for a pedestrian detection application, it was extracted from blocks within a detection window, and pyramid CS-LBP, an extension of the standard CS-LBP, was later created to identify pedestrians.
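The CS-LBP operator described above can be sketched in a few lines. This is a hedged Python/NumPy illustration, not the authors' code: instead of comparing each of the 8 neighbors with the center pixel (plain LBP, 256 codes), the 4 center-symmetric neighbor pairs are compared, yielding a 4-bit code and a compact 16-bin histogram:

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """CS-LBP over the 8-neighborhood (radius 1). Returns the per-pixel
    4-bit codes for the image interior and a 16-bin histogram."""
    img = image.astype(np.float64)
    # the four center-symmetric neighbor pairs around each pixel
    offsets = [((-1, 0), (1, 0)), ((-1, 1), (1, -1)),
               ((0, 1), (0, -1)), ((1, 1), (-1, -1))]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((r1, c1), (r2, c2)) in enumerate(offsets):
        a = img[1 + r1:h - 1 + r1, 1 + c1:w - 1 + c1]
        b = img[1 + r2:h - 1 + r2, 1 + c2:w - 1 + c2]
        # set this bit wherever the pair difference exceeds the threshold
        codes |= ((a - b) > threshold).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=16)
    return codes, hist
```

The 16-bin histogram (versus 256 bins for plain LBP) is what makes CS-LBP's feature dimensionality so much smaller.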
Video applications also make use of the CS-LBP descriptor. It has been used for dynamic background subtraction, and in the video copy detection task, where falsely returned videos were filtered using the CS-LBP descriptor. By combining color, texture, and shape features, we propose a new image signature. Color moments from the MPEG-7 descriptions were selected and adapted to extract the color features, while Fourier Descriptors are used to extract shape features and the GLCM for texture features. Comparing the proposed new signature of dimension 536 with several low-level feature sets now in use, the test results demonstrate that the new signature achieves a high precision rate.
Proposed Architecture
The architecture of a typical CBIR system consists of several key components [1]:
1. Feature Extraction: Images are processed to extract visual features such as color, texture, and shape [1], [3].
2. Feature Database: Extracted features are organized and stored in a database index for efficient retrieval [1].
3. Query Processing: When a query image is input, its features are extracted using the same methods as for the database images [1].
4. Similarity Measurement: The query image's features are compared to those in the database using distance metrics like the Euclidean distance [5].
5. Retrieval and Ranking: Similar images are retrieved and ranked based on their similarity scores [5].
6. User Interface: Presents the query results to the user, often showing the top-ranked images [11].
This architecture allows efficient searching of large image databases based on visual content rather than text annotations [1].
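Steps 4 and 5 above reduce to a nearest-neighbor ranking. A minimal Python sketch, assuming feature vectors are stored as rows of a NumPy array (the function name `retrieve_top_k` is illustrative):

```python
import numpy as np

def retrieve_top_k(query_vec, db_vecs, k=15):
    """Rank database images by Euclidean distance between feature
    vectors and return the indices of the k closest matches
    (the 'top 15' display used in the experiments)."""
    d = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(d)[:k]
```

The returned indices are then mapped back to the stored images for display in the user interface.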
Feature-Extraction
Feature extraction is a procedure consisting of one or more measurements, each of which quantifies some significant characteristic of an object. Feature extraction can be viewed as a special form of dimensionality reduction. In this process, color, texture, and shape information is extracted as features. Color moments, the GLCM, and Fourier Descriptors are the methods used to extract the color, texture, and shape features, respectively.
Color-Retrieval
The color feature is one of the most crucial elements in retrieving an image. An image's color is represented using well-known color spaces such as RGB, U*V*W, YUV, XYZ, YIQ, L*a*b, and HSV. Numerous methods have been proposed in the literature to retrieve images based on color, yet most of them are variations of a common basic scheme. In addition to several color descriptors that indicate the extent of each shade in the image, we extract the color moments (statistical properties) of each image and use them as part of the feature vector in the database.
Color-Moments
Color moments are an effective method for extracting color features in CBIR systems [6], [7]. The process involves:
1. Dividing the image into regions, achieved by splitting it into three equal non-overlapping horizontal sections [1].
2. For each region, calculating the first three moments of the color distribution within each color channel [1], [7]:
● First moment (mean): Represents the average color value
● Second moment (standard deviation): Measures color variance
● Third moment (skewness): Captures the asymmetry of the color distribution [7]
3. Storing these moments as a feature vector, resulting in 9 values per region for RGB images (3 moments × 3 channels) [7].
Color moments are invariant to scaling and rotation and can be calculated for any color model. Three color moments are computed per channel (e.g., nine moments if the color model is RGB and twelve moments if the color model is CMYK). Color moments are computed in the same manner as the moments of a probability distribution.
The color-moment approach is based on statistical methods and describes the color distribution by computing its moments. Since the color-distribution information is concentrated mainly in the lower-order moments, the first moment (mean), second moment (standard deviation), and third moment (skewness) are the commonly employed statistics to represent the image color distribution, as shown in Fig. 3.
Fig. 3. Color-moment feature-extraction.
The mathematical formulas are as follows, where $N$ is the number of pixels in the image and $f_{ij}$ is the value of the $i$-th color component of pixel $j$.

Mean, the average color value in the image:
$$\mu_i = \frac{1}{N}\sum_{j=1}^{N} f_{ij}$$

Standard deviation, the square root of the variance of the distribution:
$$\sigma_i = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(f_{ij} - \mu_i\right)^2}$$

Skewness, a measure of the degree of asymmetry in the distribution:
$$s_i = \sqrt[3]{\frac{1}{N}\sum_{j=1}^{N}\left(f_{ij} - \mu_i\right)^3}$$
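The three moments per channel can be sketched directly from their definitions. A minimal Python/NumPy illustration (not the paper's MATLAB code; `color_moments` is an illustrative name) that yields the 9-value feature vector for an RGB image, so splitting the image into three horizontal bands and concatenating would give 27 values:

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, standard deviation, skewness)
    per channel -> 9 values for an H x W x 3 RGB image."""
    feats = []
    for ch in range(image.shape[2]):
        x = image[..., ch].astype(np.float64).ravel()
        mean = x.mean()
        std = x.std()
        # skewness as the signed cube root of the third central moment
        third = ((x - mean) ** 3).mean()
        skew = np.sign(third) * abs(third) ** (1 / 3)
        feats.extend([mean, std, skew])
    return np.array(feats)
```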
Texture-Retrieval
Texture retrieval is a significant component of Content-Based Image Retrieval (CBIR) systems. It involves analyzing and extracting features that describe the visual patterns and surface characteristics of images. Key aspects of texture retrieval include:
1. Gray-Level Co-occurrence Matrix (GLCM): This is a common approach to texture analysis which examines the spatial relationships of pixel intensities [7], [9].
2. Statistical Measures: Features like energy, contrast, correlation, and homogeneity are often computed from the GLCM to quantify texture properties [9].
3. Filter Banks: Some systems use filter banks or wavelet transforms to capture texture information at different scales and orientations [3].
4. Local Binary Patterns (LBP): This texture descriptor and its variants, such as Center-Symmetric LBP (CS-LBP), have shown effectiveness in texture analysis [4].
5. Combination with Other Features: Texture features are often used in conjunction with color and shape features to improve retrieval accuracy [3]. Texture retrieval helps discriminate between images with similar color distributions but different surface patterns, enhancing the overall performance of CBIR systems [3], [7]. Texture is another essential feature that makes it easier to divide images into regions of interest and to describe those regions. Texture gives us information about the color intensities and their spatial arrangement in an image. Texture analysis uses a variety of mathematical operations to alter, compare, and revise textures in an attempt to establish a universal, inexpensive, and concise quantitative description of textures (such as rough, silky, smooth, or bumpy).
These techniques may differ in the way texture features are retrieved or represented. Texture analysis has four key application domains: shape from texture, texture synthesis, texture segmentation, and texture classification [5]. In this view, roughness is related to variations in intensity values, or gray levels. Texture analysis has been employed in a range of applications, such as remote sensing, automated inspection, and medical image processing. Texture analysis is significant when objects in an image are characterized more by their texture than by their intensity, and traditional thresholding methods cannot be used effectively [4].
Gray-Level Co-Occurrence-Matrix (GLCM)
The Gray-Level Co-occurrence Matrix (GLCM) is a widely used approach for texture analysis in Content-Based Image Retrieval (CBIR) systems [7], [9]. Key aspects of the GLCM include:
1. Definition: The GLCM is a statistical method that investigates the spatial relationship between image pixels [7].
2. Computation: It computes how frequently pairs of pixels with particular values occur in a given spatial relation to each other [7].
3. Directionality: GLCMs can be computed for different angles (typically 0°, 45°, 90°, 135°) to capture texture patterns in various orientations [7].
4. Texture Features: Common features extracted from the GLCM include energy, contrast, correlation, and homogeneity [9].
5. Normalization: The GLCM is often normalized to improve comparability between images [7].
6. Applications: The GLCM is used in various texture analysis tasks, including image segmentation, classification, and retrieval [9].
7. Effectiveness: The GLCM has proven to be an effective tool for capturing texture information, especially when combined with other features in CBIR systems [3], [7].
The GLCM provides a robust method for quantifying texture patterns, enhancing the ability of CBIR systems to distinguish between images with similar color distributions but different surface textures [3], [7], [9]. The GLCM, also referred to as the co-occurrence distribution, is a common second-order statistical method for texture analysis. An image consists of pixels, each with an intensity (a particular gray level); the GLCM records how often different combinations of gray levels co-occur within an image or image segment. The GLCM computation is explained with an example.
Texture feature calculations use the contents of the GLCM to measure the variation in intensity within the region of interest. The GLCM can be calculated for any angle or offset. In total, GLCMs are calculated for eight directions, each angle separated by 45° and corresponding to one of a pixel's eight neighbors. We thus obtain eight GLCM matrices per pixel offset; each entry counts how many times one gray level (e.g., 0) occurs together with another gray level in the given direction. Various texture measures are then computed on the normalized GLCM, as illustrated in Figs. 4a, 4b and 5.
Fig. 4. (a) GLCM Offset and (b) direction with Gray-level co-occurrence matrix.
Fig. 5. Symmetric GLCM.
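The counting step described above can be sketched as follows. This is a deliberately simple Python illustration of the co-occurrence count for one offset (the paper's implementation is in MATLAB; `glcm` is an illustrative name):

```python
import numpy as np

def glcm(image, dr, dc, levels):
    """Count how often a pixel with gray level i co-occurs with a
    pixel with gray level j at row/column offset (dr, dc)."""
    g = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                g[image[r, c], image[r2, c2]] += 1
    return g
```

Offsets (0, 1), (-1, 1), (-1, 0), and (-1, -1) correspond to the 0°, 45°, 90°, and 135° directions; the remaining four directions are obtained via the transpose when the matrix is symmetrized.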
Symmetric-GLCM
The Symmetric Gray-Level Co-occurrence Matrix (GLCM) represents an important modification of the standard GLCM used in texture analysis for Content-Based Image Retrieval (CBIR) systems [7]. Key aspects of the symmetric GLCM include:
1. Definition: A symmetric GLCM is created by adding the original GLCM to its transpose [7].
2. Relationship: It ensures that the relationship from pixel i to j is the same as from j to i [7].
3. Computation: To create a symmetric GLCM, take the transpose of the original GLCM and add it to the original [7].
4. Advantages: The symmetric GLCM provides a more comprehensive representation of texture patterns by considering both directions of the pixel relationships [7].
5. Applications: It is widely used in texture analysis for image retrieval, classification, and segmentation tasks [9].
The symmetric GLCM enhances the robustness of texture feature extraction, contributing to improved performance in CBIR systems [7], [9].
Steps for making GLCM Symmetric:
• Take the transpose of the GLCM matrix.
• Add the transposed copy to the GLCM itself.
This gives a symmetric matrix in which the relationship from i to j is indistinguishable from the relationship from j to i, as illustrated in Fig. 5.
Normalized-GLCM
To normalize the GLCM, divide every element of the symmetric GLCM by the sum of all its elements, as shown in Fig. 6.
Fig. 6. Normalized GLCM.
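The symmetrization and normalization steps above amount to one line each. A minimal Python sketch (illustrative, not the authors' code):

```python
import numpy as np

def symmetric_normalized_glcm(g):
    """Add the GLCM to its transpose (so counts i->j equal j->i),
    then divide by the total so all entries sum to 1."""
    s = g + g.T
    return s / s.sum()
```

The normalized matrix can be read as a joint probability of gray-level pairs, which is the form assumed by the Haralick features below.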
Features on Co-Occurrence Matrix
Co-occurrence matrices capture texture properties, but they do not offer a direct means for further analysis. Numeric features are therefore computed from the matrix, as shown in Fig. 7.
Fig. 7. GLCM overview.
The GLCM described by Haralick reveals certain attributes about the spatial distribution of the gray levels within the image: typically, how often a pixel with intensity value i occurs in a specific spatial relationship to a pixel with value j, as shown in Fig. 8.
Fig. 8. The extraction of Co-occurrence Matrices from the input image.
In the GLCM, each entry p(i, j) is simply the count of the number of times that a pixel with value i occurred in the specified spatial relationship to a pixel with value j. Four GLCM texture features are commonly used: Energy, Contrast, Correlation, and Homogeneity.
Energy is a texture measure of a grayscale image that reflects the uniformity of the gray-level distribution; it is large when the image texture is homogeneous.
Contrast is the moment of inertia about the diagonal of the matrix; it measures how the values of the matrix are distributed and the amount of local variation in the image, reflecting image clarity and the depth of texture shadows.
Correlation measures the randomness of the image texture. When all values of the co-occurrence matrix are equal, it attains its minimum value; conversely, if the co-occurrence matrix values are uneven, its value is larger.
Homogeneity, also known as the Inverse Difference Moment, measures image homogeneity, taking larger values for smaller gray-level differences within pixel pairs. It is more sensitive to the presence of near-diagonal elements in the GLCM and attains its maximum when all elements in the image are equal. Homogeneity decreases as contrast increases while energy remains constant.
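The four measures just described can be computed from a normalized GLCM with the standard Haralick formulas. A hedged Python sketch (illustrative, not the authors' code; a small epsilon guards the correlation denominator):

```python
import numpy as np

def haralick_features(p):
    """Energy, contrast, correlation and homogeneity from a
    normalized GLCM p (entries sum to 1)."""
    n = p.shape[0]
    i, j = np.indices((n, n))
    energy = np.sum(p ** 2)                        # sum of squared entries
    contrast = np.sum(p * (i - j) ** 2)            # weight by squared distance from diagonal
    mu_i = np.sum(i * p)
    mu_j = np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    corr = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return energy, contrast, corr, homogeneity
```

For a GLCM whose mass lies entirely on the diagonal (a perfectly uniform neighborhood relation), contrast is 0 and homogeneity is 1, matching the descriptions above.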
Shape-Retrieval
Shape retrieval is a key component of Content-Based Image Retrieval (CBIR) systems. Key aspects of shape retrieval include:
1. Importance: Shape plays a crucial role in describing image contents for CBIR systems [5].
2. Representation: 2D shapes can be represented in two main ways: external (boundary-based) and internal (region-based) representations [5].
3. Desirable Properties: A good shape representation should be invariant, robust, and easy to derive and match [5].
4. Techniques: Various techniques exist for shape retrieval, but some methods struggle to represent shapes adequately and make matching difficult [5].
5. Fourier Descriptors: One effective solution is Fourier Descriptors (FDs), which can be made invariant to translation, rotation, scale, and starting point [5].
6. Advantages of FDs: Fourier Descriptors can retain basic information about the shape of image segments and achieve good representation and normalization of shapes [5].
7. Shape retrieval enhances the ability of CBIR systems to distinguish between images with similar color and texture but different object shapes, improving overall retrieval accuracy [5].
Shape plays a crucial role in describing image contents, and for CBIR purposes a shape representation should be robust, invariant, and simple to derive and match. Two-dimensional shapes may be represented in two alternative ways: external and internal representation. The boundary-based representation is called the external representation and is classified into two categories, transform and spatial; for example, the Gabor filter and Gaussian derivatives fall into this class. The region-based representation is also known as the internal representation. A few techniques exist for shape-based retrieval, but these methods do not represent shape well, and matching with them is difficult. Fourier Descriptors (FDs), which are invariant to translation, rotation, scale, and starting point, are one solution for retrieving images based on the shape of image segments. Basic information about the shape of an image segment can be retained using Fourier Descriptors, and effective representation and efficient normalization of shapes can be achieved with their help [12].
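The invariances claimed for Fourier Descriptors follow from simple normalizations of the boundary's FFT. A hedged Python sketch (illustrative, not the paper's MATLAB code), where a closed boundary is given as complex points x + iy: dropping the DC term removes translation, dividing by |F[1]| removes scale, and keeping only magnitudes removes rotation and starting-point dependence:

```python
import numpy as np

def fourier_descriptors(boundary, n_descriptors=5):
    """Translation-, rotation-, scale- and start-point-invariant shape
    descriptors from a closed boundary of complex points x + 1j*y."""
    f = np.fft.fft(boundary)
    mags = np.abs(f)                 # magnitudes: rotation/start-point invariant
    # skip F[0] (translation) and normalize by |F[1]| (scale)
    return mags[2:2 + n_descriptors] / (mags[1] + 1e-12)
```

Scaling and translating a boundary leaves these descriptors unchanged, which is what makes them usable for matching shapes across images.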
Results and Discussion
The operation of our application is explained with an example: we take a test image that is present in our database, as in Fig. 9. Applying the feature-extraction algorithm described above, the corresponding top 15 retrieved images are shown in Fig. 10.
Fig. 9. Test image.
Fig. 10. Matched retrieved image for query image.
Evaluation-Metric
Performance is evaluated to quantify the effectiveness of the proposed technique in comparison with existing methods. The proposed method returns the most similar images from the database based on the score computed by the Euclidean distance metric. The performance of this system is evaluated by precision and recall.
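Precision and recall for one query reduce to two ratios. A minimal Python sketch (illustrative names; under the Corel-1k protocol there are 99 relevant same-class images per query):

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / number retrieved;
    recall = relevant retrieved / total number relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```

With a top-15 display, even a perfect retrieval yields recall of at most 15/99, so the two metrics are reported separately.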
Comparison Based on Precision with Previous Methods
To validate the proposed approach, Fig. 11 shows the comparison of the proposed image-retrieval strategy, based on precision, with previous CBIR techniques. Since the Corel database consists of 10 classes, the precision achieved for each class is used to compare the performance of the proposed image-retrieval approach with previous approaches. The proposed approach yields better results for almost all of the 10 classes. Overall, the proposed image-retrieval method using combined color descriptors, shape, and texture features is superior to the four other techniques.
Fig. 11. Comparison of precision Graph with State-of-the-art System.
Comparison Based on Recall with Previous Methods
To further validate the performance of the proposed image-retrieval method, recall is calculated and compared with the previous approaches. The overall performance of the proposed image-retrieval method using color moments, shape, and texture features is superior to the other four strategies. The predicted query image belongs to the Flowers class (Class 7) with accuracy = 80.4%, as shown in Fig. 12.
Fig. 12. Comparison of recall graph with State-of-the-art System.
Conclusion
This work proposed a new signature to represent an image as a feature vector, which enhanced the performance of the CBIR system. The proposed image-retrieval method retrieves similar images from a database of digital images using visual features such as texture, color, and shape. These features are extracted using color moments as the color descriptor, the GLCM for texture, and Fourier Descriptors for shape, respectively, and the Corel DB is employed to create a dataset with 10 classes of images.
This algorithm has better performance than others in retrieving all 10 categories of images in the Corel database of 1000 images. Future work can be carried out with Gray-Level Co-occurrence matrices at different angles and distances, with different features, and with color descriptors such as CLD and Color Autocorrelogram features in the HSV color space.
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
[1] Datta R, Joshi D, Li J, Wang JZ. Image retrieval: ideas, influences, and trends of the new age. ACM Comput Surv. 2008;40(2):1–60.
[2] Gudivada VN, Raghavan VV. Content based image retrieval systems. IEEE Comput. 1995;28(9):18–22.
[3] Manjunath BS, Ma WY. Texture features for browsing and retrieval of image data. IEEE Trans Pattern Anal Mach Intell. 1996;18(8):837–42.
[4] Rui Y, Huang TS, Ortega M, Mehrotra S. Relevance feedback: a power tool for interactive content-based image retrieval. IEEE Trans Circuits Syst Video Technol. 1998;8(5):644–55.
[5] Swets D, Weng J. Hierarchical discriminant analysis for image retrieval. IEEE Trans Pattern Anal Mach Intell. 1999;21(5):386–400.
[6] Zhang H, Zhong D. A scheme for visual feature-based image retrieval. Proceedings of the SPIE Storage and Retrieval for Image and Video Database, 1995.
[7] Smeulders AWM, Worring M, Santini S, Gupta A, Jain R. Content-based image retrieval at the end of the early years. IEEE Trans Pattern Anal Mach Intell. 2000;22(12):1349–80.
[8] Choras R. Content-based image retrieval using color, texture, and shape information. In Progress in Pattern Recognition, Speech and Image Analysis. Ruiz-Shulcloper J, Kropatsch WG, Eds. Heidelberg: Springer, 2003.
[9] Haralick RM, Shanmugam K. A theoretical comparison of texture algorithms. IEEE Trans Pattern Anal Mach Intell. 1980;2:204–22.
[10] Howarth P, Rüger S. Evaluation of texture features for content based image retrieval. In Image and Video Retrieval. Enser P, et al., Eds. Berlin: Springer, 2004. pp. 326–34.
[11] Gonzalez RC, Woods RE. Digital Image Processing. 4th ed. Pearson Education; 2018.
[12] Tuceryan M, Jain AK. Texture analysis. In The Handbook of Pattern Recognition and Computer Vision. Chen CH, Pau LF, Wang PSP, Eds. 2nd ed. World Scientific; 1998, pp. 207–48.





