Air and water pollution and the problem of forgery make oil paintings difficult to identify. Air and water pollution can cause moisture, mold, and even water stains on the picture, seriously damaging its integrity and color performance; chemicals in the water may also corrode the oil painting, further destroying its color and detail. Appraisal that relies entirely on the conventional experience of experts is too subjective, and for some controversial works it is difficult to produce convincing, rational identification evidence, so a scientific and effective method for quantifying the authenticity of oil paintings is needed. Based on artistic style analysis and the extraction of shape, color and texture features, this paper constructs an oil painting authenticity identification method based on multi-feature fusion. The recognition accuracy of the proposed method is compared with that of existing neural networks. The results show that the recognition rate of the proposed model reaches 73.0%, the best performance among the compared methods.

  • This paper constructs an oil painting authenticity identification method based on multi-feature fusion.

  • Simulation analysis is carried out on Giorgio Morandi's still life oil painting and other painters’ copied and forged works, and the performance of the proposed method is verified.

  • The results show that the algorithm can accurately give the authenticity identification results.

Oil painting is an important kind of painting with a history of hundreds of years. Its methods differ from those of other kinds of painting, and it has its own special charm. In recent years, with the development of global integration, oil painting has become more international and diversified. For various reasons, a large number of forgeries exist among traditional painting works, and even more forgeries exist among the works of famous painters in history, so the authentication of oil paintings has become a very difficult problem. Traditional appraisal relies entirely on the conventional experience of experts, which is too subjective, often provokes disputes, and cannot convince everyone. In this situation, it is of great practical significance to put forward an objective, scientific and intelligent method of oil painting appraisal.

For image enhancement and intelligent recognition of oil paintings, experts and scholars at home and abroad have carried out extensive research. Seow & Asari (2005) adopt a synchronous filtering method to process image frequency and represent any image as the product of an illuminance component and a reflection component; by reducing the illuminance component and increasing the reflection component, image visibility is improved, further improving the fog removal performance of homomorphic filtering. Ledig et al. (2017) put forward the SRGAN neural network, which realizes super-resolution image restoration based on a perceptual loss optimization algorithm. Kupyn et al. (2018) first applied the Generative Adversarial Network (GAN) structure in the field of deblurring: the generator restores the blurry image, the discriminator distinguishes clear images from blurry ones, and the two are trained against each other to remove the blur caused by camera motion.

In terms of intelligent recognition of oil paintings, Tan et al. (2016) used real-time spectral imaging technology and proposed a simulation platform to explore and evaluate the performance of a System-on-a-Chip (SoC) architecture for the identification of oil paintings. The simulation platform is based on a Network-on-Chip (NoC) and a parametric evaluation module; the communication structure of the SoC was simulated on an FPGA, and its time performance was compared with the simulation results. A study by Su (2020) adopts an oil painting feature fusion method based on intelligent vision, fusing the color and shape features of the oil painting and identifying its authenticity by calculating the difference coefficient and difference threshold of the features; the research results verify that the identification accuracy of this method is higher than 95%. Pascale Marthine Tayou created a Plastic Tree at Art Basel, Switzerland. This work combines architecture, natural elements and artificial elements; its theme is the strange mutations related to radiation and genetic modification, reflecting the impact of human social activities on nature.

The above literature on image enhancement and intelligent oil painting recognition shows that most research on image enhancement adopts methods such as defogging and filtering. However, because the degree of scene degradation is not linearly related to scene depth, such methods are difficult to apply widely. In oil painting authenticity identification, the accuracy of spectral imaging technology is not high enough, and the feature fusion method places high demands on the application conditions, so both are difficult to use widely. Based on this, this paper first analyzes the artistic style of oil painting shape, color and texture, and on this basis proposes an oil painting authenticity identification algorithm based on multi-feature fusion and an artificial neural network. The key contributions of this paper are as follows:

  • (1)

    The main outline of oil painting is obtained by introducing HIS spatial multi-scale and multi-structure element color morphology edge detection technology. On this basis, effective region segmentation is used to extract the artistic style of oil painting. The algorithm for extracting the artistic style of oil painting is given.

  • (2)

    The HSV model is used to extract the color features of the oil painting, and the gray-level co-occurrence matrix is used to extract the texture features of the oil painting.

  • (3)

    Based on the extraction of the shape, color and texture features of the oil painting, Pixel-level, Feature-level and Decision-level model frames are used to integrate the features; then the identification method for the oil painting image is presented.

  • (4)

    A simulation analysis of Giorgio Morandi's still life oil painting and other painters' copies and forgeries is carried out to verify the performance of the identification method presented in this paper.

The main research contents of this paper are as follows. In view of the impact of air and water pollution on the identification of oil paintings, this paper first introduces the calculation method of the shape artistic style of oil painting based on HIS space and gives the artistic style extraction model. It then introduces the extraction methods for the texture and color features of oil painting. On this basis, an oil painting recognition method based on the multi-feature fusion of artistic shape, color and texture is constructed. Finally, taking Giorgio Morandi's still life oil paintings as an example, the performance of the identification method presented in this paper is verified.

The works of artists often contain their own unique forms of creation, such as creation concept, composition style and brushwork, and this characteristic exists in all art treasures (Jaafar et al. 2021). This paper introduces HIS space multi-scale, multi-structure element color morphology edge detection technology to obtain the main outline of an oil painting. On this basis, effective region segmentation is used to extract the artistic style of the oil painting.

HIS space multi-scale and multi-structure element morphology edge detection

Conversion of the RGB model to the HSL model

In HIS color space, traditional edge detection algorithms are prone to breakpoints, missing edges and other defects. To detect edges better, morphology is used in digital image processing technology (Jabez et al. 2020). An RGB image can be represented by the quantities H, S and L computed from R, G and B, with the following expressions:
formula
(1)
formula
(2)
formula
(3)
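The conversion expressions above do not survive in the extracted text. As a rough illustration only, a minimal Python sketch of one common geometric RGB-to-HSI conversion (an assumption; the paper's exact variant may differ) is:

```python
import math

def rgb_to_hsi(r, g, b):
    """One common geometric RGB-to-HSI conversion (an assumption; the
    paper's exact formulas are not shown). Inputs r, g, b lie in [0, 1];
    returns (H in degrees, S in [0, 1], I in [0, 1])."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    # Hue from the standard arccos formula; set to 0 when undefined.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

For example, pure red maps to hue 0 with full saturation, matching the usual HSI convention.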

Multi-structure element color morphology operator definition

Let V(x) be an image in RGB color space, V(x) = (R(x), G(x), B(x)), where R(x), G(x) and B(x) are all greater than or equal to zero and x ranges over the domain of V(x). Let U(x) be the HSL color image corresponding to V(x), U(x) = (H(x), S(x), L(x)), where H(x), S(x) and L(x) are all greater than or equal to zero. The basic morphological operations with the structural element S are defined as follows:
formula
(4)
formula
(5)

Determining the structural elements

Structural elements of different scales have different sizes but the same shape; they form a family of similar sets. Construct structural elements B1, B2, …, Bn and S1, S2, …, Sn at different scales, with B1 = [0 1 0; 1 1 1; 0 1 0] and S1 = [1 0 1; 0 1 0; 1 0 1]; the larger-scale elements are then obtained by dilating these basic structural elements.

Multi-scale, multi-structure elements are used to extract the edges of the picture, which better suppresses noise and yields a good result; the per-scale results are then multiplied by different weight values and combined to obtain the image boundary (Jaber et al. 2022).

Under scale i, edge detection is carried out with Bi and Si, and the result is given by:
formula
(6)
represents the result obtained with Bi and Si; the information entropy of each result is then computed as:
H = -\sum_{j=0}^{L-1} p_j \log p_j
(7)

In the formula, p_j is the proportion of gray level j in the edge image, whose grayscale range is [0, L−1]. Information entropy represents the amount of content contained in an image and can also represent the probabilities of different edges in the image.
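The entropy-based weighting described above can be sketched as follows, assuming per-scale edge images with integer gray levels; the log base and the normalization of the weights are choices of this sketch, not specified by the paper:

```python
import numpy as np

def edge_entropy(edge_img, levels=256):
    """Information entropy of an edge image with gray levels 0..levels-1:
    H = -sum_j p_j * log2(p_j), where p_j is the proportion of level j."""
    hist = np.bincount(edge_img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

def entropy_weights(edge_imgs):
    """Adaptive per-scale weights, proportional to each edge image's entropy."""
    ents = np.array([edge_entropy(e) for e in edge_imgs])
    return ents / ents.sum()
```

A two-level image split evenly between levels 0 and 1 has entropy exactly 1 bit, and identical scales receive equal weights.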

Algorithm

  • (1)

    The image is converted from RGB to HIS and decomposed into its three channels H, I and S.

  • (2)

    For the H channel, when i = 1, 2, 3, the edge is calculated with the dilation–erosion edge detection operator built from Bi and Si.

  • (3)

    In the same way, the I channel and S channel perform the same operation as the H channel.

  • (4)

    The detection results of the I channel, S channel and H channel are fused to determine the edge shape of the final image.

If an edge is affected by different kinds of noise, the results obtained at different scales will differ (Kadyan et al. 2021). The weights are therefore chosen according to how much information the edges at different scales contain, which makes the method adaptive.
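The per-channel, per-scale edge computation in the steps above can be sketched with a simple flat-structuring-element dilation-minus-erosion operator; this is a minimal illustration under stated assumptions (edge padding at the borders), not the paper's exact operator:

```python
import numpy as np

def _morph(img, se, reduce_fn):
    """Flat-structuring-element max/min filter (dilation with max, erosion
    with min), using edge padding at the image borders."""
    k = se.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    offs = [(i, j) for i in range(se.shape[0])
            for j in range(se.shape[1]) if se[i, j]]
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = reduce_fn(padded[r + i, c + j] for i, j in offs)
    return out

def morph_edge(img, se):
    """Dilation-minus-erosion edge strength at one scale."""
    return _morph(img, se, max) - _morph(img, se, min)

# Structural elements B1 and S1 as given in the text.
B1 = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
S1 = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])
```

On a step edge, the response is nonzero exactly on the two columns adjacent to the intensity jump.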

Artistic style feature extraction

After edge detection, the image region must also be segmented. The method chosen in this paper is the fixed-module method, which is easy to operate and performs well experimentally. The module size selected in this paper is 64 × 64 pixels: this size keeps the time complexity low while preserving the details of the strokes, so the modules can be used to find the parts that represent the characteristics of the artist's painting:
formula
(8)
formula
(9)

In the formula, 0 ≤ k < N, where N is the number of modules; the step is the distance moved by the modules across the image, in the range [1, 64].
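A minimal sketch of the fixed-module segmentation, assuming a NumPy image, the 64 × 64 module of the paper, and a configurable step:

```python
import numpy as np

def split_modules(img, size=64, step=64):
    """Split an image into size x size modules, moving `step` pixels
    (step in [1, size]) between module positions."""
    h, w = img.shape[:2]
    modules = []
    for top in range(0, h - size + 1, step):
        for left in range(0, w - size + 1, step):
            modules.append(img[top:top + size, left:left + size])
    return modules
```

A 128 × 128 image yields four non-overlapping 64 × 64 modules at the default step, and more, overlapping modules at smaller steps.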

The angle between pen segments

To find the angle between two pen segments, they must first be fitted. Available methods include endpoint-line fitting, endpoint-line translation fitting and least-squares fitting. First the two adjacent strokes are each fitted with a straight line, and then the slopes of the fitted lines are found. The method used in this article is the least-squares method: the data points (xi, yi) are fitted with the line y = b + ax by finding a and b such that:
\sum_{i=1}^{n} \left( y_i - (b + a x_i) \right)^2 = \min
(10)

After a and b are found, y = b + ax is the fitted line. Once both strokes are fitted, their slopes are substituted into the angle formula, and the value obtained is the required included angle.
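The least-squares fit and the included-angle computation can be sketched as follows; the angle formula tan θ = |(k1 − k2)/(1 + k1·k2)|, standard for two straight lines, is assumed here since the paper's formula is not reproduced:

```python
import math

def fit_line(points):
    """Least-squares fit of y = b + a*x; returns (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def included_angle(stroke1, stroke2):
    """Angle in degrees between the two fitted stroke lines, using
    tan(theta) = |(k1 - k2) / (1 + k1*k2)| (undefined for perpendicular
    lines, where 1 + k1*k2 == 0)."""
    a1, _ = fit_line(stroke1)
    a2, _ = fit_line(stroke2)
    return math.degrees(math.atan(abs((a1 - a2) / (1 + a1 * a2))))
```

A horizontal stroke and a stroke of slope 1 give an included angle of 45 degrees.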

Slope

The slope K indicates the general direction of the writer's stroke and is calculated from the relative positions of the two endpoints of a stroke (Kasimov et al. 2023). Every stroke has an initial point and an endpoint; connecting them gives a straight line, and the general direction of the author's pen is obtained by calculating the slope of that line. If the initial point is (xa, ya) and the endpoint is (xb, yb), the formula for calculating the slope is as follows:
K = \frac{y_b - y_a}{x_b - x_a}
(11)
formula
(12)

In the formula, the quantity defined is the similarity distance between point pji and point p.

Extract geometric features

The size of a shape is often expressed in terms of perimeter and area. These two values are defined for binary pictures: if a pixel is part of an object, then f(i,j) = 1; for all non-object or background pixels, f(i,j) = 0. By calculating the area and perimeter of any module and comparing the two values, the properties the module represents are obtained (Kaur et al. 2021). The area S of each object in the image is the number of pixels in the region with f(i,j) = 1. The perimeter L of a module is obtained by starting from any boundary pixel, tracing along the boundary to the end, and summing the lengths of all the pixels traversed.

Round shapes can be characterized by the circularity R0, which is calculated by:
R_0 = \frac{4\pi S}{L^2}
(13)
In the formula, S is the area of the figure and L is the circumference of the figure. The radius of the tangent circle is r, and its calculation formula is:
formula
(14)
Complexity is represented by the index e, calculated by:
formula
(15)

The value of e relates the perimeter to the area: the larger e becomes, the longer the contour relative to the shape's size and the more complex the shape; a smaller e indicates the opposite.
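A sketch of the geometric features, assuming area S and perimeter L have already been measured. Equations (14) and (15) are not reproduced in the text, so the tangent-circle radius and complexity below are plausible readings (r = 2S/L and e = 1/R0), labeled as assumptions:

```python
import math

def shape_features(area, perimeter):
    """Circularity R0 = 4*pi*S / L^2, plus assumed readings of the
    tangent-circle radius (r = 2S/L) and complexity (e = 1/R0), since
    Equations (14)-(15) are not shown in the extracted text."""
    r0 = 4.0 * math.pi * area / perimeter ** 2
    r = 2.0 * area / perimeter
    e = perimeter ** 2 / (4.0 * math.pi * area)
    return r0, r, e
```

For a unit circle (S = π, L = 2π) all three values are exactly 1, the most "round" and least complex case.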

Color feature extraction

Color is a good expression of what the painter sees or imagines while painting, so color features are often used in digital image processing. HSV (Hue, Saturation, Value) is a color model commonly used in computer vision and image processing. The HSV color histogram represents a three-channel space particularly well and can describe the color characteristics of an image accurately. In general, the ranges of H, S and V are [0, 360], [0, 1] and [0, 1], respectively. Computing directly on these continuous ranges is expensive, so the best approach is a quantization operation; after quantization, the operation time is greatly reduced. In accordance with our usual recognition habits, H, S and V are quantized at non-equal intervals into 16, 4 and 4 levels, respectively.

The quantization intervals of the three quantities are open on the left and closed on the right. We usually see different colors because the light has different wavelengths and frequencies, and the non-uniform quantization reflects the different properties of different kinds of light (La Nasa et al. 2021). The values 0, 1, 2, 3, 7, 15 are the interval boundaries chosen for H, S and V. To simplify the calculation, different weights are set, and the three quantities are combined into a one-dimensional vector. The human eye is most sensitive to hue H, then to saturation S, and then to brightness V; setting different weights for these three quantities according to these characteristics determines a single quantity L:
L = H Q_S Q_V + S Q_V + V
(16)
In the formula, Q_S and Q_V are the numbers of uniform quantization levels of S and V; taking Q_S = Q_V = 4, the following formula is obtained:
L = 16H + 4S + V
(17)
The value range of L is [0, 255], and a 256-bin one-dimensional histogram of L can be obtained by calculation, laying out the three-dimensional color space evenly along one dimension. Setting the weights of the three quantities to 16, 4 and 1 reduces the negative effect of V in the image recognition process and also weakens the negative effect of S. Different oil paintings occupy different regions of the color space, which makes them easier to recognize; the histogram is therefore computed to provide the color feature data needed for image recognition in the later stages.
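The 16/4/4 quantization with weights 16, 4 and 1 can be sketched as below; equal-interval H bins are used here for simplicity, whereas the paper uses non-equal intervals whose exact break points are not given:

```python
def quantized_hsv_index(h_deg, s, v):
    """Quantize H into 16 levels and S, V into 4 levels each, then
    combine them as L = 16*H + 4*S + V, giving L in [0, 255].
    Equal-interval H bins are an assumption of this sketch; the paper's
    non-equal intervals are not fully specified."""
    hq = min(int(h_deg / 360.0 * 16), 15)
    sq = min(int(s * 4), 3)
    vq = min(int(v * 4), 3)
    return 16 * hq + 4 * sq + vq
```

The darkest, unsaturated color maps to bin 0 and the highest hue/saturation/value combination to bin 255, so counting these indices over all pixels yields the 256-bin histogram.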

After the color histogram of an image is obtained, the data representing the histogram should be calculated before image recognition; we call these data statistics (Liao et al. 2019). Whether the image is grayscale, color or binary, the statistical properties of the histogram are the same. Six quantities are usually calculated: mean, variance, skewness, kurtosis, energy and entropy.

Texture feature extraction

Texture describes the appearance of an object's surface. In this paper, the gray-level co-occurrence matrix method is used to extract the texture features of oil paintings. The co-occurrence matrix expresses, with specific values, how the gray levels of an image vary: starting from a pixel with gray level i, a second pixel is taken at a fixed offset, with distance d between the two points. The position relationship between the two pixels is shown in Figure 1.
Figure 1

The position relationship between the two pixels.

In order to capture the features better, this paper selects four angles and computes the co-occurrence matrix for each of them; that is, several directions are introduced to represent the texture features. Generally speaking, the selected angles are shown in Figure 2.
Figure 2

Pixel relationship diagram of different spatial positions.

The expression of the co-occurrence matrix for the 0°, 45°, 90°, 135° directions is as follows:
formula
(18)
formula
(19)
formula
(20)
formula
(21)
Choose a certain distance d and a certain angle θ, select an arbitrary gray-level pair (i, j), and count how many times that pair of pixels appears at the corresponding positions. For example, if d = 1, the co-occurrence matrix at 0° is:
formula
(22)

Choose two images with a large difference, one fine and one rough; the judgment depends on the size of their texture primitives. For a relatively large texture primitive, if the selected distance d is smaller than its size, the gray levels of most pixel pairs are similar, so the large entries of the co-occurrence matrix lie close to the diagonal (Liu et al. 2023). For an image with small texture primitives, when d is comparable to the primitive size the gray levels change rapidly, so the gray-level distribution is discrete. If the roughness differs in different directions, the values along the main diagonal change accordingly, which facilitates the analysis of directionality; the direction of the co-occurrence can be determined by assigning different offsets when building the matrix.

After the gray-level co-occurrence matrix is obtained, specific values are calculated from it to characterize the features. The following eigenvalues are selected in this paper:

  • (1)
    Angular second moment C1, used to measure the uniformity of the gray-level distribution. Its expression is as follows:
    C_1 = \sum_{i} \sum_{j} P(i,j)^2
    (23)
  • (2)
    Contrast C2 is used to represent the clarity of the image; it is related to the variation of gray levels within the module, that is, to how ordered the distribution is. Its expression is as follows:
    C_2 = \sum_{i} \sum_{j} (i-j)^2 P(i,j)
    (24)
  • (3)
    Inverse difference moment C3, which measures local homogeneity. Its expression is as follows:
    C_3 = \sum_{i} \sum_{j} \frac{P(i,j)}{1+(i-j)^2}
    (25)
  • (4)
    Entropy C4 is used to indicate whether the texture is uniform. Its expression is as follows:
    formula
    (26)
  • (5)
    Correlation C5 measures the similarity of the co-occurrence matrix elements along the row and column directions. Its expression is as follows:
    formula
    (27)
  • (6)
    Energy C6: when the values of P are concentrated in a few entries, indicating a regular distribution of elements, the energy value of the texture is large; otherwise, the energy is smaller. Its expression is as follows:
    formula
    (28)

In order to characterize the texture better, specific values must be computed on this basis: first the gray-level co-occurrence matrix is obtained, and then its statistical characteristics are calculated. To reduce the time complexity of the calculation, the gray levels are usually compressed before computing the co-occurrence matrix.
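A minimal sketch of the gray-level co-occurrence matrix and several of the statistics above (angular second moment, contrast, inverse difference moment and an entropy term), assuming an already gray-level-compressed image:

```python
import numpy as np

def glcm(gray, d=1, angle=0, levels=8):
    """Normalized gray-level co-occurrence matrix for distance d and one
    of the four angles 0, 45, 90, 135 (degrees)."""
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[angle]
    p = np.zeros((levels, levels))
    h, w = gray.shape
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                p[gray[r, c], gray[r2, c2]] += 1
    return p / p.sum()

def glcm_features(p):
    """Angular second moment, contrast, inverse difference moment and an
    entropy term computed from a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    asm = float((p ** 2).sum())
    contrast = float(((i - j) ** 2 * p).sum())
    idm = float((p / (1 + (i - j) ** 2)).sum())
    ent = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return asm, contrast, idm, ent
```

A constant image concentrates all mass in one matrix entry, giving maximal uniformity (ASM = 1, IDM = 1) and zero contrast and entropy.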

Using any one of the three features of color, shape and texture alone cannot adequately represent the characteristics of the artist's painting style. Therefore, in the process of image recognition, the dimensionality of the obtained features must be reduced before they are used as input data for pattern recognition (Luo et al. 2022). The specific procedure is to extract the features first and send them to neural networks for training. Considering that neural networks have strong learning ability and are little affected by the external environment, this paper chooses neural networks for image recognition. A suitable pattern recognition framework is selected that fuses twice: the first level is feature-level fusion, and the second level is decision-level fusion. This method not only reduces the feature input and avoids degrading the features, but also lets the neural networks complement one another's strengths and weaknesses, achieving better recognition.

Image data fusion identification

In this paper, image data fusion is divided into three levels: Pixel-level, Feature-level and Decision-level. The first is a low-level fusion mode whose advantage lies in improving the quality of the target object; it operates on pixels, depends on fine registration, and its end result is a new image (Moyya et al. 2023). The second is intermediate-level fusion, whose advantage is that it operates on features. The third is high-level fusion, the final fusion of pattern training: it analyzes the training results of each network and makes a final judgment after the analysis is completed, and this judgment is the final recognition result of the whole neural network. The fusion process of image feature data is shown in Figure 3.
Figure 3

Process of image feature data fusion.


Three neural networks are selected here: the single-layer perceptron, the BP neural network, and the LVQ (Learning Vector Quantization) neural network. Their complementary strengths and weaknesses are combined for identification, and the two fusion levels are then applied. The experiments show the advantage of this method over the previous single networks without fusion.

Feature-level data fusion

After the feature extraction described above, a huge amount of data is obtained, some of it unusable. To remove such data, feature dimensionality reduction is needed (Nsabimana et al. 2023). This paper uses principal component analysis, which applies a matrix transformation to compress the actual data into a small number of values. After the reduction, the resulting components each carry different information, and a small number of them can characterize the large amount of extracted data; these values are called the principal components.

Let the number of images be s, and let each image be represented by m values; the s images with m values each can then be represented by a matrix. The formula is as follows:
formula
(29)

Before processing these data, the dimensionality is first reduced with the above formula to obtain a set of values with fewer dimensions that can represent the previous data; in this way the new values, the principal components, are determined. By mathematical calculation, neither the eigenvalues nor the eigenvectors are zero, so the principal components retained are those whose cumulative contribution reaches a chosen proportion. For a better calculation, that proportion is set here to 98%.
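The principal component selection by cumulative contribution (98% here) can be sketched as:

```python
import numpy as np

def pca_reduce(X, threshold=0.98):
    """Project the s x m data matrix X onto the principal components whose
    cumulative variance contribution reaches `threshold` (98% in the paper)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # eigenvalues in descending order
    vals, vecs = vals[order], vecs[:, order]
    ratio = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(ratio, threshold)) + 1
    return Xc @ vecs[:, :k]
```

Data lying along a single direction collapses to one component, since that component already explains 100% of the variance.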

Decision-level data fusion

The three kinds of neural networks used in this paper are the single-layer perceptron, BP and LVQ. First, each network is trained with the final data; then a decision method is applied to the output layer, and the final recognition result is obtained (Revathi et al. 2022). This process uses the methods mentioned above, including the two levels of fusion.

Single-layer perceptron neural network recognition

The perceptron is a feedforward network consisting of two main parts: an input layer and an output layer. It is a supervised network that learns step by step under the influence of external target values. The feature data are first sent to the input layer and processed by the neurons, with weights set according to the actual situation. The model is shown in Figure 4.
Figure 4

Single-layer perceptron model.


BP neural network recognition

The error backpropagation (BP) network differs from the first model mentioned above in that errors are propagated in reverse through multiple layers. It is usually composed of three parts: an input layer, a hidden layer, and an output layer (Rivera 2021). The hidden part can be set to multiple layers; here the default single hidden layer is used. The model used in this article is shown in Figure 5.
Figure 5

BP neural network model.


The main process of the BP network is as follows:

  • (1)

    First, the obtained data x1, …, xn are sent to the input layer;

  • (2)

    Design the hidden layer;

  • (3)

    The output layer is determined according to the actual situation. In this process, the Sigmoid function is selected as the activation function, and the Levenberg–Marquardt algorithm is used for training. The training accuracy is set to e = 0.005.

LVQ neural network recognition

LVQ, like BP, learns from external data. Its advantage is that it combines two ideas, competition and supervision, which gives it an edge over other networks. The LVQ network is composed of three parts: an input structure, a competition structure and an output structure. The model used in this paper is shown in Figure 6.
Figure 6

LVQ neural network model.


The first layer has m processing units, and the second part has two processing units arranged horizontally; each processing unit has a weight of 1 and is connected one-to-one to the third layer. The competition rule in this network resembles the winner-take-all contest in biological evolution: the value of the winning processing unit is 1, and the value of every losing unit is 0.
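The winner-take-all competition rule can be sketched as follows, assuming prototype weight vectors for the competitive units; this is an illustrative sketch, not the paper's exact implementation:

```python
import numpy as np

def lvq_winner(x, prototypes):
    """Winner-take-all competitive layer: the unit whose weight vector is
    closest to the input x outputs 1, all other units output 0."""
    d = np.linalg.norm(prototypes - x, axis=1)
    out = np.zeros(len(prototypes), dtype=int)
    out[int(np.argmin(d))] = 1
    return out
```

An input near the first prototype activates only the first unit, exactly mirroring the 1/0 rule described above.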

Decision making and data fusion based on majority voting

This paper adopts the majority voting method to judge the output results of the three networks: the type with the most outputs is taken as the overall output (Shulkin & Kavun 2023). First, the system is treated as a black box; only its inputs and outputs are considered, without examining the internal processing. The input data are denoted X with class subscript j, and the final output is E(X) = j. The output of each neural network can be represented by the following function:
formula
(30)
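The majority voting fusion can be sketched as follows; tie-breaking by first occurrence is a choice of this sketch, not specified by the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the class outputs of several networks by majority voting.
    Ties are resolved by first occurrence (a choice of this sketch)."""
    return Counter(predictions).most_common(1)[0][0]
```

With three networks voting [0, 1, 0], class 0 wins and becomes the final recognition result of the whole system.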

In order to verify the effect of classification and recognition, 50 still life oil paintings and 50 imitation oil paintings by Giorgio Morandi, a modern Italian painter, were selected to extract their style characteristics. The data obtained are shown in Table 1.

Table 1

Extracted feature data

Histogram-based color features | Texture features based on gray-level co-occurrence moments | Shape features
Giorgio Morandi 90.1288621, 2109.531426.90128766, 607218320008432, 91, 7.01893263 17.73147535, 11352.83201
19.30163892, 11835.40321
… … …
0.561243586, 462.3920183
0.408329133, 332.3844508 
0.03158412, 10.534023410.10384785, 0.022312300
0.000956621, 9.914321412, 1.92323 e-06… … … 
Imitation work 
87.13223913, 1971.723138.90201576 567.24654650.009213227, 7.082780625
… … … 
19.24883252, 8202.6177
13.31071506, 8222.9195
… … …
0.589855238, 543.09092
0.403482155, 343.37617 
0.1128829178, 12.27128677, 0.7533837837, 0.023761973, 0.001693263, 11.93716999, 3.31287 e-06
… … … 

The selected oil paintings were used to train the new network, and the results obtained are shown in Table 2.

Table 2

Recognition rate statistics of the three kinds of neural networks and of their fusion

Data set | Single-layer perceptron | BP neural network | LVQ neural network | After fusion
Giorgio Morandi | 66.6% | 60.5% | 68.3% | 73.0%
Imitation work | 62.0% | 65.5% | 63.6% | 70.6%

The data given in the table above are the recognition results of the three neural networks applied separately, together with the results of recombining the three networks and adding the two levels of fusion (Sun et al. 2022). The experiment shows that, compared with a single neural network, the recognition effect of the three complementarily recombined networks is better. In addition, adding the two-level fusion on top of the three networks further improves the recognition effect of the whole system, and the recognition rate is clearly improved.

Preparation for experiment

In this experiment, still life oil paintings by Giorgio Morandi, a modern Italian painter, and copied and forged works by other painters were selected for comparison and analysis (Valentini et al. 2023), and the neural networks were used for recognition training. A total of 80 works and 50 fake works were selected to test the usability of the system. The hardware environment of the simulation platform is an i5-13600H with 4 cores and 8 threads, a maximum turbo frequency of 4.8 GHz, and a maximum cache of 18 MB. The software environment is the Windows 10 operating system with MATLAB 2021.

Feature extraction

For each image, a total of 49 values covering the color, texture and shape features above are extracted. Taking the color and texture layout as a whole, 8 values describing the overall style of the oil painting are calculated from the histogram, and 36 overall style values are calculated from the gray-level co-occurrence matrix. The extracted details of the painter's brush information and brush features give 6 local features, and the geometric features of the region are two-dimensional (Wagner et al. 2019). Here, the genuine painting shown in Figure 7 and the fake painting shown in Figure 8 are selected to verify the authenticity identification effect of the system. The extracted feature data are shown in Table 3.
Table 3

Extracted feature data

               Histogram-based color features   GLCM-based texture features   Shape features
Genuine work   88.03623913, 1971.72313, …       8.90201576, 567.2465465, …    15.31071506, 8222.919503
Imitation      95.1388621, 2109.53142, …        8.90128766, …                 17.30163892, 11835.40321
Figure 7

Works of Giorgio Morandi.

Figure 8

Pastiche.

Data fusion classification and identification

Feature data fusion

In the first step of fusion, MATLAB's princomp function is used, and the Pareto chart of the variance explained by each principal component is obtained, as shown in Figure 9. It can be seen that the newly generated components represent the 49 previously extracted feature values well.
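This first fusion step can be sketched as follows, assuming only that princomp performs standard principal component analysis. The synthetic 49-column matrix, the sample count and the three latent factors are hypothetical, chosen so that a few components dominate the variance, which is what the Pareto chart visualizes.

```python
import numpy as np

def pca_explained_variance(X):
    """Center the data, then use SVD to obtain the fraction of total
    variance explained by each principal component (the quantity a
    Pareto chart of PCA results displays)."""
    Xc = X - X.mean(axis=0)
    # Singular values relate to component variances by s^2 / (n - 1).
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2 / (X.shape[0] - 1)
    return var / var.sum()

rng = np.random.default_rng(0)
# Toy stand-in for the 49-dimensional feature matrix: 130 samples
# generated from 3 latent factors plus a little noise.
latent = rng.normal(size=(130, 3))
mixing = rng.normal(size=(3, 49))
X = latent @ mixing + 0.01 * rng.normal(size=(130, 49))
ratios = pca_explained_variance(X)  # sorted in decreasing order
```

Keeping only the leading components whose cumulative ratio is high compresses the 49 raw values into a much shorter fused feature vector.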
Figure 9

Pareto diagram of variance of each principal component.


Decision-level data fusion

For each input image, after the neural network is trained, an output of y = 0 indicates Morandi's work and an output of y = 1 indicates that the painting is not by the artist (Yamini & Ganapathy 2021). The majority voting method is adopted to realize data fusion at the decision level and obtain the final classification and recognition result. For the two oil paintings used in this experiment, the algorithm accurately gives the authenticity identification results.
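The majority-voting rule described above can be sketched as follows. The vote values are hypothetical outputs of the three networks for one image, using the convention that 0 means Morandi's work and 1 means it is not.

```python
from collections import Counter

def majority_vote(labels):
    """Decision-level fusion: each classifier casts one vote and the
    most frequent label wins (with three voters, ties cannot occur
    for a binary label)."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical outputs of the three networks for one input image.
votes = [0, 1, 0]
decision = majority_vote(votes)  # two of three networks vote 0
```

With three binary classifiers, the fused decision is wrong only when at least two individual networks are wrong at once, which is why complementary recombination improves the overall recognition rate.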

In summary, oil painting color recognition involves understanding, analyzing, discriminating and interpreting the colors in oil painting works, and is an important part of art appreciation and criticism. Environmental pollution refers to the deterioration of environmental quality caused by human activities, which negatively affects humans and other organisms. Serious air and water pollution can interfere with oil painting color recognition: a work exposed to a polluted environment for a long time may be eroded by chemicals, causing fading and discoloration that hamper recognition and understanding of its colors. It is therefore of great significance to study oil painting color recognition from the perspective of environmental pollution.

Based on artistic style analysis and feature extraction of oil painting shape, color and texture, this paper constructs an oil painting authenticity identification method based on multi-feature fusion, and verifies its performance through simulations on Giorgio Morandi's still life oil paintings and copied and forged works by other painters. The results show that the algorithm can accurately give authenticity identification results. On the whole, the proposed artificial-intelligence-based oil painting image recognition method achieves the research purpose, but the feature extraction still has defects, such as relying on global features and on semi-automatic image segmentation. In future work, to better meet practical application needs, the extraction of image semantic features should be strengthened, and image recognition and computer technology should be explored in more depth in order to authenticate oil paintings.

Data cannot be made publicly available; readers should contact the corresponding author for details.

The authors declare there is no conflict.

Jaafar, N. A., Ismail, N. A. & Yusoff, Y. A. 2021 Usability study of enhanced salat learning approach using motion recognition system. The International Arab Journal of Information Technology 18 (3A), 414–421.

Jabez, J., Keerthanaa, V., Kaviya, V. & Gowri, S. 2020 An enhanced web based attendance application using global positioning system and face recognition. Journal of Computational and Theoretical Nanoscience 17 (8), 3344–3348.

Kadyan, V., Dua, M. & Dhiman, P. 2021 Enhancing accuracy of long contextual dependencies for Punjabi speech recognition system using deep LSTM. International Journal of Speech Technology 24, 517–527.

Kasimov, N. S., Kosheleva, N. E., Popovicheva, O. B., Vlasov, D. V., Shinkareva, G. L., Erina, O. N., Chalov, S. R., Chichaeva, M. A., Kovach, R. G., Zavgorodnyaya, Yu. A. & Lychagin, M. Y. 2023 Moscow megacity pollution: monitoring of chemical composition of microparticles in the atmosphere–snow–road dust–soil–surface water system. Russian Meteorology and Hydrology 48 (5), 391–401.

Kaur, J., Singh, A. & Kadyan, V. 2021 Automatic speech recognition system for tonal languages: state-of-the-art survey. Archives of Computational Methods in Engineering 28, 1039–1068.

Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D. & Matas, J. 2018 DeblurGAN: blind motion deblurring using conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 18–23 June 2018. IEEE, New York, pp. 8183–8192.

La Nasa, J., Mazurek, J., Degano, I. & Rogge, C. E. 2021 The identification of fish oils in 20th century paints and paintings. Journal of Cultural Heritage 50, 49–60.

Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z. & Shi, W. 2017 Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690.

Liao, Z., Gao, L., Zhou, T., Fan, X., Zhang, Y. & Wu, J. 2019 An oil painters recognition method based on cluster multiple kernel learning algorithm. IEEE Access 7, 26842–26854.

Luo, S., Lan, Y. T., Peng, D., Li, Z., Zheng, W. L. & Lu, B. L. 2022 Multimodal emotion recognition in response to oil paintings. In: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 4167–4170.

Moyya, P. D., Neelamsetti, K. K. & Thirumoorthy, G. 2023 Deep attention network for enhanced hand gesture recognition system. SN Computer Science 4 (4), 324.

Nsabimana, A., Wu, J., Wu, J. & Xu, F. 2023 Forecasting groundwater quality using automatic exponential smoothing model (AESM) in **anyang City, China. Human and Ecological Risk Assessment: An International Journal 29 (2), 347–368.

Revathi, A., Sasikaladevi, N. & Arunprasanth, D. 2022 Development of CNN-based robust dysarthric isolated digit recognition system by enhancing speech intelligibility. Research on Biomedical Engineering 38 (4), 1067–1079.

Rivera, R. B. 2021 Enhanced attendance monitoring system using biometric fingerprint recognition. International Journal of Recent Technology and Engineering 9 (5), 1–4.

Seow, M. J. & Asari, V. K. 2006 Ratio rule and homomorphic filter for enhancement of digital colour image. Neurocomputing 69 (7–9), 954–958.

Shulkin, V. M. & Kavun, V. Y. 2023 Long-term monitoring of pollution of the coastal water area of Ussuriysk Bay with metals: case study of 'green' oysters Magallana gigas (=Crassostrea gigas) (Thunberg, 1793). Biologiya Morya 49 (2), 105–113.

Su, X. 2020 Research on oil painting authenticity identification technology based on intelligent vision. Modern Electronic Technology 5, 61–64.

Tan, J., Fresse, B. & Rousseau, F. 2016 Rapid prototype of Networks-on-Chip on multi-FPGA platforms. In: MATEC Web of Conferences, Vol. 54. EDP Sciences, Cedex, France, p. 12002.

Valentini, F., De Angelis, S., Marinelli, L., Zaratti, C., Colapietro, M., Tarquini, O. & Macchia, A. 2023 Multianalytical non-invasive characterization of 'Mater Boni Consilii' iconography oil painting. Heritage 6 (4), 3499–3513.

Wagner, B., Kępa, L., Donten, M., Wrzosek, B., Żukowska, G. Z. & Lewandowska, A. 2019 Laser ablation inductively coupled plasma mass spectrometry appointed to subserve pigment identification. Microchemical Journal 146, 279–285.

Yamini, G. & Ganapathy, G. 2021 Enhanced sensing and activity recognition system using IoT for healthcare. International Journal of Information Communication Technologies and Human Development (IJICTHD) 13 (2), 42–49.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits copying, adaptation and redistribution, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/).