-
Mathematical modeling of SHS process in heterogeneous reactive powder mixtures
Computer Research and Modeling, 2011, v. 3, no. 2, pp. 147-153
In this paper we present a mathematical model and numerical results on the propagation of the combustion front of an SHS compound, where the rate of the chemical reaction at each point of the SHS sample is determined by solving the problem of diffusion and chemical reaction in the reaction cell. We obtained the dependence of the combustion front velocity on the size of the average element of the heterogeneous structure for different values of the diffusion intensity. These dependences agree qualitatively with experimental data. We studied the effect of the activation energy of diffusion on the propagation velocity of the combustion front. It is shown that the propagation of the combustion front transforms into an oscillatory regime as the activation energy of diffusion increases. The boundary of the transition of combustion front propagation from the steady-state regime to the oscillatory one is determined.
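As a point of reference for the kind of model described above, a typical diffusion-controlled SHS formulation (a generic textbook form, not necessarily the exact equations of the paper) couples heat conduction with a reaction-cell conversion rate limited by diffusion through the product layer:

$$ c\rho\,\frac{\partial T}{\partial t} = \lambda\,\frac{\partial^2 T}{\partial x^2} + Q\rho\,\frac{\partial \eta}{\partial t}, \qquad \frac{\partial \eta}{\partial t} = \frac{D_0}{r^2}\,\exp\!\left(-\frac{E_D}{RT}\right)\varphi(\eta), $$

where $\eta$ is the conversion degree, $r$ is the characteristic size of the reaction cell (the average element of the heterogeneous structure), $E_D$ is the activation energy of diffusion, and $\varphi(\eta)$ accounts for the retardation of the reaction by the growing product layer. This form makes explicit the two dependencies studied in the paper: on the cell size $r$ and on the diffusion activation energy $E_D$.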
-
Reduction of decision rule of multivariate interpolation and approximation method in the problem of data classification
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 475-484
This article explores a machine learning method based on the theory of random functions. One of the main problems of this method is that the decision rule of the model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function and is represented as a polynomial with the number of terms equal to the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The number of training examples is reduced by finding and removing weak elements, which have little effect on the final form of the decision function, and noisy sample elements. For each sample element $(x_i, y_i)$ we introduce the concept of its value, expressed as the deviation of the value of the model's decision function at the point $x_i$, built without the $i$-th element, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model without increasing the number of terms in the decision function. In the experimental part of the article, we show how changing the amount of data affects the method's ability to generalize in the classification task.
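The element "value" described above can be illustrated with a leave-one-out computation. The sketch below is only an illustration: it uses a generic scikit-learn regressor as a stand-in for the random-function model, and the dropping threshold is an arbitrary placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # stand-in for the random-function model

def element_values(X, y, make_model):
    """Leave-one-out 'value' of each training element: deviation of the decision
    function built without element i, evaluated at x_i, from the true y_i."""
    values = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = make_model().fit(X[mask], y[mask])
        values[i] = abs(model.predict(X[i:i + 1])[0] - y[i])
    return values

# Toy example: drop the elements with the smallest deviation (least influential)
X = np.random.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(float)
v = element_values(X, y, lambda: KNeighborsRegressor(n_neighbors=5))
keep = v > np.quantile(v, 0.2)       # remove the 20% weakest elements (arbitrary fraction)
X_reduced, y_reduced = X[keep], y[keep]
```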
-
Numerical simulation of frequency dependence of dielectric permittivity and electrical conductivity of saturated porous media
Computer Research and Modeling, 2016, v. 8, no. 5, pp. 765-773
This article presents a numerical simulation technique for determining the effective spectral electromagnetic properties (effective electrical conductivity and relative dielectric permittivity) of saturated porous media. Information about these properties is widely used in the interpretation of petrophysical well-logging data and in studies of rock core samples. The main feature of the present paper is that it involves three-dimensional saturated digital rock models constructed from combined data on the microscopic structure of the porous medium and on the capillary equilibrium of the oil-water mixture in the pores. Data on the microscopic structure of the model are obtained by means of X-ray microtomography. Information about the distributions of the saturating fluids is based on hydrodynamic simulations with the density functional technique. In order to determine the electromagnetic properties of the numerical model, the time-domain Fourier transform of the Maxwell equations is considered. In the low-frequency approximation the problem reduces to solving an elliptic equation for the distribution of the complex electric potential. The finite-difference approximation is based on discretization of the model with a homogeneous isotropic orthogonal grid. This discretization implies that each computational cell contains exclusively one medium: water, oil, or rock. To obtain a suitable numerical model, the distributions of the saturating components are segmented. This modification makes it possible to avoid using heterogeneous grids and removes the influence on the simulation results of the additional techniques required to determine the properties of cells filled with a mixture of media. The corresponding system of difference equations is solved by means of the biconjugate gradient stabilized method with a multigrid preconditioner. Based on the computed complex electric potential, the average values of the electrical conductivity and relative dielectric permittivity are calculated. For the sake of simplicity, this paper considers exclusively simulations with no spectral dependence of the conductivities and permittivities of the model components. The results of numerical simulations of the spectral dependence of the effective characteristics of heterogeneously saturated porous media (electrical conductivity and relative dielectric permittivity) over a broad range of frequencies and multiple water saturations are presented in figures and a table. The efficiency of the presented approach for determining the spectral electrical properties of saturated rocks is discussed in the conclusion.
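A minimal sketch of the low-frequency formulation is given below: each cell carries a complex conductivity $\sigma^* = \sigma + i\omega\varepsilon_r\varepsilon_0$, a unit potential drop is applied across the sample, and the resulting sparse system is solved iteratively. This is a simplified 2D illustration (the paper works with 3D digital rock models and a multigrid-preconditioned BiCGSTAB); the boundary conditions and the absence of a preconditioner here are assumptions made for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def effective_sigma_star(sigma, eps_r, omega):
    """Effective complex conductivity of a 2D cell model (unit square cells).
    sigma, eps_r: (ny, nx) arrays of cell conductivity and relative permittivity.
    A unit potential drop is applied along x; top and bottom boundaries are insulating."""
    ny, nx = sigma.shape
    s = sigma + 1j * omega * eps_r * EPS0          # complex conductivity of each cell
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, vals = [], [], []
    b = np.zeros(ny * nx, dtype=complex)

    def link(a, c):                                # face conductance: two half-cells in series
        return 2.0 * a * c / (a + c)

    for j in range(ny):
        for i in range(nx):
            p, diag = idx[j, i], 0.0 + 0.0j
            for di in (-1, 1):                     # x-neighbours and electrodes
                ii = i + di
                if 0 <= ii < nx:
                    g = link(s[j, i], s[j, ii])
                    rows.append(p); cols.append(idx[j, ii]); vals.append(-g)
                    diag += g
                else:
                    g = 2.0 * s[j, i]              # half-cell link to a Dirichlet electrode
                    diag += g
                    if ii < 0:
                        b[p] += g                  # left electrode at phi = 1, right at phi = 0
            for dj in (-1, 1):                     # y-neighbours; missing faces are insulating
                jj = j + dj
                if 0 <= jj < ny:
                    g = link(s[j, i], s[jj, i])
                    rows.append(p); cols.append(idx[jj, i]); vals.append(-g)
                    diag += g
            rows.append(p); cols.append(p); vals.append(diag)

    A = sp.csr_matrix((vals, (rows, cols)), shape=(ny * nx, ny * nx))
    phi, info = spla.bicgstab(A, b)                # the paper adds a multigrid preconditioner
    phi = phi.reshape(ny, nx)
    current = np.sum(2.0 * s[:, 0] * (1.0 - phi[:, 0]))   # current through the left electrode
    return current * nx / ny                       # sigma*_eff = I * L / (W * dV), dV = 1
```

The real part of the returned value gives the effective conductivity, and the imaginary part divided by $\omega\varepsilon_0$ gives the effective relative permittivity.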
-
Conversion of the initial indices of the technological process of the smelting of steel for the subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
Production efficiency depends directly on the quality of process control, which in turn relies on the accuracy and efficiency of processing control and measurement information. Developing mathematical methods for studying system relationships and patterns of operation, building mathematical models that take into account the structural features of the object under study, and writing software that implements these methods is therefore a relevant task. Practice has shown that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is not clear at the outset. Attempting to determine the model directly under these circumstances is impossible: the amount of required information may be too large, and most of the work on collecting it would be wasted, since the influence of most factors in the original list on the optimization target turns out to be negligible. Therefore, a necessary step in determining a model of a complex object is reducing the dimension of the factor space. Most industrial plants are hierarchical, group, mass-volume production processes characterized by hundreds of factors. (Data from the Moldova Steel Works were used as the basis for implementing the mathematical methods and testing the constructed models.) To investigate the systemic linkages and patterns of functioning of such complex objects, several informative parameters are usually chosen and sampled. This article describes the sequence of transformations that bring the initial indicators of the steel-smelting process to a form suitable for building a mathematical model for prediction. The implementation also lays the groundwork for developing a system of automated production quality management. The following stages are distinguished: collection and analysis of the basic data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight factors. The results obtained make it possible to optimize the process of building a model of a multifactor process.
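A rough illustration of correlation-based reduction of the factor space is sketched below. The grouping rule, the 0.8 threshold, and the parameter names are hypothetical and are not taken from the article.

```python
import numpy as np
import pandas as pd

def reduce_by_correlation(df, threshold=0.8):
    """Group factors into 'pleiades' of mutually correlated parameters and
    keep one representative per group (illustrative reduction of factor space)."""
    corr = df.corr().abs()
    remaining = list(df.columns)
    kept = []
    while remaining:
        col = remaining.pop(0)
        kept.append(col)
        # drop everything strongly correlated with the chosen representative
        remaining = [c for c in remaining if corr.loc[col, c] < threshold]
    return df[kept]

# Hypothetical smelting indicators
rng = np.random.default_rng(0)
n = 500
temp = rng.normal(1600, 20, n)
data = pd.DataFrame({
    "furnace_temp": temp,
    "offgas_temp": temp * 0.4 + rng.normal(0, 2, n),   # strongly tied to furnace_temp
    "oxygen_flow": rng.normal(900, 50, n),
    "scrap_share": rng.uniform(0.2, 0.6, n),
})
reduced = reduce_by_correlation(data, threshold=0.8)
print(list(reduced.columns))
```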
-
Development of advanced intrusion detection approach using machine and ensemble learning for industrial internet of things networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 799-827
Industrial Internet of Things (IIoT) networks play a significant role in enhancing industrial automation systems by connecting industrial devices for real-time data monitoring and predictive maintenance. However, this connectivity introduces new vulnerabilities that demand the development of advanced intrusion detection systems. Nuclear facilities are among the clearest examples of critical infrastructures that become highly vulnerable through the connectivity of IIoT networks. This paper develops a robust intrusion detection approach using machine and ensemble learning algorithms specifically designed for IIoT networks. The approach achieves optimal performance with the low time complexity required for real-time IIoT networks. For each algorithm, Grid Search is used to fine-tune the hyperparameters, optimizing performance while ensuring computational efficiency. The proposed approach is evaluated on recent IIoT intrusion detection datasets, WUSTL-IIOT-2021 and Edge-IIoT-2022, to cover a wide range of attacks with high precision and minimum false alarms. The study assesses the effectiveness of ten machine and ensemble learning models on selected features of the datasets. Synthetic Minority Over-sampling Technique (SMOTE)-based multi-class balancing is used to handle dataset imbalance. An ensemble voting classifier combines the best models with the best hyperparameters, leveraging their advantages to improve performance with the least time complexity. The machine and ensemble learning algorithms are evaluated based on accuracy, precision, recall, F1 score, and time complexity; this evaluation identifies the most suitable candidates for further optimization. The proposed approach, called XCL, is based on Extreme Gradient Boosting (XGBoost), CatBoost (Categorical Boosting), and Light Gradient-Boosting Machine (LightGBM). It achieves high accuracy, a lower false positive rate, and efficient time complexity. The results highlight the importance of ensemble strategies, algorithm selection, and hyperparameter optimization in improving the detection of different intrusions across the IIoT datasets compared with the other models. The developed approach achieved an accuracy of 99.99% on the WUSTL-IIOT-2021 dataset and 100% on the Edge-IIoTset dataset. Our experimental evaluations have been extended to the CIC-IDS-2017 dataset. These additional evaluations not only highlight the applicability of the XCL approach to a wide spectrum of intrusion detection scenarios but also confirm its scalability and effectiveness in real-world complex network environments.
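A minimal sketch of an XCL-style pipeline is given below, assuming a generic tabular feature matrix; the hyperparameter grids, split sizes, and random seeds are illustrative placeholders rather than the values tuned in the paper.

```python
# Sketch of an XCL-style soft-voting ensemble: SMOTE balancing, per-model Grid Search,
# and voting over XGBoost, CatBoost, and LightGBM (illustrative settings only).
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier

def build_xcl(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=42)
    X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)   # multi-class balancing

    # Tune each base learner separately (tiny illustrative grids)
    xgb = GridSearchCV(XGBClassifier(eval_metric="mlogloss"),
                       {"n_estimators": [200, 400], "max_depth": [6, 8]}, cv=3).fit(X_tr, y_tr)
    cat = GridSearchCV(CatBoostClassifier(verbose=0),
                       {"depth": [6, 8], "iterations": [300]}, cv=3).fit(X_tr, y_tr)
    lgb = GridSearchCV(LGBMClassifier(),
                       {"n_estimators": [200, 400], "num_leaves": [31, 63]}, cv=3).fit(X_tr, y_tr)

    # Combine the tuned models with soft voting
    xcl = VotingClassifier(
        estimators=[("xgb", xgb.best_estimator_),
                    ("cat", cat.best_estimator_),
                    ("lgbm", lgb.best_estimator_)],
        voting="soft").fit(X_tr, y_tr)
    return xcl, xcl.score(X_te, y_te)
```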
-
Simulation of properties of composite materials reinforced by carbon nanotubes using perceptron complexes
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 253-262
The use of algorithms based on neural networks can be inefficient for small amounts of experimental data. The authors consider a solution to this problem in the context of modelling the properties of ceramic composite materials reinforced with carbon nanotubes using a perceptron complex. This approach made it possible to obtain a mathematical description of the object of study with a minimal amount of input data (the number of required experimental samples was reduced by a factor of 2 to 3.3). The authors considered different versions of perceptron complex structures and found that the most appropriate structure is a perceptron complex with a breakthrough of two input variables, for which the relative error was only 6%. The selected perceptron complex was shown to be effective for predicting the properties of ceramic composites: the relative errors for the output components were 0.3%, 4.2%, 0.4%, 2.9%, and 11.8%.
-
Signal and noise parameters’ determination at rician data analysis by method of moments of lower odd orders
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728
The paper develops a new mathematical method for the joint determination of the signal and noise parameters of the Rice statistical distribution by the method of moments, based on the analysis of data for the 1st and 3rd raw moments of a Rician random variable. An explicit system of equations has been obtained for the required signal and noise parameters. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow the required parameters to be calculated without solving the equations numerically. The technique elaborated in the paper ensures an efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based solely on processing the results of sampled measurements of the signal. The task is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, ultrasound imaging systems, the analysis of optical signals in range-measuring systems, radar, etc. The results of the investigation have shown that solving the two-parameter task by the proposed technique does not increase the required volume of computing resources compared with the one-parameter task solved under the assumption that the second parameter is known a priori. Results of computer simulation of the elaborated technique are provided. The numerical calculation of the signal and noise parameters has confirmed the efficiency of the technique. The accuracy of estimating the sought-for parameters by the technique developed in this paper has been compared with that of the previously elaborated method of moments based on processing the measured data for the lower even moments of the analyzed signal.
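For reference, the standard raw-moment expressions of the Rice distribution that such a method of moments equates to the sample moments are (this is the textbook form, not a formula quoted from the paper):

$$ \langle x \rangle = \sigma\sqrt{\tfrac{\pi}{2}}\;L_{1/2}\!\left(-\tfrac{\nu^2}{2\sigma^2}\right), \qquad \langle x^3 \rangle = 3\,\sigma^3\sqrt{\tfrac{\pi}{2}}\;L_{3/2}\!\left(-\tfrac{\nu^2}{2\sigma^2}\right), $$

where $\nu$ is the signal parameter, $\sigma$ is the noise parameter, and $L_{k/2}$ denotes the Laguerre function. Replacing the left-hand sides with the sampled first and third raw moments yields the two-equation system for $(\nu, \sigma)$.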
-
Identification of an object model in the presence of unknown disturbances with a wide frequency range based on the transition to signal increments and data sampling
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 315-337
The work is devoted to the problem of creating a model with stationary parameters from historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that the unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling. The applicability of various data sampling options as a tool for reducing the level of uncertainty is assessed. We propose a method for identifying a model of a self-regulating object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic readjustment to new conditions. The method is based on the combined use of sampling and representation of the data from individual time periods in the form of increments relative to the initial point in time of the period. This makes it possible to reduce the number of parameters that characterize the unknown disturbances with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs associated with fitting the model are minimized. Both linear and, in some cases, nonlinear models can be configured. The method was used to develop a model of the closed cooling of steel on a continuous hot-dip galvanizing line for steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes in the closed cooling section under conditions of unknown disturbances, including low-frequency components.
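The idea of working with increments can be illustrated with a toy least-squares fit. The sketch below assumes a single-input linear model and a piecewise-constant unknown disturbance; both are simplifications chosen for illustration, not the structure of the galvanizing-line model.

```python
import numpy as np

def fit_on_increments(periods):
    """Least-squares fit of a linear model y = a*x + b using data expressed as
    increments relative to the first sample of each period; a disturbance that is
    constant within a period cancels out (illustrative sketch).
    periods: list of (x, y) arrays, one pair per historical time period."""
    dX, dY = [], []
    for x, y in periods:
        dX.append(x - x[0])          # increments of the input within the period
        dY.append(y - y[0])          # increments of the output within the period
    dX = np.concatenate(dX)
    dY = np.concatenate(dY)
    return dX @ dY / (dX @ dX)       # slope estimated from increments only

# Hypothetical data: each period has its own unknown offset d
rng = np.random.default_rng(1)
true_a = 2.0
periods = []
for d in (5.0, -3.0, 10.0):          # unknown disturbance level per period
    x = rng.uniform(0, 1, 50)
    y = true_a * x + d + rng.normal(0, 0.05, 50)
    periods.append((x, y))
print(fit_on_increments(periods))    # close to 2.0 despite the different offsets
```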
-
Investigation of the material properties of a plate by laser ultrasound using the analysis of multiple waves
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 653-673
Ultrasonic examination is a precision method for determining the elastic and strength properties of materials, owing to the small wavelength formed in the material after the impact of a laser beam. In this paper, the wave processes arising during such measurements are considered in detail. It is shown that full-wave numerical modeling makes it possible to study in detail the types of waves, the topological characteristics of their profiles, and the arrival times of waves at various points, to identify the types of waves whose measurement is most suitable for examining a sample made of a specific material and of a particular shape, and to develop measurement procedures.
To carry out full-wave modeling, a grid-characteristic method on structured grids was used, and the hyperbolic system of equations describing the propagation of elastic waves in the material of the thin plate under consideration was solved for a specific example with a thickness-to-width ratio of 1:10.
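A minimal one-dimensional sketch of the characteristic decomposition underlying such a scheme is shown below (the paper solves the full elastic system in higher dimensions on structured grids); periodic boundaries and linear interpolation are simplifications made for illustration.

```python
import numpy as np

def grid_characteristic_step(v, s, rho, c, dt, dx):
    """One time step of a 1D grid-characteristic scheme for the elastic system
    dv/dt = (1/rho) ds/dx, ds/dt = rho*c^2 dv/dx (minimal illustrative sketch).
    v: velocity, s: stress; Riemann invariants are advected upwind and recombined.
    Periodic boundaries are assumed via np.roll."""
    z = rho * c                       # acoustic impedance
    w_plus = v - s / z               # invariant carried with speed +c
    w_minus = v + s / z              # invariant carried with speed -c

    courant = c * dt / dx
    assert courant <= 1.0            # stability of the interpolation stencil
    # Linear interpolation to the departure points of the characteristics
    w_plus_new = (1 - courant) * w_plus + courant * np.roll(w_plus, 1)     # comes from the left
    w_minus_new = (1 - courant) * w_minus + courant * np.roll(w_minus, -1) # comes from the right

    v_new = 0.5 * (w_plus_new + w_minus_new)
    s_new = 0.5 * z * (w_minus_new - w_plus_new)
    return v_new, s_new
```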
To simulate the elastic front that arises in the plate due to a laser beam, a model of the corresponding initial conditions was proposed. The wave effects that arise when this model is used were compared with those produced by a point source and with data from physical experiments on the propagation of laser ultrasound in metal plates.
A study was carried out on the basis of which the characteristic topological features of the wave processes under consideration were identified. The main types of elastic waves arising due to a laser beam are investigated, and the possibility of using them to study material properties is analyzed. A method based on the analysis of multiple waves is proposed; it was tested on synthetic data for studying the properties of a plate and showed good results.
It should be noted that most of the studies of multiple waves are aimed at developing methods for their suppression. Multiple waves are not used to process the results of ultrasound studies due to the complexity of their detection in the recorded data of a physical experiment.
Owing to the use of full-wave modeling and the analysis of spatial dynamic wave processes, multiple waves are considered in detail in this work, and it is proposed to divide materials into three classes, which makes it possible to use multiple waves to obtain information about the material of the plate.
The main results of the work are the problem statements developed for the numerical simulation of the examination of plates of finite thickness by laser ultrasound, the identified features of the wave phenomena arising in plates of finite thickness, the developed method for studying the properties of a plate on the basis of multiple waves, and the developed classification of materials.
The results of the studies presented in this paper may be of interest not only for developments in the field of ultrasonic non-destructive testing, but also in the field of seismic exploration of the earth's interior, since the proposed approach can be extended to more complex cases of heterogeneous media and applied in geophysics.
-
A modified model of the effect of stress concentration near a broken fiber on the tensile strength of high-strength composites (MLLS-6)
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 559-573
The article proposes a model for assessing the potential strength of a composite material based on modern fibers with brittle fracture.
Materials consisting of parallel cylindrical fibers that are quasi-statically stretched in one direction are simulated. It is assumed that the sample contains at least 100 fibers, which corresponds to practically significant cases. It is known that the fibers have a distribution of ultimate strain within the sample and do not fail at the same moment. Usually the distribution of their properties is described by the Weibull–Gnedenko statistical distribution. To simulate the strength of the composite, a model of the accumulation of fiber breaks is used. It is assumed that the fibers, united by the polymer matrix, are fragmented down to twice the ineffective length, the distance over which the stress rises from the end of a broken fiber to its mean value. However, this model greatly overestimates the strength of composites with brittle fibers; carbon and glass fibers, for example, fail in this way.
In some cases, earlier attempts were made to take into account the stress concentration near a broken fiber (the Hedgepeth model, the Ermolenko model, shear analysis), but such models either required a lot of initial data or did not agree with experiment. In addition, such models idealize the packing of fibers in the composite as a regular hexagonal packing.
The proposed model combines the shear-analysis approach to the stress distribution near a broken fiber with a statistical description of fiber strength based on the Weibull–Gnedenko distribution, while introducing a number of assumptions that simplify the calculation without loss of accuracy.
It is assumed that the stress concentration on an adjacent fiber increases the probability of its failure in accordance with the Weibull distribution, and that the number of fibers with such an increased failure probability is directly related to the number of fibers already broken. All initial data can be obtained from simple experiments. It is shown that accounting for the stress redistribution only onto the nearest fibers gives an accurate forecast.
This allows a complete calculation of the strength of the composite. The experimental data we obtained on carbon fibers, glass fibers, and model composites based on them (CFRP, GFRP) confirm some of the conclusions of the model.
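For reference, the Weibull–Gnedenko ingredients mentioned above can be written in the following standard form (illustrative notation, not the exact MLLS-6 expressions):

$$ P(\sigma, l) = 1 - \exp\!\left[-\frac{l}{l_0}\left(\frac{\sigma}{\sigma_0}\right)^{\beta}\right], \qquad P_K(\sigma, l) = 1 - \exp\!\left[-\frac{l}{l_0}\left(\frac{K\sigma}{\sigma_0}\right)^{\beta}\right], $$

where $P$ is the failure probability of a fiber of length $l$ under stress $\sigma$, $\sigma_0$ and $\beta$ are the Weibull scale and shape parameters for the reference length $l_0$, and $P_K$ is the increased failure probability of a fiber adjacent to a break, which carries the stress-concentration factor $K > 1$.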