All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171.
The method of balanced identification was used to describe the response of the Net Ecosystem Exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Due to rainy weather and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at our experimental site exceeded 40% over the entire measurement period. The model developed for gap filling in long-term experimental data treats NEE as the difference between Ecosystem Respiration (RE) and Gross Primary Production (GPP), i.e. the key processes of ecosystem functioning, and describes their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose searches for the optimal balance between model simplicity and data fitting accuracy, namely the balance that minimizes the modeling error estimated by cross-validation. The resulting numerical solutions have the minimum necessary nonlinearity (curvature), which gives the developed models good interpolation and extrapolation properties; this is particularly important for filling the missing values in the NEE measurements. Analysis of the temporal variability of NEE and the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL. At the same time, the inaccuracy of the applied method in simulating the mean daily NEE was less than 10%, and the error of the NEE estimates obtained by this method was higher than that of the REddyProc model, which takes into account a smaller number of environmental parameters. Analysis of the gap-filled NEE time series made it possible to derive the diurnal and day-to-day variability of NEE and to obtain cumulative CO2 fluxes in the peat bog for the selected summer-autumn period. It was shown that the rate of CO2 fixation by the peat bog vegetation in August was significantly higher than the rate of ecosystem respiration, whereas from September onward, due to a strong decrease in GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.
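The core idea of the abstract above, selecting model complexity by minimizing a cross-validation error estimate and then predicting NEE at gap timestamps from the environmental drivers, can be illustrated with a generic regression sketch. This is not the authors' balanced identification software; the polynomial/ridge model family, the synthetic drivers and all numeric values below are placeholders.

```python
# Generic sketch of cross-validation-driven model selection for NEE gap filling;
# the model family (polynomial ridge regression) and the synthetic data are
# placeholders, not the balanced identification technology used in the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # columns: Q, T, VPD, WL at measured timestamps
nee = 0.8 * X[:, 1] - 1.5 * X[:, 0] / (1 + np.abs(X[:, 2])) + 0.1 * rng.normal(size=500)

best = None
for degree in (1, 2, 3):                 # "simplicity" axis: polynomial degree
    for alpha in (0.01, 0.1, 1.0):       # regularization strength
        model = make_pipeline(StandardScaler(), PolynomialFeatures(degree), Ridge(alpha=alpha))
        # cross-validation error plays the role of the modeling-error estimate
        err = -cross_val_score(model, X, nee, scoring="neg_mean_squared_error", cv=5).mean()
        if best is None or err < best[0]:
            best = (err, degree, alpha, model)

err, degree, alpha, model = best
model.fit(X, nee)                        # refit the selected model on all measured data
gap_drivers = rng.normal(size=(10, 4))   # Q, T, VPD, WL at gap timestamps
nee_filled = model.predict(gap_drivers)  # gap-filled NEE values
print(f"selected degree={degree}, alpha={alpha}, CV MSE={err:.3f}")
```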
- Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327.
This work continues research into a person's ability to increase the productivity of information processing by working in parallel or by improving the performance of the analyzers. A subject receives a series of tasks whose solution requires processing a certain amount of information; the solution time and its correctness are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. In accordance with the proposed method, the tasks involve evaluating expressions in two algebras, one associative and the other non-associative. To facilitate the subjects' work, the elements of the algebras were represented in the experiment by figurative graphic images. Non-associative calculations were implemented in the form of the game “rock-paper-scissors”: the subject had to determine the winning symbol in a long line of these figures, assuming that they appear sequentially from left to right and each plays against the previous winner. Associative calculations were based on recognizing drawings from a finite set of simple images: the subject had to determine which picture from this set was missing from the line, or to state that all the pictures were present; in each task at most one picture was missing. Computation in the associative algebra allows parallel counting, whereas in the absence of associativity only sequential computations are possible. Therefore, analysis of the time taken to solve a series of tasks makes it possible to distinguish uniform sequential, accelerated sequential and parallel computing strategies. In the experiments it was found that all subjects used a uniform sequential strategy to solve the non-associative tasks. For the associative task, all subjects used parallel computing, and some of them used parallel computing that accelerated as the complexity of the task grew. A small proportion of the subjects, judging by the evolution of the solution time, supplemented the parallel computation at high complexity with a sequential stage of calculations (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information; it allowed us to estimate the level of parallelism of the computation in the associative task, and a parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition; apparently, the difference in processing time is actually spent on the calculation process itself. For the associative task with the minimum amount of information, the solution time is close to that of the non-associative case, or shorter by less than a factor of two. This is probably because, for a small number of characters, recognition almost exhausts the computations in the non-associative task used.
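The difference between the two stimulus types can be made concrete with a short sketch: the “rock-paper-scissors” winner is a non-associative left fold that forces sequential evaluation, while the “missing picture” task rests on associative set union and can be split across parallel chunks. This is an illustration of the task structure only, not the experimental software; the symbol names are arbitrary.

```python
# Illustration of the two reductions used as stimuli (not the experimental software):
# the rock-paper-scissors winner is a non-associative left-to-right fold, while the
# missing-picture task rests on associative set union and can be chunked in parallel.
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def rps_winner(a: str, b: str) -> str:
    """Non-associative operation: the next symbol plays against the previous winner."""
    return a if (a, b) in BEATS or a == b else b

def winner_of_line(symbols):
    # Only a strictly left-to-right (sequential) fold is meaningful here.
    return reduce(rps_winner, symbols)

def missing_picture(line, alphabet):
    # Set union is associative, so the line can be scanned in independent chunks.
    missing = set(alphabet) - set(line)
    return missing.pop() if missing else None

print(winner_of_line(["rock", "paper", "rock", "scissors"]))
print(missing_picture(["sun", "tree", "house"], ["sun", "tree", "house", "star"]))  # -> star
```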
- Neuro-fuzzy model of fuzzy rules formation for objects state evaluation in conditions of uncertainty
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 477-492.
This article addresses the problem of constructing a neuro-fuzzy model that forms fuzzy rules and uses them to evaluate the state of objects under uncertainty. Traditional methods of mathematical statistics or simulation modeling do not allow adequate models of objects to be built under these conditions, so many problems are currently solved with intelligent modeling technologies based on fuzzy logic. The traditional approach to constructing fuzzy systems requires an expert to formulate the fuzzy rules and to specify the membership functions used in them. To eliminate this drawback, it is relevant to automate the formation of fuzzy rules on the basis of machine learning methods and algorithms. One approach to this problem is to build a fuzzy neural network and train it on data characterizing the object under study. Implementing this approach required choosing a type of fuzzy rules that takes into account the specifics of the processed data, as well as developing a logical inference algorithm for rules of the selected type. The steps of this algorithm determine the number and functionality of the layers in the fuzzy neural network. A training algorithm for the fuzzy neural network was developed; after training, a system of fuzzy production rules is formed. A software package was implemented on the basis of the developed mathematical tools and used to assess the classifying ability of the generated fuzzy rules on example data from the UCI Machine Learning Repository. The results showed that the classifying ability of the generated fuzzy rules is not inferior in accuracy to other classification methods. In addition, the fuzzy-rule inference algorithm classifies successfully even when part of the initial data is missing. As a further test, fuzzy rules were generated for assessing the state of water pipelines in the oil industry. Based on initial data on 303 pipelines, a base of 342 fuzzy rules was formed; its practical application showed high efficiency in solving the problem.
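For readers unfamiliar with fuzzy-rule classification, the sketch below shows the basic inference step: Gaussian membership functions, a min t-norm for the rule firing strength, and a per-class vote. The rule base here is hand-written for illustration, whereas the paper learns the rules from data with a fuzzy neural network; all centers, widths and class labels are made up.

```python
# Minimal fuzzy-rule classification sketch with Gaussian membership functions.
# The rule base is hand-written for illustration; in the paper the rules are
# learned from data by a fuzzy neural network.
import numpy as np

def gauss(x, c, sigma):
    """Membership degree of x in a fuzzy set with center c and width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Each rule: (centers, widths, class label); antecedents are combined with min (t-norm).
rules = [
    (np.array([0.2, 0.3]), np.array([0.10, 0.10]), "normal"),
    (np.array([0.8, 0.7]), np.array([0.15, 0.10]), "faulty"),
]

def classify(x):
    scores = {}
    for centers, sigmas, label in rules:
        firing = float(np.min(gauss(x, centers, sigmas)))    # rule firing strength
        scores[label] = max(scores.get(label, 0.0), firing)  # aggregate per class
    return max(scores, key=scores.get), scores

label, scores = classify(np.array([0.75, 0.68]))
print(label, scores)
```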
- Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930.
Inhibitors of histone deacetylases (HDACi) have been considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. Vascular endothelial growth factor (VEGF) is a key angiogenic agent secreted by malignant tumors; it induces the proliferation and migration of vascular endothelial cells. Currently, the most promising strategy in the fight against cancer is the creation of hybrid drugs that act simultaneously on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties was studied as dual VEGFR-2/HDAC inhibitors using the simplex representation of molecular structure and the Support Vector Machine (SVM) method. The total sample of 42 compounds was divided into training and test sets, and five-fold cross-validation was used for internal validation. Satisfactory quantitative structure-activity relationship (QSAR) models were constructed ($R^2_{test}$ = 0.64-0.87) for inhibitors of HDAC, VEGFR-2 and the human breast cancer cell line MCF-7. The obtained QSAR models were interpreted, and the coordinated effect of different molecular fragments on the increase in antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine to all three types of activity stands out. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For comparative QSAR research we used physicochemical descriptors calculated by the HYBOT program, the Random Forest (RF) method, and the online version of the expert system OCHEM (https://ochem.eu); for modeling in OCHEM, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
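The modeling workflow described above (descriptor matrix, SVM regression, 5-fold internal cross-validation, external test-set R²) can be sketched generically as below. The descriptors and activity values are synthetic stand-ins, not the simplex descriptors or the measured activities from the paper.

```python
# Hedged QSAR workflow sketch: SVM regression with 5-fold internal cross-validation
# and an external test split. Descriptors and activities are synthetic stand-ins,
# not the simplex descriptors or measured activities from the paper.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(42, 120))                          # 42 compounds x descriptor vector
y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=42)    # surrogate activity values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))

q2_cv = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2").mean()  # internal validation
model.fit(X_tr, y_tr)
r2_test = r2_score(y_te, model.predict(X_te))                          # external validation
print(f"5-fold CV R^2 = {q2_cv:.2f}, test R^2 = {r2_test:.2f}")
```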
- Mathematical and numerical modeling of a drop-shaped microcavity laser
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1083-1090.
This paper studies the electromagnetic fields, lasing frequencies and emission thresholds of a drop-shaped microcavity laser. From the mathematical point of view, the original problem is a nonstandard two-parameter eigenvalue problem for the Helmholtz equation on the whole plane. The desired positive parameters are the lasing frequency and the threshold gain; the corresponding eigenfunctions are the amplitudes of the lasing modes. This problem is usually referred to as the lasing eigenvalue problem. In this study, the spectral characteristics are calculated numerically by solving the lasing eigenvalue problem on the basis of the set of Muller boundary integral equations, which is approximated by the Nyström method. The Muller equations have weakly singular kernels, hence the corresponding operator is Fredholm with zero index. The Nyström method is a special modification of the polynomial quadrature method for boundary integral equations with weakly singular kernels. This algorithm is accurate for functions that are well approximated by trigonometric polynomials, for example, for eigenmodes of resonators with smooth boundaries. The approach leads to a characteristic equation for the mode frequencies and lasing thresholds; it is a nonlinear algebraic eigenvalue problem, which is solved numerically by the residual inverse iteration method. In this paper, the technique is extended to the numerical modeling of microcavity lasers of a more complicated shape. In contrast to the microcavity lasers with smooth contours previously investigated by the Nyström method, the drop has a corner. We propose a special modification of the Nyström method for contours with corners, which also takes the symmetry of the resonator into account. The results of numerical experiments presented in the paper demonstrate the practical effectiveness of the proposed algorithm.
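As background for readers new to the Nyström idea, the sketch below discretizes a second-kind Fredholm integral equation on a periodic contour with the trapezoidal rule, turning it into a small linear system. A smooth stand-in kernel is used; the weakly singular Muller kernels and the corner treatment from the paper require dedicated quadrature weights that are deliberately omitted here.

```python
# Toy Nystrom discretization of a second-kind Fredholm equation
#   u(t) - \int_0^{2pi} K(t, s) u(s) ds = f(t)
# with a smooth periodic stand-in kernel and trapezoidal quadrature. The Muller
# equations in the paper have weakly singular kernels and a corner, which need
# special quadrature weights omitted in this sketch.
import numpy as np

n = 64                                    # quadrature nodes on the contour
t = 2 * np.pi * np.arange(n) / n
w = 2 * np.pi / n                         # trapezoidal weights on a periodic grid

def kernel(ti, sj):
    return np.cos(ti - sj) / (4 * np.pi)  # smooth illustrative kernel

T, S = np.meshgrid(t, t, indexing="ij")
A = np.eye(n) - w * kernel(T, S)          # (I - K_n) u = f
u = np.linalg.solve(A, np.sin(t))         # Nystrom solution at the nodes
print(u[:4])
```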
- Numerical study of intense shock waves in dusty media with a homogeneous and two-component carrier phase
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 141-154.
The article is devoted to the numerical study of shock-wave flows in inhomogeneous media (dusty gas mixtures). A two-velocity, two-temperature model is used, in which the dispersed component of the mixture has its own velocity and temperature. To describe the change in the concentration of the dispersed component, the equation of conservation of “average density” is solved. The study takes into account interphase thermal interaction and interphase momentum exchange. The mathematical model describes the carrier component of the mixture as a viscous, compressible and heat-conducting medium. The system of equations was solved using the explicit second-order MacCormack finite-difference method; to obtain a monotone numerical solution, a nonlinear correction scheme was applied to the grid function. In the shock-wave flow problem, Dirichlet boundary conditions were specified for the velocity components, and Neumann boundary conditions were specified for the other unknown functions. In the numerical calculations, in order to reveal how the dynamics of the entire mixture depends on the properties of the solid component, various parameters of the dispersed phase were considered: the volume content as well as the linear size of the dispersed inclusions. The goal of the research was to determine how the properties of the solid inclusions affect the dynamics of the carrier medium, the gas. The motion of an inhomogeneous medium in a shock tube divided into two sections was studied, with the gas pressure in one section being higher than in the other. The article simulates the propagation of a direct shock wave from the high-pressure chamber into the low-pressure chamber filled with a dusty medium, and the subsequent reflection of the shock wave from a solid surface. Analysis of the numerical calculations showed that decreasing the linear particle size of the gas suspension and increasing the physical density of the particle material lead to the formation of a more intense reflected shock wave with a higher temperature and gas density, as well as a lower propagation speed of the reflected disturbance.
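The predictor-corrector structure of the explicit MacCormack scheme mentioned above can be illustrated on the simplest possible example, 1D linear advection. The paper applies the same two-step structure to the full two-velocity, two-temperature gas-particle system and adds a nonlinear correction of the grid function for monotonicity; both are omitted in this sketch.

```python
# Sketch of the explicit MacCormack predictor-corrector step for 1D linear
# advection u_t + a u_x = 0 with periodic boundaries. The paper applies the same
# two-step structure to the full gas-particle system and adds a nonlinear
# correction of the grid function for monotonicity, omitted here.
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)      # initial step profile

for _ in range(100):
    # predictor: forward difference in space
    u_pred = u - a * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field, then averaging
    u = 0.5 * (u + u_pred - a * dt / dx * (u_pred - np.roll(u_pred, 1)))

print(f"min={u.min():.3f}, max={u.max():.3f}")     # over/undershoots show why a correction is needed
```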
- Computer simulation of the process of soil treatment by tillage tools of soil processing machines
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 607-627.
The paper analyzes methods for studying the interaction of soil media with the tillage tools of soil processing machines. Mathematical methods of numerical modeling, which make it possible to overcome the disadvantages of analytical and empirical approaches, are considered in detail. A classification and overview of the capabilities of continuous (FEM, the finite element method; CFD, computational fluid dynamics) and discrete (DEM, the discrete element method; SPH, smoothed particle hydrodynamics) numerical methods is presented. Based on the discrete element method, a mathematical model has been developed that represents the soil as a set of interacting small spherical elements. The working surfaces of the tillage tool are represented, within the finite element approximation, as a combination of many elementary triangles. The model calculates the motion of the soil elements under the action of the contact forces of the soil elements with each other and with the working surfaces of the tillage tool (elastic forces, dry and viscous friction forces). This makes it possible to assess the influence of the geometric parameters of the tillage tools, the technological parameters of the process and the soil parameters on geometric indicators of soil displacement, indicators of tool self-positioning, power loads, quality indicators of loosening and the spatial distribution of these indicators; a total of 22 indicators (or their spatial distributions) were investigated. It also makes it possible to reproduce changes in the state of the system of soil elements (the soil cultivation process) and to determine the total mechanical effect of the elements on the moving tillage tools of the implement. The capabilities of the mathematical model are demonstrated by the example of a study of soil cultivation with a disk cultivator battery. In the computer experiment, a virtual soil channel 5×1.4 m in size and a 3D model of a disk cultivator battery were used. The radius of the soil particles was taken to be 18 mm, the speed of the tillage tool was 1 m/s, and the total simulation time was 5 s. The processing depth was 10 cm at angles of attack of 10, 15, 20, 25 and 30°. The reliability of the simulation results was verified on a laboratory stand for volumetric dynamometry by testing a full-scale sample made in full accordance with the investigated 3D model. The control was carried out according to the three components of the traction resistance vector: $F_x$, $F_y$ and $F_z$. Comparison of the experimental data with the simulation data showed a discrepancy of no more than 22.2%; in all cases, the maximum discrepancy was observed at a disk battery angle of attack of 30°. The good agreement of the data for the three key force parameters confirms the reliability of the whole set of studied indicators.
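The elastic-plus-damping contact interaction between spherical soil elements that drives such a DEM model can be sketched for a single particle pair as below. The linear spring-dashpot law, the parameter values and the omission of tangential friction are simplifications for illustration, not the calibrated soil model of the paper.

```python
# Minimal DEM contact sketch: linear spring-dashpot normal force between two
# spherical soil elements. Stiffness, damping and radius are illustrative values,
# and the tangential (friction) forces of the paper's model are omitted.
import numpy as np

k_n, c_n, radius = 1.0e4, 5.0, 0.018      # normal stiffness, damping, particle radius (m)

def normal_contact_force(x1, x2, v1, v2):
    """Force acting on particle 1 due to overlap with particle 2 (zero if no contact)."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = 2.0 * radius - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                          # contact normal pointing toward particle 1
    v_rel_n = np.dot(v1 - v2, n)          # relative velocity along the normal
    return (k_n * overlap - c_n * v_rel_n) * n

f = normal_contact_force(np.zeros(3), np.array([0.03, 0.0, 0.0]),
                         np.zeros(3), np.array([-0.1, 0.0, 0.0]))
print(f)                                  # repulsive force pushing particle 1 away
```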
- Fast method for analyzing the electromagnetic field perturbation by small spherical scatterer
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1039-1050.
In this work, we consider a special approximation of the general formula for the perturbation of the electromagnetic field by a set of electrically small inhomogeneities located in the domain of interest. The problem considered in this paper arises in many applications of technical electrodynamics, radar technologies and subsurface remote sensing. In the general case, it is formulated as follows: at some point in the perturbed domain, it is necessary to determine the amplitude of the electromagnetic field. The perturbation of the electromagnetic waves is caused by a set of electrically small scatterers distributed in space; the source of the electromagnetic waves is also located in the perturbed domain. The problem is solved by introducing the far-field approximation and by formulating it in terms of the radar cross section of each scatterer. This, in turn, makes it possible to significantly speed up the calculation of the electromagnetic field perturbed by a set of identical spherical inhomogeneities with arbitrary electrophysical parameters. In this paper, we consider only the direct scattering problem, so all parameters of the scatterers are known. In this context, it may be argued that the formulation corresponds to a well-posed problem and does not require solving the integral equation in the general formula. One of the features of the proposed algorithm is the selection of a characteristic plane at the domain boundary: all observation points of the system state belong to this plane, and the set of scatterers is located inside the observation region formed by this surface. The approximation is tested by comparing its results with the solution obtained by the general perturbation-formula method for the electromagnetic field. This approach, among other things, allows one to remove a number of restrictions on the general perturbation formula for E-field analysis.
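For an electrically small dielectric sphere, the radar cross section entering such a far-field formulation has a closed Rayleigh-approximation form, sketched below together with the textbook relation between incident power density, RCS and scattered power density at an observation point. The sphere parameters are arbitrary examples, and the sketch does not reproduce the paper's full perturbation formula.

```python
# Rayleigh-approximation sketch: backscatter radar cross section of an electrically
# small dielectric sphere, sigma = 4*pi*k^4*a^6*|K|^2 with K = (eps_r-1)/(eps_r+2),
# and the standard far-field relation for the scattered power density. The sphere
# parameters are arbitrary; the paper's full perturbation formula is not reproduced.
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def rayleigh_rcs(radius_m: float, freq_hz: float, eps_r: complex) -> float:
    k = 2.0 * np.pi * freq_hz / C0               # free-space wavenumber
    K = (eps_r - 1.0) / (eps_r + 2.0)            # Clausius-Mossotti factor
    return 4.0 * np.pi * k**4 * radius_m**6 * abs(K) ** 2

def scattered_power_density(incident_w_m2: float, rcs_m2: float, r_m: float) -> float:
    """Power density re-radiated by one small scatterer at distance r (by definition of RCS)."""
    return incident_w_m2 * rcs_m2 / (4.0 * np.pi * r_m**2)

sigma = rayleigh_rcs(radius_m=0.005, freq_hz=3e9, eps_r=4.0 - 0.1j)   # ka ~ 0.3, Rayleigh regime
print(sigma, scattered_power_density(1.0, sigma, 10.0))
```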
- Tracking on the BESIII CGEM inner detector using deep learning
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1361-1381.
The reconstruction of charged particle trajectories in tracking detectors is a key problem in the analysis of experimental data for high energy and nuclear physics.
The amount of data in modern experiments is so large that classical tracking methods such as the Kalman filter cannot process them fast enough. To solve this problem, we have developed two neural network algorithms of track recognition, based on deep learning architectures, for local (track by track) and global (all tracks in an event) tracking in the GEM tracker of the BM@N experiment at JINR (Dubna). The advantage of deep neural networks is their ability to detect hidden nonlinear dependencies in data and the capability of parallel execution of the underlying linear algebra operations.
In this work we generalize these algorithms to the cylindrical GEM inner tracker of the BESIII experiment. The neural network model RDGraphNet for global track finding, based on the reverse directed graph, has been successfully adapted. After training on Monte Carlo data, testing showed encouraging results: a recall of 98% and a precision of 86% for track finding.
The local neural network model TrackNETv2 was also successfully adapted to the BESIII CGEM. Since the tracker has only three detecting layers, an additional neuro-classifier to filter out false tracks has been introduced. Preliminary tests demonstrated a recall of 99% at the first stage. After applying the neuro-classifier, the precision was 77% with a slight decrease of the recall to 94%. This result can be improved after further model optimization.
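The two-stage quality bookkeeping described above (a candidate-generation stage scored by recall, followed by a classifier that trades a little recall for much higher precision) can be illustrated with a generic sketch. The toy candidate features, the small fully connected classifier and the synthetic labels below are placeholders, not TrackNETv2, RDGraphNet or the BESIII Monte Carlo sample.

```python
# Generic illustration of the second-stage filtering idea: a small binary classifier
# separates true track candidates from fakes and is scored by precision and recall.
# The synthetic candidates and the MLP are placeholders, not TrackNETv2/RDGraphNet
# or the BESIII Monte Carlo sample.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(7)
n = 2000
# A candidate = 3 hits x (x, y, z). True candidates have three nearby, ordered hits;
# fakes are random hit combinations.
true_cands = np.cumsum(rng.normal(0.0, 0.05, size=(n // 2, 3, 3)), axis=1) + rng.normal(size=(n // 2, 1, 3))
fake_cands = rng.normal(size=(n // 2, 3, 3))
X = np.vstack([true_cands.reshape(-1, 9), fake_cands.reshape(-1, 9)])
y = np.array([1] * (n // 2) + [0] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"precision={precision_score(y_te, pred):.2f}, recall={recall_score(y_te, pred):.2f}")
```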
- Approaches for image processing in the decision support system of the center for automated recording of administrative offenses of the road traffic
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 405-415.
We suggested approaches for solving image processing tasks in the decision support system (DSS) of the Center for Automated Recording of Administrative Offenses of the Road Traffic (CARAO). The main task of this system is to assist the operator in obtaining accurate information about the vehicle registration plate and the vehicle brand/model from images produced by photo and video recording systems. We suggested an approach to vehicle registration plate recognition and brand/model classification in images based on modern neural network models. The LPRNet neural network model, supplemented by a Spatial Transformer Layer, was used to recognize the vehicle registration plate, and the ResNeXt-101-32x8d neural network model was used to classify the vehicle brand/model. We suggested an approach to constructing the training set for the registration plate recognition network based on computer vision methods and machine learning algorithms: the SIFT algorithm was used to detect and describe local features in images of the vehicle registration plate, and DBSCAN clustering was used to detect and remove outliers among these local features. The accuracy of vehicle registration plate recognition was 96% on the test set. We also suggested an approach to improving the efficiency of the ResNeXt-101-32x8d model at the additional-training and classification stages. The approach is based on a new convolutional neural network architecture with “frozen” weight coefficients of the convolutional layers, an additional convolutional layer for parallelizing the classification process, and a set of binary classifiers at the output. This approach significantly reduced the additional training time of the neural network when new vehicle brands/models had to be classified. The final accuracy of vehicle brand/model classification was 99% on the test set. The proposed approaches were tested and implemented in the DSS of the CARAO of the Republic of Tatarstan.
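The transfer-learning idea of the last stage, freezing the convolutional backbone of ResNeXt-101-32x8d and attaching independent binary one-vs-rest heads so that a new brand/model only requires training one small head, can be sketched with torchvision as below. The head layout, the class count and the omission of the extra parallelizing convolutional layer are simplifications, not the paper's exact architecture.

```python
# Hedged sketch of the frozen-backbone idea: ResNeXt-101-32x8d as a fixed feature
# extractor with a set of independent binary (one-vs-rest) heads. The head layout
# and class count are illustrative; the paper's extra convolutional layer for
# parallelizing classification is omitted.
import torch
import torch.nn as nn
from torchvision import models

num_models = 5                                     # number of brand/model classes (example)
backbone = models.resnext101_32x8d()               # in practice, load pretrained weights
for p in backbone.parameters():                    # "freeze" the convolutional weights
    p.requires_grad = False

feat_dim = backbone.fc.in_features                 # 2048-dimensional feature vector
backbone.fc = nn.Identity()                        # use the network as a feature extractor

# One small binary head per brand/model; adding a class means training one new head.
heads = nn.ModuleList([nn.Linear(feat_dim, 1) for _ in range(num_models)])

def predict(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        feats = backbone(images)
    logits = torch.cat([head(feats) for head in heads], dim=1)
    return torch.sigmoid(logits).argmax(dim=1)     # highest-scoring binary head wins

print(predict(torch.randn(2, 3, 224, 224)))
```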