-
Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya’s cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya’s method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
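The outer/inner structure described above can be sketched on a toy saddle-point problem. In this sketch, plain gradient descent stands in for Vaidya’s cutting-plane method and plain gradient ascent for Katyusha; the bilinear-quadratic objective and all parameters are illustrative assumptions, not the paper’s setting:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 50)) / 10   # coupling: y is low-dimensional, x is high-dimensional

# Toy objective: f(x, y) = y^T A x - ||x||^2 / 2 + ||y||^2 / 2 (illustrative)

def inner_max(y, x0, steps=100, lr=0.5):
    """Approximately solve max_x f(x, y) by gradient ascent
    (a stand-in for the variance-reduced Katyusha solver)."""
    x = x0.copy()
    for _ in range(steps):
        x += lr * (A.T @ y - x)         # grad_x f = A^T y - x
    return x

def outer_min(y0, iters=200, lr=0.1):
    """Minimize g(y) = max_x f(x, y) using the inexact gradient oracle
    grad_y f(x~, y) = A x~ + y (a stand-in for Vaidya's method)."""
    y, x = y0.copy(), np.zeros(A.shape[1])
    for _ in range(iters):
        x = inner_max(y, x)             # inexact oracle from the inner problem
        y -= lr * (A @ x + y)
    return y

y_star = outer_min(np.ones(3))          # should approach the minimizer y = 0
```

Warm-starting the inner solver from the previous solution, as above, is what makes the inexact oracle cheap at later outer iterations.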
-
The computational algorithm for studying internal laminar flows of a multicomponent gas with different-scale chemical processes
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1169-1187
The article presents a computational algorithm developed to study chemical processes in internal flows of a multicomponent gas under the influence of laser radiation. The mathematical model comprises the equations of gas dynamics with chemical reactions at low Mach numbers. It takes into account dissipative terms that describe the dynamics of a viscous heat-conducting medium with diffusion, chemical reactions and energy supply by laser radiation. This mathematical model is characterized by the presence of several widely different time and spatial scales. The computational algorithm is based on a splitting scheme by physical processes. Each time integration step is divided into the following blocks: solving the equations of chemical kinetics, solving the equation for the radiation intensity, solving the convection-diffusion equations, calculating the dynamic component of pressure, and calculating the correction of the velocity vector. The stiff system of chemical kinetics equations is solved using either a specialized explicit second-order scheme or a plug-in RADAU5 module. Rusanov numerical fluxes and a higher-order WENO scheme are used to compute the convective terms in the equations. The code based on this algorithm has been developed using MPI parallel computing technology. The developed code is used to calculate the pyrolysis of ethane with radical reactions. The formation of superequilibrium radical concentrations in the reactor volume is studied in detail. Numerical simulation of the reacting gas flow in a flat tube with laser radiation supply is carried out, which is in demand for the interpretation of experimental results. It is shown that laser radiation significantly increases the conversion of ethane and the yields of target products over short distances near the entrance to the reaction zone.
Reducing the effective length of the reaction zone allows new solutions in the design of reactors for converting ethane into valuable hydrocarbons. The developed algorithm and program will find application in the creation of new technologies of laser thermochemistry.
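The splitting by physical processes can be illustrated with a minimal two-block sketch; the linear-decay “chemistry” substep and the 1D periodic diffusion substep below are illustrative stand-ins for the paper’s blocks, not its actual model:

```python
import numpy as np

def step_chemistry(u, dt):
    # stiff-kinetics substep: a simple linear decay, integrated exactly
    return u * np.exp(-dt)

def step_convection_diffusion(u, dt, nu=0.1):
    # explicit diffusion substep on a periodic 1D grid (illustrative only)
    return u + dt * nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def advance(u, dt, nsteps):
    """One splitting cycle per time step: chemistry first, then transport,
    mirroring the block structure described in the abstract."""
    for _ in range(nsteps):
        u = step_chemistry(u, dt)
        u = step_convection_diffusion(u, dt)
    return u

u = advance(np.ones(16), dt=0.01, nsteps=100)   # a constant field just decays
```

The point of the splitting is that each substep can use its own, appropriately stiff or explicit, integrator.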
-
Simulation model of spline interpolation of piecewise linear trajectory for CNC machine tools
Computer Research and Modeling, 2025, v. 17, no. 2, pp. 225-242
In traditional CNC systems, each segment of a piecewise linear trajectory is described by a separate block of the control program. In this case, a trapezoidal velocity profile is formed, and the stitching of individual sections is carried out at zero values of speed and acceleration. Increased productivity is associated with continuous processing, which in modern CNC systems is achieved through spline interpolation. For a piecewise linear trajectory, which is basic for most products, a first-degree spline is the most appropriate. However, even in this simplest case of spline interpolation, the closed nature of the basic software from leading manufacturers of CNC systems limits the capabilities of developers as well as users. Taking this into account, the purpose of this work is a detailed study of the structural organization and operating algorithms of a simulation model of piecewise linear spline interpolation. Limits on jerk and acceleration are considered as the main measure to reduce dynamic processing errors. Special attention is paid to the S-shape of the speed curve in the acceleration and deceleration sections. This is due to the conditions for implementing spline interpolation, one of which is continuity of motion, ensured by the equality of the first and second derivatives when joining sections of the trajectory. This statement corresponds to the principles of combined control systems for servo electric drives, which provide partial invariance to control and disturbing effects. The reference model of a spline interpolator is adopted as the basis of the structural organization. The issues of processing scaling, based on reducing the vector speed relative to the base value, are also considered; this increases the accuracy of movements.
It is shown that the range of speed changes can exceed ten thousand and is limited only by the speed-control capabilities of the actuators.
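A jerk- and acceleration-limited S-shaped speed ramp of the kind discussed above can be sketched as follows; the parameter values are illustrative, and a full interpolator would also handle the constant-speed and deceleration sections:

```python
import numpy as np

def s_curve_velocity(v_max, a_max, j_max, dt=1e-3):
    """Jerk-limited (S-shaped) acceleration ramp from rest to v_max:
    jerk +j_max, then 0, then -j_max, so acceleration rises to a_max,
    holds, and returns to zero while the speed curve stays smooth."""
    t_j = a_max / j_max                  # duration of each jerk phase
    t_a = v_max / a_max - t_j            # constant-acceleration phase
    assert t_a >= 0, "v_max too small for acceleration to reach a_max"
    t_total = 2 * t_j + t_a
    n = int(round(t_total / dt))
    ts = np.linspace(0.0, t_total, n + 1)
    v, a = [0.0], 0.0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        if t0 < t_j:
            j = j_max                    # ramp acceleration up
        elif t0 < t_j + t_a:
            j = 0.0                      # hold a_max
        else:
            j = -j_max                   # ramp acceleration down
        a += j * (t1 - t0)
        v.append(v[-1] + a * (t1 - t0))
    return ts, np.array(v)

ts, v = s_curve_velocity(v_max=1.0, a_max=2.0, j_max=10.0)
```

The resulting speed curve is monotone and reaches v_max with both acceleration and jerk within their limits, which is what keeps the first and second derivatives continuous at section joints.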
Keywords: piecewise linear trajectory, jerk, S-shaped speed curve, spline, processing scale, servo drive.
-
Finding equilibrium in two-stage traffic assignment model
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 365-379
The authors describe a two-stage traffic assignment model. It consists of two blocks. The first block is a model for calculating the correspondence (demand) matrix, whereas the second block is a traffic assignment model. The first model calculates a correspondence matrix, which characterizes the required volumes of movement from one area to another, from a matrix of transport costs, here taken as travel times. To solve this problem, the authors propose to use one of the most popular methods of calculating the correspondence matrix in urban studies: the entropy model. The second model describes exactly how the travel demand specified by the correspondence matrix is distributed along the possible paths. Knowing how the flows are distributed along the paths, it is possible to calculate the cost matrix. Equilibrium in the two-stage model is a fixed point in the sequence of these two models. In practice, the problem of finding a fixed point can be solved by the fixed-point iteration method. Unfortunately, the convergence of this method and estimates of its convergence rate have not yet been studied thoroughly. In addition, the numerical implementation of the algorithm runs into many problems. In particular, if the starting point is poorly chosen, situations may arise where the algorithm requires extremely large numbers to be computed and exceeds the available memory even on the most modern computers. Therefore, the article proposes a method for reducing the problem of finding the equilibrium to a convex non-smooth optimization problem, together with a numerical method for solving the resulting optimization problem. Numerical experiments were carried out for both methods. The authors used data for Vladivostok (for this city, information from various sources was processed and collected into a new dataset) and two smaller cities in the USA.
Convergence could not be achieved with the fixed-point iteration method, whereas the second method demonstrated a convergence rate of $k^{-1.67}$ on the same dataset.
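The entropy model of the first block can be sketched as iterative proportional balancing of an exponential cost kernel, a standard construction; the cost matrix and trip totals below are illustrative:

```python
import numpy as np

def entropy_correspondence(T, origins, dests, iters=200):
    """Correspondence matrix d_ij proportional to exp(-T_ij), with
    prescribed row sums (departures) and column sums (arrivals),
    found by Sinkhorn-style iterative balancing, as in the entropy model."""
    K = np.exp(-T)
    u = np.ones(len(origins))
    for _ in range(iters):
        v = dests / (K.T @ u)            # rescale columns to match arrivals
        u = origins / (K @ v)            # rescale rows to match departures
    return u[:, None] * K * v[None, :]

T = np.array([[1.0, 2.0],                # illustrative travel-time costs
              [2.0, 1.0]])
origins = np.array([10.0, 20.0])
dests = np.array([15.0, 15.0])
D = entropy_correspondence(T, origins, dests)
```

Feeding the cost matrix produced by the assignment block back into this routine is exactly the fixed-point iteration whose convergence the article studies.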
-
Homogenized model of two-phase capillary-nonequilibrium flows in a medium with double porosity
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 567-580
A mathematical model of two-phase capillary-nonequilibrium isothermal flows of incompressible phases in a double porosity medium is constructed. The double porosity medium considered is a composition of two porous media with contrasting capillary properties (absolute permeability, capillary pressure). One of the constituent media has high permeability and is conductive; the second is characterized by low permeability and forms a disconnected system of matrix blocks. A feature of the model is that it takes into account the influence of capillary nonequilibrium on mass transfer between the subsystems of double porosity, while the nonequilibrium properties of two-phase flow in the constituent media are described in a linear approximation within the Hassanizadeh model. Homogenization by the method of formal asymptotic expansions leads to a system of partial differential equations whose coefficients depend on internal variables determined from the solution of cell problems. Numerical solution of the cell problems is computationally expensive. Therefore, a thermodynamically consistent kinetic equation is formulated for the internal parameter characterizing the phase distribution between the subsystems of double porosity. Dynamic relative phase permeabilities and capillary pressures in drainage and imbibition processes are constructed. It is shown that the capillary nonequilibrium of flows in the constituent subsystems has a strong influence on these curves. Thus, the analysis and modeling of this factor is important in transfer problems in systems with double porosity.
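For reference, in the linear Hassanizadeh approximation mentioned above, the dynamic capillary pressure deviates from its equilibrium value in proportion to the rate of saturation change; a standard form (with generic notation, not necessarily the paper's) is

```latex
p_c^{\mathrm{dyn}} = p_c^{\mathrm{eq}}(S_w) - \tau\,\frac{\partial S_w}{\partial t}, \qquad \tau \ge 0,
```

where $S_w$ is the wetting-phase saturation and $\tau$ is the nonequilibrium (relaxation) coefficient.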
-
The use of GIS INTEGRO in oil and gas prospecting tasks
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444
GIS INTEGRO is a geo-information software system forming the basis for the integrated interpretation of geophysical data in studying the deep structure of the Earth. GIS INTEGRO combines a variety of computational and analytical applications for solving geological and geophysical problems. It includes interfaces for converting between data representations (raster, vector, regular and irregular observation networks), a map-projection conversion unit, and application blocks, including a block for integrated data analysis and for solving prognostic and diagnostic tasks.
The methodological approach is based on the integration and joint analysis of geophysical data along regional profiles, geophysical potential fields, and additional geological information on the study area. Analytical support includes packages for transformations, filtering, statistical processing, calculations, lineament detection, solving direct and inverse problems, and integration of geographic information.
The technology and its software and analytical support were tested in tectonic zoning problems at scales of 1:200 000 and 1:1 000 000 in Yakutia, Kazakhstan, and the Rostov region, in studying the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.
The article describes two possible approaches to parallel computations for processing 2D or 3D grids in geophysical research. As an example, an implementation in the GRID environment of the application software ZondGeoStat (statistical sensing) is presented, which creates a 3D grid model from 2D grid data. The experience has demonstrated the high efficiency of the GRID environment for computations in geophysical research.
-
Dynamical theory of information as a basis for a natural-constructive approach to modeling the cognitive process
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447
The main statements and inferences of the Dynamic Theory of Information (DTI) are considered. It is shown that DTI makes it possible to distinguish two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the natural-constructive approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the explanatory gap between the “Brain” and the “Mind”, i.e., the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One subsystem is responsible for processing new information, learning, and creativity, i.e., for the generation of information. The other subsystem is responsible for processing already existing information, i.e., for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images.
It is shown that symbols represent subjective (conventional) information created by the system itself and providing its individuality. The highest hierarchy levels, containing the symbols of abstract concepts, provide the possibility to interpret the concepts of “consciousness”, “sub-consciousness” and “intuition”, referring to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge based on the “Brain”.
-
Computational modeling of the thermal and physical processes in the high-temperature gas-cooled reactor
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 895-906
The development of a high-temperature gas-cooled reactor (HTGR), constituting a part of a nuclear power-and-process station and intended for large-scale hydrogen production, is now in progress in the Russian Federation. One of the key objectives in the development of the high-temperature gas-cooled reactor is the computational justification of the accepted design.
The article presents the procedure for the computational analysis of the thermal and physical characteristics of the high-temperature gas-cooled reactor. The procedure is based on the use of state-of-the-art codes for personal computers (PCs).
The thermal and physical analysis of the reactor as a whole, and of the core in particular, was carried out in three stages. The first stage justifies the neutron-physical characteristics of the block-type core during burn-up using the MCU-HTR code, which is based on the Monte Carlo method. The second and third stages study the coolant flow and the temperature condition of the reactor and the core in 3D, with the required degree of detail, using the FlowVision and ANSYS codes.
For the purpose of these analytical studies, computational models of the reactor flow path and the fuel assembly column were developed.
Based on the results of the computational modeling, the design of the support columns and the neutron-physical characteristics of the fuel assembly were optimized. This resulted in a reduction of the total hydraulic resistance of the reactor and a decrease in the maximum temperature of the fuel elements.
The dependency of the maximum fuel temperature on the value of the power peaking factors determined by the arrangement of the absorber rods and of the compacts of burnable absorber in the fuel assembly is demonstrated.
-
Advanced neural network models for UAV-based image analysis in remote pathology monitoring of coniferous forests
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 641-663
The key problems of remote forest pathology monitoring for coniferous forests affected by insect pests have been analyzed. It has been demonstrated that addressing these tasks requires the use of multiclass classification results for coniferous trees in high- and ultra-high-resolution images, which are promptly obtained through monitoring via satellites or unmanned aerial vehicles (UAVs). An analytical review of modern models and methods for multiclass classification of coniferous forest images was conducted, leading to the development of three fully convolutional neural network models: Mo-U-Net, At-Mo-U-Net, and Res-Mo-U-Net, all based on the classical U-Net architecture. Additionally, the Segformer transformer model was modified to suit the task. For RGB images of fir trees Abies sibirica affected by the four-eyed bark beetle Polygraphus proximus, captured using a UAV-mounted camera, two datasets were created: the first dataset contains image fragments and their corresponding reference segmentation masks sized 256 × 256 × 3 pixels, while the second dataset contains fragments sized 480 × 480 × 3 pixels. Comprehensive studies were conducted on each trained neural network model to evaluate both classification accuracy for assessing the degree of damage (health status) of Abies sibirica trees and computation speed using test datasets from each set. The results revealed that for fragments sized 256 × 256 × 3 pixels, the At-Mo-U-Net model with an attention mechanism is preferred alongside the Modified Segformer model. For fragments sized 480 × 480 × 3 pixels, the Res-Mo-U-Net hybrid model with residual blocks demonstrated superior performance. Based on classification accuracy and computation speed results for each developed model, it was concluded that, for production-scale multiclass classification of affected fir trees, the Res-Mo-U-Net model is the most suitable choice.
This model strikes a balance between high classification accuracy and fast computation speed, meeting conflicting requirements effectively.
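Classification accuracy of segmentation models of this kind is commonly summarized by per-class intersection-over-union on test tiles; the following is a generic metric sketch, not the paper’s exact evaluation code:

```python
import numpy as np

def per_class_iou(pred, target, n_classes):
    """Per-class intersection-over-union for multiclass segmentation masks."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else np.nan)
    return np.array(ious)

# tiny illustrative masks (one class label per pixel)
pred = np.array([[0, 1],
                 [1, 2]])
target = np.array([[0, 1],
                   [2, 2]])
iou = per_class_iou(pred, target, 3)
```

Averaging the per-class values (ignoring absent classes) gives the mean IoU usually reported when ranking models such as the U-Net variants above.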
-
Numerical studies of the parameters of the perturbed region formed in the lower ionosphere under the action of a directed radio-wave flux from a terrestrial source
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 679-708
The paper presents a physico-mathematical model of the perturbed region formed in the lower D-layer of the ionosphere under the action of a directed radio emission flux from a terrestrial facility in the megahertz frequency range, obtained as a result of comprehensive theoretical studies. The model is based on the consideration of a wide range of kinetic processes, taking into account their nonequilibrium nature, in the two-temperature approximation for describing the transformation of the radio beam energy absorbed by electrons. The input data correspond to the radio emission achieved by the most powerful radio-heating facilities. Their basic characteristics and operating principles, and the features of the altitude distribution of the absorbed electromagnetic energy of the radio beam, are briefly described. The paper demonstrates the decisive role of the D-layer of the ionosphere in the absorption of the radio beam energy. On the basis of theoretical analysis, analytical expressions are obtained for the contribution of various inelastic processes to the distribution of the absorbed energy, which makes it possible to correctly describe the contribution of each of the processes considered. The model includes more than 60 components, whose concentration changes are described by about 160 reactions. All the reactions are divided into five groups according to their physical content: an ionization-chemical block, a block for the excitation of metastable electronic states, a cluster block, a block for the excitation of vibrational states, and a block of impurities. The blocks are interrelated and can be calculated both jointly and separately.
It is shown that the behavior of the parameters of the perturbed region differs significantly between daytime and nighttime conditions at the same radio flux density: under day conditions, the maximum electron concentration and temperature are at an altitude of ~45–55 km; under night conditions, at ~80 km, where the temperature of heavy particles rapidly increases, which leads to the occurrence of a gas-dynamic flow. Therefore, a special numerical algorithm is developed to solve two basic problems: kinetic and gas-dynamic. Based on the altitude and temporal behavior of concentrations and temperatures, the algorithm makes it possible to determine the ionization and emission of the ionosphere in the visible and infrared spectral ranges, and thus to evaluate the influence of the perturbed region on radio engineering and optoelectronic devices used in space technology.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index