-
Extraction of characters and events from narratives
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1593-1600
Event and character extraction from narratives is a fundamental task in text analysis. The applications of event extraction range from document summarization to the analysis of medical notes. We identify events using a framework named “four W” (Who, What, When, Where) to capture all the essential components: the actors, actions, times, and places. In this paper, we explore two prominent techniques for event extraction: statistical parsing of syntactic trees and semantic role labeling. While these techniques have been investigated by different researchers in isolation, we directly compare the performance of the two approaches on our custom dataset, which we have annotated.
Our analysis shows that statistical parsing of syntactic trees outperforms semantic role labeling in event and character extraction, especially in identifying specific details. Nevertheless, semantic role labeling demonstrates good performance in correct actor identification. We evaluate the effectiveness of both approaches by comparing metrics such as precision, recall, and F1-score, thus demonstrating their respective advantages and limitations.
Moreover, as part of our work, we propose several future applications of event extraction techniques that we plan to investigate. The areas where we want to apply these techniques include code analysis and source code authorship attribution. We consider using event extraction to retrieve key code elements, such as variable assignments and function calls, which can further help us analyze the behavior of programs and identify a project’s contributors. Our work provides novel insights into the performance and efficiency of statistical parsing and semantic role labeling, offering researchers new directions for applying these techniques.
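As an illustration of the four-W framing, event slots can be read off a dependency parse. The tiny parse below is hand-built (labels follow common Universal Dependencies conventions) and is only a toy sketch, not the annotation scheme or pipeline of the paper:

```python
# Illustrative "four W" event extraction from a hand-built dependency parse.
# Token fields use common dependency labels (nsubj, ROOT, obl:tmod, obl:loc);
# a real pipeline would obtain them from a statistical parser.

def extract_event(tokens):
    """Collect Who/What/When/Where slots from dependency-labeled tokens."""
    event = {"who": None, "what": None, "when": None, "where": None}
    for tok in tokens:
        if tok["dep"] == "nsubj":
            event["who"] = tok["text"]
        elif tok["dep"] == "ROOT":
            event["what"] = tok["text"]
        elif tok["dep"] == "obl:tmod":
            event["when"] = tok["text"]
        elif tok["dep"] == "obl:loc":
            event["where"] = tok["text"]
    return event

# "Alice travelled yesterday to Moscow" (parse written out by hand)
parse = [
    {"text": "Alice", "dep": "nsubj"},
    {"text": "travelled", "dep": "ROOT"},
    {"text": "yesterday", "dep": "obl:tmod"},
    {"text": "Moscow", "dep": "obl:loc"},
]
print(extract_event(parse))
```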
-
Neuromorphic processor with hardware learning based on a convolutional neural network for audio spectrogram analysis
Computer Research and Modeling, 2026, v. 18, no. 1, pp. 81-99
This paper proposes an architectural solution for organizing a convolutional neural network (CNN) oriented towards hardware implementation on edge devices under limited resources. To this end, an approach to compressing spectrograms to a given size (28 × 28) is proposed using discretization, mono conversion, windowed Fourier transform, and two-dimensional interpolation. A balanced convolution procedure is developed based on compact convolutional filters whose size provides the balance between computational complexity and accuracy required for edge devices. An algorithm is proposed that enables convolution operations and the calculation of the error-function gradient in the convolutional layer in a single cycle, increasing performance in both inference and training modes of the CNN. The tradeoff between network trainability and its resistance to overfitting is optimized by applying the Dropout regularization method with a dropout coefficient of 0.5 for the fully connected layer.
The effectiveness of the proposed solution was demonstrated on the recognition of audio spectrograms of car and airplane engine sounds. The CNN was trained on a balanced dataset of 7160 audio recordings. The trained network showed high recognition accuracy (95%), low loss values (< 0.2), and balanced precision, recall, and F-measure, confirming the effectiveness of the developed CNN model.
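The compression pipeline (discretization, mono conversion, windowed Fourier transform, two-dimensional interpolation to 28 × 28) can be sketched roughly as follows; the sample rate, test tone, and STFT parameters here are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

fs = 8000                            # sample rate after discretization (assumed)
t = np.arange(0, 1.0, 1 / fs)
mono = np.sin(2 * np.pi * 440 * t)   # mono-converted test tone

# Windowed Fourier transform -> magnitude spectrogram
f, tt, S = spectrogram(mono, fs=fs, nperseg=256, noverlap=128)

# Two-dimensional interpolation down to the fixed 28x28 CNN input size
S28 = zoom(S, (28 / S.shape[0], 28 / S.shape[1]), order=1)
print(S28.shape)  # (28, 28)
```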
-
A.S. Komarov’s publications about cellular automata modelling of the population-ontogenetic development in plants: a review
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 285-295
The possibilities of cellular automata simulation applied to herbs and dwarf shrubs are described. The basic principles of the discrete description of plant ontogenesis on which the mathematical modeling is based are presented. The review discusses the main research results obtained with models that reveal the patterns of functioning of populations and communities. The CAMPUS model and the results of a computer experiment studying the growth of two clones of lingonberry with different shoot geometries are described. The paper is dedicated to the works of the founder of this research direction, Prof. A. S. Komarov. A list of his major publications on this subject is given.
Keywords: computer models, individual-based approach.
-
Layered Bénard–Marangoni convection during heat transfer according to the Newton’s law of cooling
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 927-940
The paper considers mathematical modeling of layered Bénard–Marangoni convection of a viscous incompressible fluid. The fluid moves in an infinitely extended layer. The Oberbeck–Boussinesq system describing layered Bénard–Marangoni convection is overdetermined, since the vertical velocity is identically zero: we have a system of five equations for two components of the velocity vector, temperature, and pressure (three equations of momentum conservation, the incompressibility equation, and the heat equation). A class of exact solutions is proposed for the solvability of the Oberbeck–Boussinesq system. The structure of the proposed solution is such that the incompressibility equation is satisfied identically, which makes it possible to eliminate the “extra” equation. The emphasis is on the study of heat exchange on the free boundary of the layer, which is considered rigid. In the description of thermocapillary convective motion, heat exchange is specified according to Newton’s law of cooling. This heat-transfer law leads to an initial-boundary value problem of the third kind. It is shown that within the presented class of exact solutions of the Oberbeck–Boussinesq equations, the overdetermined initial-boundary value problem reduces to a Sturm–Liouville problem; consequently, the hydrodynamic fields are expressed using trigonometric functions (the Fourier basis). A transcendental equation is obtained for the eigenvalues of the problem and solved numerically. A numerical analysis of the solutions of the system of evolutionary and gradient equations describing the fluid flow is carried out, and the hydrodynamic fields are analyzed by computational experiment. The study of the boundary value problem shows the existence of counterflows in the fluid layer.
The existence of counterflows is equivalent to the presence of stagnation points in the fluid, which testifies to the existence of a local extremum of the kinetic energy of the fluid. It has been established that each velocity component can have no more than one zero value; thus, the fluid flow separates into two zones in which the tangential stresses have different signs. Moreover, there is a fluid layer thickness at which the tangential stresses on the lower boundary of the layer vanish. This physical effect is possible only for Newtonian fluids. The temperature and pressure fields have the same properties as the velocities. All nonstationary solutions approach the steady state in this case.
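For instance, a third-kind (Robin) boundary condition typically yields a transcendental eigenvalue equation such as μ tan μ = Bi, whose roots can be bracketed between the singularities of the tangent and found numerically. The equation and the Biot number below are a generic illustration, not the paper's actual relation:

```python
import numpy as np
from scipy.optimize import brentq

Bi = 1.0  # Biot number in the Newton cooling condition (assumed value)

# Model transcendental equation mu * tan(mu) = Bi, a typical form arising
# from a Sturm-Liouville problem with a third-kind boundary condition.
def f(mu):
    return mu * np.tan(mu) - Bi

# Each interval (k*pi, k*pi + pi/2) contains exactly one root:
# f < 0 just right of k*pi, and f -> +inf approaching k*pi + pi/2.
roots = [brentq(f, k * np.pi + 1e-9, k * np.pi + np.pi / 2 - 1e-9)
         for k in range(4)]
print(roots)  # first eigenvalue for Bi = 1 is about 0.8603
```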
-
Computational investigation of aerodynamic performance of the generic flying-wing aircraft model using FlowVision computational code
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 67-74
The modern approach to modernizing experimental techniques involves designing mathematical models of the wind tunnel, also referred to as Electronic or Digital Wind-Tunnels, which are meant to supplement experimental data with computational analysis. Using Electronic Wind-Tunnels is supposed to provide accurate information on the aerodynamic performance of an aircraft based on a set of experimental data, to reconcile data from different test facilities, and to compare computational results for flight conditions with data obtained in the presence of the support system and test section.
Completing this task requires preliminary research involving extensive wind-tunnel testing as well as RANS-based computational studies using supercomputer technologies. At different stages of the computational investigation, one may have to model not only the aircraft itself but also the wind-tunnel test section and the model support system. Modelling such complex geometries inevitably results in quite complex vortical and separated flows that must be simulated. Another problem is that boundary layer transition is often present in wind-tunnel testing due to the small model scales and therefore low Reynolds numbers.
The current article covers the first stage of the Electronic Wind-Tunnel design program: a computational investigation of the aerodynamic characteristics of a generic flying-wing UAV model previously tested in the TsAGI T-102 wind tunnel. Since this stage is preliminary, the model was simulated without taking the test-section and support-system geometry into account. The boundary layer was considered fully turbulent.
The FlowVision computational code was used for this research because of its automatic grid generation and the stability of its solver when simulating complex flows. A two-equation k–ε turbulence model was used with special wall functions designed to properly capture flow separation. The computed lift and drag coefficients for different angles of attack were compared with the experimental data.
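Comparison with experiment relies on nondimensionalized forces. A minimal sketch of that step follows, with assumed freestream conditions and reference area (hypothetical numbers, not the T-102 test values):

```python
# Nondimensionalize computed aerodynamic forces into lift/drag coefficients.
# All reference values below are hypothetical, for illustration only.
rho = 1.225      # air density, kg/m^3
V = 50.0         # freestream velocity, m/s
S_ref = 0.5      # model reference area, m^2

q = 0.5 * rho * V**2          # dynamic pressure, Pa

def aero_coefficients(lift, drag):
    """Convert dimensional forces (N) to CL and CD for comparison plots."""
    return lift / (q * S_ref), drag / (q * S_ref)

CL, CD = aero_coefficients(lift=459.375, drag=30.625)
print(CL, CD)  # 0.6 0.04
```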
-
Optimal control of the motion in an ideal fluid of a screw-shaped body with internal rotors
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 741-759
In this paper we consider the controlled motion of a three-bladed helical body in an ideal fluid, executed by rotating three internal rotors. We pose the problem of selecting control actions that ensure the motion of the body near a predetermined trajectory. To determine controls that guarantee motion near the given curve, we propose methods based on hybrid genetic algorithms (genetic algorithms with real encoding and with additional learning of the leader of the population by a gradient method) and artificial neural networks. The correctness of the proposed numerical methods is assessed using previously obtained differential equations that define the law of change of the control actions for the predetermined trajectory.
In the approach based on hybrid genetic algorithms, the initial problem of minimizing an integral functional reduces to minimizing a function of many variables. The given time interval is broken into small elements, on each of which the control actions are approximated by Lagrange polynomials of orders 2 and 3. When appropriately tuned, the hybrid genetic algorithms reproduce a solution close to the exact one. However, computing 1 second of the physical process costs about 300 seconds of processor time.
To speed up the calculation of control actions, we propose an algorithm based on artificial neural networks. The neural network takes the components of the required displacement vector as its input signal and returns as its output the node values of the Lagrange polynomials that approximately describe the control actions. The network is trained by the well-known back-propagation method, with the training sample generated using the approach based on hybrid genetic algorithms. Computing 1 second of the physical process with the neural network requires about 0.004 seconds of processor time, that is, six orders of magnitude faster than the hybrid genetic algorithm. Although the control calculated by the neural network differs from the exact control, it still ensures that the predetermined trajectory is followed closely.
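The control parametrization can be sketched with plain Lagrange interpolation over node values on one time element; the node values below are illustrative, not taken from the paper:

```python
def lagrange_eval(ts, us, t):
    """Evaluate the Lagrange interpolation polynomial with nodes (ts, us) at t."""
    total = 0.0
    for i, (ti, ui) in enumerate(zip(ts, us)):
        basis = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += ui * basis
    return total

# Order-2 approximation of a control action on one small time element:
# three node values (illustrative numbers).
ts = [0.0, 0.5, 1.0]
us = [0.0, 0.8, 0.5]
print(lagrange_eval(ts, us, 0.25))  # 0.5375
```

By construction the polynomial passes exactly through every node, so the optimizer only has to search over the node values `us`.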
-
Optimal fishing and evolution of fish migration routes
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 879-893
A new discrete ecological-evolutionary mathematical model is presented, implementing search mechanisms for evolutionarily stable migration routes of fish populations. The proposed adaptive constructions have small dimension and are therefore fast, which allows long-term calculations in acceptable machine time. Both geometric approaches of nonlinear analysis and computer “asymptotic” methods were used in the stability study. The migration dynamics of the fish population is described by a Markov matrix, which can change during evolution. “Basis” matrices are selected in the family of Markov matrices (of fixed dimension) and used to generate the migration routes of mutants. A promising direction of the evolution of the spatial behavior of fish is revealed for a given fishery and food supply, as a result of competition between the initial population and the mutants. The model was applied to the problem of optimal long-term catch when the reservoir is divided into two parts, each with its own owner. Dynamic programming based on the Bellman function is used to solve the optimization problems. A paradoxical “luring” strategy was discovered, in which one of the participants in the fishery temporarily reduces the catch in its water area. The migrating fish then spend more time in this area (given an equal food supply). This route becomes evolutionarily fixed and does not change even after fishing resumes in the area. The second participant can restore the status quo by applying “luring” to its part of the water area, so an endless sequence of “luring” moves arises as a kind of “giveaway” game. A new effective concept is introduced: the internal price of the fish population, which depends on the zone of the reservoir.
In fact, these prices are the partial derivatives of the Bellman function and can be used as a tax on caught fish. In this case, the long-term fishing problem reduces to a one-year optimization problem.
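A minimal sketch of the Bellman recursion for a heavily simplified, single-zone, discretized version of the catch problem (all numbers illustrative, not the paper's model):

```python
import numpy as np

# Finite-horizon value iteration for optimal annual catch (toy model).
# State: discretized stock level; action: catch fraction; the remaining
# stock regrows by a fixed factor, capped at the maximum grid level.
levels = np.linspace(0.0, 1.0, 51)        # stock grid
actions = np.linspace(0.0, 0.9, 10)       # admissible catch fractions
growth = 1.3                              # annual reproduction factor
T = 20                                    # planning horizon, years

V = np.zeros(len(levels))                 # terminal value
for _ in range(T):
    V_new = np.empty_like(V)
    for i, x in enumerate(levels):
        best = -np.inf
        for u in actions:
            catch = u * x
            x_next = min(growth * (x - catch), 1.0)
            j = int(np.argmin(np.abs(levels - x_next)))  # nearest grid point
            best = max(best, catch + V[j])               # Bellman update
        V_new[i] = best
    V = V_new

print(V[-1])   # value of a full stock over the horizon
```

The internal price mentioned above corresponds to the sensitivity of `V` to the stock level, i.e. a finite difference of `V` along the grid.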
-
Hypergraph approach in the decomposition of complex technical systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1007-1022
The article considers a mathematical model of the decomposition of a complex product into assembly units. This is an important engineering problem that affects the organization of discrete production and its operational management. A review of modern approaches to the mathematical modeling and computer-aided design of decompositions is given. In these approaches, graphs, networks, matrices, etc. serve as mathematical models of the structures of technical systems, describing the mechanical structure as a binary relation on the set of system elements. The geometric coordination and integrity of machines and mechanical devices during manufacturing are achieved by means of basing. In general, basing can be performed on several elements simultaneously; it therefore represents a relation of variable arity, which cannot be correctly described in terms of binary mathematical structures. A new hypergraph model of the mechanical structure of a technical system is described, which provides an adequate formalization of assembly operations and processes. Assembly operations that are carried out by two working bodies and consist in realizing mechanical connections are considered. Such operations, called coherent and sequential, are the prevailing type in modern industrial practice. It is shown that the mathematical description of such an operation is the normal contraction of an edge of the hypergraph, and a sequence of contractions transforming the hypergraph into a point is a mathematical model of the assembly process. Two important theorems on the properties of contractible hypergraphs and their subgraphs, proved by the author, are presented. The concept of $s$-hypergraphs is introduced; $s$-hypergraphs are correct mathematical models of the mechanical structures of any assembled technical systems.
The decomposition of a product into assembly units is defined as the cutting of an $s$-hypergraph into $s$-subgraphs. The cutting problem is described in terms of discrete mathematical programming. Mathematical models of structural, topological, and technological constraints are obtained. Objective functions are proposed that formalize the optimal choice of design solutions in various situations. The developed mathematical model of product decomposition is flexible and open, allowing extensions that take into account the characteristics of the product and its production.
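Edge contraction can be sketched on a toy edge-list representation of a hypergraph; this is only an illustration of the idea, and the paper's normal contraction is defined more carefully:

```python
# A hypergraph as a list of edges; each edge is a frozenset of vertices.
# Contracting an edge replaces all of its vertices by one merged vertex.

def contract(edges, edge, merged):
    """Replace every vertex of `edge` by `merged`; drop collapsed edges."""
    new_edges = []
    for e in edges:
        e2 = frozenset((e - edge) | {merged}) if e & edge else e
        if len(e2) > 1:          # an edge reduced to one vertex disappears
            new_edges.append(e2)
    return new_edges

# Assembly of a three-part unit: edges model mechanical connections.
H = [frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"a", "c"})]
H1 = contract(H, frozenset({"a", "b"}), "ab")
print(H1)  # the contracted edge vanishes; the others now meet vertex "ab"
```

Contracting the remaining edge `{ab, c}` then reduces the hypergraph to a single point, mirroring the assembly-process model above.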
-
Distributed computing model for the organization of a software environment that provides management of intelligent building automation systems
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 557-570
This article describes the authors’ model for constructing a distributed computer network and performing distributed computations within a software-information environment that manages the information, automation, and engineering systems of intelligent buildings. The presented model is based on the functional approach, with nondeterministic computations and various side effects encapsulated in monadic computations, which makes all the advantages of functional programming available for selecting and executing scenarios that manage various aspects of a building’s operation. The described model can also be combined with the intellectualization of technical and sociotechnical systems to increase the autonomy of decision-making about the parameters of a building’s internal environment, and to implement adaptive control methods, in particular various artificial intelligence techniques. An important part of the model is a directed acyclic graph, an extension of the blockchain that can drastically reduce transaction costs while still supporting smart contracts. According to the authors, this will make it possible to realize new technologies and methods: a distributed ledger based on a directed acyclic graph, edge computing, and a hybrid scheme for building artificial intelligence systems, which together can be used to increase the efficiency of intelligent building management.
The relevance of the presented model stems from the need to move building life-cycle management into the Industry 4.0 paradigm and to apply artificial intelligence methods to this management, with the widespread introduction of autonomous artificial cognitive agents. The novelty of the model follows from jointly considering distributed computations within the functional approach and a hybrid paradigm for constructing artificial intelligent agents for intelligent building management. The work is theoretical. The article will interest scientists and engineers working on the automation of technological and industrial processes, both within intelligent buildings and in the management of complex technical and sociotechnical systems as a whole.
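The monadic encapsulation idea can be sketched with a minimal Result type; the sensor and actuator calls are hypothetical stubs, not the authors' implementation:

```python
# Minimal Result monad sketch: nondeterministic calls and side effects are
# wrapped so that a building-control scenario composes as a pure pipeline.

class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def bind(self, fn):
        """Apply fn only on success; short-circuit on failure."""
        if self.error is not None:
            return self
        try:
            return fn(self.value)
        except Exception as exc:
            return Result(error=str(exc))

def read_sensor(_):
    return Result(value=21.5)               # stub: room temperature reading

def decide(temp):
    return Result(value="heat_on" if temp < 22.0 else "idle")

def actuate(command):
    return Result(value=f"sent:{command}")  # stub: actuator call

outcome = Result(value=None).bind(read_sensor).bind(decide).bind(actuate)
print(outcome.value)  # sent:heat_on
```

Any exception raised mid-pipeline is captured in `error`, so failures propagate without partial side effects leaking into subsequent steps.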
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is a solution of the original problem with the desired precision. This is one of the main issues in modern machine learning and optimization. In the last decade, significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. When this condition is not met, we propose regularizing the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function; when $\gamma = 1$, this is the sharp minimum condition of convex problems. It was shown that in the case of a sharp minimum, the sample size is almost independent of the desired accuracy of the solution of the original problem.
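The offline (SAA) idea can be illustrated in one dimension: for the strongly convex problem of minimizing E[(x − ξ)²], the empirical risk minimizer is the sample mean, and its error shrinks as the sample grows. This toy Euclidean example is only an illustration of the online/offline distinction, not the paper's $p$-norm setting:

```python
import numpy as np

# Sample Average Approximation sketch: minimize E[(x - xi)^2], xi ~ N(mu, 1).
# The empirical risk (1/n) * sum (x - xi)^2 is minimized by the sample mean.
rng = np.random.default_rng(0)
mu = 3.0   # true minimizer of the population risk

def saa_solution(n):
    xi = rng.normal(mu, 1.0, size=n)
    return xi.mean()          # argmin of the empirical risk

small, large = saa_solution(10), saa_solution(100_000)
print(abs(small - mu), abs(large - mu))  # error shrinks roughly as 1/sqrt(n)
```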
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"