-
Iterative diffusion importance: advancing edge criticality evaluation in complex networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 783-797
This paper addresses the identification and ranking of critical edges in complex networks, a problem within the modern research direction of network science. Diffusion importance is one of the acknowledged methods for identifying the significant connections in a graph that are critical to retaining structural integrity. In the present work, we develop the Iterative Diffusion Importance algorithm, which is based on re-estimating the critical topological features at each step of the graph deconstruction. The Iterative Diffusion Importance has been compared with two well-known benchmark algorithms: diffusion importance and degree product. As benchmarks, we tested the Iterative Diffusion Importance on three standard networks that are often used for algorithm efficiency evaluation and differ in size and density: Zachary's Karate Club, the American Football Network, and the Dolphins Network. We also propose a new benchmark network representing airplane connections between Japan and the US. The numerical experiment on ranking critical edges and the subsequent network decomposition demonstrated that the proposed Iterative Diffusion Importance exceeds the conventional diffusion importance in efficiency by 2–35%, depending on the network complexity, the number of nodes, and the number of edges. The only drawback of the Iterative Diffusion Importance is an increase in computational complexity and hence in runtime, but this drawback can easily be compensated for by preliminary planning of the network deconstruction or protection and by reducing the re-evaluation frequency of the iterative process.
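As a hedged illustration of the iterative scheme described above (the scoring function below is one common textbook definition of diffusion importance, not necessarily the paper's exact formula, and all function names are our own), the loop is: score every remaining edge, remove the top-ranked one, and re-score before the next removal. Node labels are assumed comparable (e.g. integers).

```python
def diffusion_importance(adj, u, v):
    """One common definition: the average number of links of each
    endpoint leading outside the other endpoint's neighbourhood."""
    out_v = sum(1 for w in adj[v] if w != u and w not in adj[u])
    out_u = sum(1 for w in adj[u] if w != v and w not in adj[v])
    return (out_v + out_u) / 2

def iterative_attack(edges, steps):
    """Repeatedly remove the currently most critical edge,
    re-estimating every remaining edge after each removal."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    removed = []
    for _ in range(steps):
        live = [(u, v) for u in adj for v in adj[u] if u < v]
        if not live:
            break
        u, v = max(live, key=lambda e: diffusion_importance(adj, *e))
        adj[u].discard(v)
        adj[v].discard(u)
        removed.append((u, v))
    return removed
```

Re-scoring all edges after every removal is what makes the method iterative and is also the source of the extra runtime mentioned above; reducing the re-evaluation frequency amounts to re-scoring only every k-th removal.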
-
Conversion of the initial indices of the technological process of the smelting of steel for the subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
The efficiency of production directly depends on the quality of process control, which in turn relies on the accuracy and efficiency of processing control and measurement information. Developing mathematical methods for studying system relationships and regularities of functioning, building mathematical models that account for the structural features of the object under study, and writing software implementing these methods are therefore relevant tasks. Practice has shown that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred names, and the degree of influence of each factor is initially unclear. Proceeding directly to model identification under these circumstances is impossible: the amount of required information may be too great, and most of the work on collecting it would be done in vain, since the influence of most factors from the original list on the optimization would turn out to be negligible. Therefore, a necessary step in identifying a model of a complex object is reducing the dimension of the factor space. Most industrial plants combine hierarchical batch and mass-volume production processes characterized by hundreds of factors. (Data from the Moldavian steel works were taken as the basis for implementing the mathematical methods and testing the constructed models.) To investigate the system relationships and regularities of functioning of such complex objects, several informative parameters are usually chosen and sampled.
This article describes the sequence of converting the initial indicators of the steel smelting process into a form suitable for building a mathematical model for prediction purposes. The introduction of new types of steel also created a basis for developing a system of automated product quality management. The following stages are distinguished: collection and analysis of the basic data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight factors. The results obtained make it possible to optimize the process of building a model of the multifactor process.
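The pleiad-based reduction mentioned above can be sketched as follows (a minimal illustration in our own notation; the threshold value and grouping rule are assumptions, not the paper's procedure): factors whose pairwise correlation exceeds a threshold are collected into one pleiad, and a single representative per pleiad is kept for the model.

```python
def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def correlation_pleiades(factors, threshold=0.8):
    """Group factor names into 'pleiades' of mutually correlated
    parameters; one representative per group suffices for the model.
    Each factor is compared against the first member of each group."""
    groups = []
    for name in factors:
        for g in groups:
            if abs(pearson(factors[name], factors[g[0]])) >= threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Keeping only the first member of each returned group reduces the factor space while retaining one parameter per correlated cluster.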
-
Marks of stochastic determinacy of forest ecosystem autogenous succession in Markov models
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 255-265
This article describes a method for modelling the course of forest ecosystem succession to the climax state by means of a Markov chain. In contrast to traditional methods of forest succession modelling based on changes of vegetation types, several variants of the vertical structure of communities formed by late-successional tree species are taken as the transition states of the model. Durations of succession from any stage are not set in absolute time units but are calculated as the average number of steps before reaching the climax on a unified time scale. The regularities of succession are revealed in the proper time of forest ecosystem formation. Evidence is obtained that internal features of the spatial and population structure stochastically determine the course and pace of forest succession. This property of the developing vegetation of forest communities is defined as an attribute of stochastic determinism in the course of autogenous succession.
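The "average number of steps before reaching the climax" is a standard quantity for an absorbing Markov chain. A minimal sketch (the 3-stage chain and its transition probabilities below are invented for illustration, not taken from the paper) computes it by fixed-point iteration on the usual first-step recursion:

```python
def mean_steps_to_climax(P, climax, iters=200):
    """Expected number of steps from each state to the absorbing
    'climax' state, via fixed-point iteration on
    t[i] = 1 + sum_j P[i][j] * t[j], with t[climax] = 0."""
    n = len(P)
    t = [0.0] * n
    for _ in range(iters):
        t = [0.0 if i == climax else
             1.0 + sum(P[i][j] * t[j] for j in range(n))
             for i in range(n)]
    return t

# Illustrative 3-stage succession chain: 0 -> 1 -> 2,
# where stage 2 is the absorbing climax state.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
steps = mean_steps_to_climax(P, climax=2)  # [4.0, 2.0, 0.0]
```

Here each state could stand for one variant of the vertical community structure; the recursion converges geometrically as long as every transient state eventually reaches the climax.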
-
The road network infrastructure is the basis of any urban area. This article compares the structural characteristics (meshedness coefficient, clustering coefficient) of two road networks: that of the Moscow center (Old Moscow), formed as a result of self-organization, and that of the area near Leninsky Prospekt (postwar Moscow), which resulted from centralized planning. Data for constructing the road networks as graphs were taken from the Internet resource OpenStreetMap, which allows the coordinates of intersections to be identified accurately. Based on the computed characteristics of the Moscow road network areas, foreign publications were searched for cities whose road networks have a structure similar to the two Moscow areas. Using the dual representation of the road networks of the centers of Moscow and St. Petersburg, we studied the informational and cognitive features of navigation in these tourist areas of the two capitals. When constructing the dual graphs of the studied areas, the different types of roads (one-way or two-way traffic, etc.) were not taken into account; that is, the dual graphs built are undirected. Since road networks in the dual representation are described by a power-law distribution of vertices over the number of edges (scale-free networks), the exponents of these distributions were calculated. It is shown that the information complexity of the dual graph of the center of Moscow exceeds the cognitive threshold of 8.1 bits, while the same characteristic for the center of St. Petersburg is below this threshold, because the road network of the center of St. Petersburg was created on the basis of planning and is therefore easier to navigate. In conclusion, using the methods of statistical mechanics (calculation of partition functions), the Gibbs entropy was calculated for the road networks of some Russian cities. It was found that the entropy decreases as the road network size increases.
We discuss the problem of studying the evolution of urban infrastructure networks of different nature (public transport, supply and communication networks, etc.), which would allow us to explore and understand the fundamental laws of urbanization more deeply.
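Two of the quantities above have compact definitions that are easy to sketch (function names are our own; the exponent estimator is a rough Clauset-style continuous approximation, not necessarily the estimator used in the paper):

```python
import math

def meshedness(num_nodes, num_edges):
    """Meshedness of a connected planar road graph: the fraction of
    possible independent loops actually present, (E - V + 1) / (2V - 5)."""
    return (num_edges - num_nodes + 1) / (2 * num_nodes - 5)

def powerlaw_exponent(degrees, k_min=1):
    """Rough maximum-likelihood estimate of the exponent of a power-law
    degree distribution, e.g. for the dual graph's degree sequence."""
    ks = [k for k in degrees if k >= k_min]
    return 1.0 + len(ks) / sum(math.log(k / (k_min - 0.5)) for k in ks)
```

Meshedness is 0 for a tree (no loops, E = V - 1) and 1 for a maximal planar graph (E = 3V - 6), which makes it convenient for comparing self-organized and planned street patterns.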
-
Algorithm of simple graph exploration by a collective of agents
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 33-45
The study presented in the paper is devoted to the problem of finite graph exploration by a collective of agents. Finite non-oriented graphs without loops and multiple edges are considered. The collective consists of two agents-researchers, who have a finite memory independent of the number of nodes of the graph under study and use two colors each (three colors in total), and one agent-experimenter, who has a finite but unboundedly growing internal memory. The agents-researchers can simultaneously traverse the graph, read and change labels of graph elements, and transmit the necessary information to the third agent, the agent-experimenter. The agent-experimenter is a non-moving agent in whose memory the result of the functioning of the agents-researchers at each step is recorded and in which a representation of the investigated graph (initially unknown to the agents), with a list of edges and a list of nodes, is gradually built up.
The work describes in detail the operating modes of the agents-researchers, indicating the priority of their activation. The commands exchanged between the agents-researchers and the agent-experimenter during the execution of procedures are considered. Problematic situations arising in the work of the agents-researchers are also studied in detail, for example, coloring a white vertex when two agents simultaneously arrive at the same node, or marking and examining an isthmus (an edge connecting subgraphs examined by different agents-researchers). The full algorithm of the agent-experimenter is presented with a detailed description of the processing of messages received from the agents-researchers, on the basis of which the representation of the studied graph is built. In addition, a complete analysis of the time, space, and communication complexities of the constructed algorithm is performed.
The presented graph exploration algorithm has a quadratic (with respect to the number of nodes of the studied graph) time complexity, quadratic space complexity, and quadratic communication complexity. The graph exploration algorithm is based on the depth-first traversal method.
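The paper's algorithm is far richer (two researchers, colors, message exchange with the experimenter), but its backbone is depth-first traversal. A minimal single-agent sketch of that backbone, in our own notation, shows what the experimenter ultimately accumulates: each node reported once and each edge reported once.

```python
def explore(adj, start):
    """Single-agent skeleton of the exploration: a depth-first traversal
    that visits every reachable node once and records every edge once,
    mirroring the node and edge lists built by the agent-experimenter."""
    seen, edges, order = {start}, set(), []
    stack = [start]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            edges.add(frozenset((u, v)))  # undirected edge, stored once
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return order, edges
```

Splitting this traversal between two researchers, each with constant memory, is exactly what introduces the isthmus and simultaneous-arrival complications analyzed in the paper.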
-
Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation for a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm as well as the arithmetic complexity of each step explicitly depend on the dimensionality of the outer variable, hence the assumption that it is relatively small.
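In generic notation (symbols are our own, not taken verbatim from the paper), this problem class can be written as a finite-sum saddle point problem with a small outer variable $y$ and a large inner variable $x$:

```latex
\min_{y \in \mathbb{R}^{d_y}} \; \max_{x \in \mathbb{R}^{d_x}}
\; \frac{1}{n} \sum_{i=1}^{n} f_i(x, y) - g(x),
\qquad d_y \ll d_x,
```

where $g$ stands for the possibly nonsmooth composite (regularization) term; its placement on $x$ here is illustrative. Vaidya's cutting plane method runs over the low-dimensional $y$, while Katyusha approximately solves the inner maximization over $x$ to supply the inexact oracle.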
-
Non-linear self-interference cancellation on base of mixed Newton method
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1579-1592
The paper investigates a potential solution to the problem of Self-Interference Cancellation (SIC) encountered in the design of In-Band Full-Duplex (IBFD) communication systems. The suppression of self-interference is implemented in the digital domain using multilayer nonlinear models adapted via the gradient descent method. The presence of local optima and saddle points in the adaptation of multilayer models prevents the use of second-order methods due to the indefinite nature of the Hessian matrix.
This work proposes the use of the Mixed Newton Method (MNM), which incorporates information about the second-order mixed partial derivatives of the loss function, thereby enabling a faster convergence rate compared to traditional first-order methods. By constructing the Hessian matrix solely with mixed second-order partial derivatives, this approach mitigates the issue of “getting stuck” at saddle points when applying the Mixed Newton Method for adapting multilayer nonlinear self-interference compensators in full-duplex system design.
The Hammerstein model with complex parameters has been selected to represent nonlinear self-interference. This choice is motivated by the model's ability to accurately describe the underlying physical properties of self-interference formation. Due to the holomorphic property of the model output, the Mixed Newton Method provides a "repulsion" effect from saddle points in the loss landscape.
The paper presents convergence curves for the adaptation of the Hammerstein model using both the Mixed Newton Method and conventional gradient descent-based approaches. Additionally, it provides a derivation of the proposed method along with an assessment of its computational complexity.
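A sketch of the update in question, in generic notation rather than the paper's own: for a real-valued loss $L(w, \bar{w})$ of complex parameters $w$, the Mixed Newton step uses only the mixed second-order derivatives,

```latex
w_{k+1} = w_k - \left( \frac{\partial^2 L}{\partial \bar{w} \, \partial w} \right)^{-1}
\frac{\partial L}{\partial \bar{w}} .
```

For a least-squares loss $L = \sum_i |e_i(w)|^2$ with holomorphic residuals $e_i$ (as with the Hammerstein model output), the mixed Hessian has the Gauss–Newton-like form $\sum_i \overline{\left(\partial e_i / \partial w\right)} \left(\partial e_i / \partial w\right)^{\top}$ and is Hermitian positive semidefinite, which is what produces the repulsion from saddle points that the full Hessian, being indefinite, cannot guarantee.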
-
Electric field effects in chemical patterns
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 705-718
Excitation waves are a prototype of self-organized dynamic patterns in non-equilibrium systems. They develop their own intrinsic dynamics resulting in travelling waves of various forms and shapes. Prominent examples are rotating spirals and scroll waves. It is an interesting and challenging task to find ways to control their behavior by applying external signals to which these propagating waves react. We apply external electric fields to such waves in the excitable Belousov–Zhabotinsky (BZ) reaction. Remarkable effects include the change of wave speed, reversal of propagation direction, annihilation of counter-rotating spiral waves and reorientation of scroll wave filaments. These effects can be explained in numerical simulations, where the negatively charged inhibitor bromide plays an essential role. Electric field effects have also been investigated in biological excitable media such as the social amoeba Dictyostelium discoideum. Quite recently we have started to investigate electric field effects in the BZ reaction dissolved in an Aerosol OT water-in-oil microemulsion. A drift of complex patterns can be observed, and the viscosity and electric conductivity also change. We discuss the assumption that this system can act as a model for long-range communication between neurons.
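In reaction-diffusion models of such media, the applied field typically enters as a drift (electromigration) term for each charged species. Schematically, for the inhibitor concentration $v$ with mobility $M_v$ in a field $E$ along $x$ (notation and signs are illustrative, not the paper's equations):

```latex
\frac{\partial v}{\partial t} = f_v(u, v) + D_v \nabla^2 v + M_v E \, \frac{\partial v}{\partial x},
```

so a negatively charged species such as bromide drifts relative to the uncharged or oppositely charged components, which is enough to shift wave speed and, at sufficient field strength, reverse the propagation direction.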
-
Investigation of the relationships of the size and production characteristics of phyto- and zooplankton in the Vistula and Curonian lagoons of the Baltic Sea. Part 1. The statistical analysis of long-term observation data and development of the structure for the mathematical model of the plankton food chain
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 211-246
The paper investigates the statistical relationships between the size and production characteristics of phytoplankton and zooplankton of the Vistula and Curonian lagoons of the Baltic Sea. Studies of phytoplankton and zooplankton within the Russian part of the Vistula and Curonian lagoons were carried out on a monthly basis (from April to November) within the framework of a long-term monitoring program for evaluating the ecological status of the lagoons. The size structure of plankton is the basis for understanding the development of production processes, the mechanisms of formation of plankton species diversity, and the functioning of the lagoon ecosystems. It was found that the maximum rate of photosynthesis and the integral value of primary production change with the cell volume of phytoplankton according to a power law. This result shows that the smaller the algal cells in phytoplankton communities, the more active their metabolism and the more effectively they assimilate solar energy. It is shown that the formation of plankton species diversity in the lagoon ecosystems is closely linked with the size structure of plankton communities and with features of the development of production processes. The structure of a spatially homogeneous mathematical model of the plankton food chain for the lagoon ecosystems is proposed, taking into account the size spectrum and the characteristics of phytoplankton and zooplankton. The model parameters are size-dependent indicators allometrically linked with the average volumes of cells and organisms in different size ranges. An algorithm for changing the coefficients of food preferences in the diet of zooplankton over time is also proposed in the model.
The developed size-dependent mathematical model of aquatic ecosystems makes it possible to consider the impact of turbulent exchange on the size structure and temporal dynamics of the plankton food chain of the Vistula and Curonian lagoons. The model can be used to study different regimes of dynamic behavior of plankton systems depending on changes in the values of its parameters and external influences, as well as to quantify the redistribution of matter flows in the ecosystems of the lagoons.
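The power-law (allometric) dependences mentioned above have the generic form (the coefficients are illustrative, not the paper's fitted values):

```latex
P_{\max}(V) = a \, V^{b}, \qquad 0 < b < 1,
```

so the volume-specific rate scales as $P_{\max}/V \propto V^{\,b-1}$ and grows as the cell volume $V$ decreases, consistent with the observation that smaller cells have more active metabolism and assimilate solar energy more effectively.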
Keywords: ecosystem, nutrients, phytoplankton, zooplankton, plankton detritus, size structure, the maximum rate of photosynthesis, integrated primary production, zooplankton production, allometric scaling, Shannon index of species diversity, mathematical modeling, ecological simulation model, turbulent exchange.
-
A.S. Komarov’s publications about cellular automata modelling of the population-ontogenetic development in plants: a review
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 285-295
The possibilities of cellular automata simulation applied to herbs and dwarf shrubs are described. The basic principles of the discrete description of plant ontogenesis, on which the mathematical modelling is based, are presented. The review discusses the main research results obtained with the use of models revealing the patterns of functioning of populations and communities. The CAMPUS model and the results of a computer experiment studying the growth of two clones of lingonberry with different shoot geometries are described. The paper is dedicated to the works of the founder of this research direction, Prof. A. S. Komarov. A list of his major publications on this subject is given.
Keywords: computer models, individual-based approach.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




