All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Statistical distribution of the quasi-harmonic signal’s phase: basics of theory and computer simulation
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 287-297. The paper presents the results of fundamental research directed at the theoretical study and computer simulation of the statistical distribution of the quasi-harmonic signal’s phase. A quasi-harmonic signal is formed as a result of the impact of Gaussian noise on an initially harmonic signal. By means of mathematical analysis, explicit formulas have been obtained for the principal characteristics of this distribution, namely the cumulative distribution function, the probability density function, and the likelihood function. The conducted computer simulation made it possible to analyze the dependencies of these functions on the parameters of the phase distribution. The paper elaborates methods for estimating the parameters of the phase distribution, which carry the information about the initial, undistorted signal. It is substantiated that the task of estimating the initial phase of the quasi-harmonic signal can be efficiently solved by averaging the results of sampled measurements. For estimating the second parameter of the phase distribution, the one determining the signal level relative to the noise level, a maximum likelihood technique is proposed. Graphical illustrations obtained by computer simulation of the principal characteristics of the studied phase distribution are presented. The existence and uniqueness of the maximum of the likelihood function substantiate the possibility and efficiency of estimating the signal level relative to the noise level by the maximum likelihood technique. The elaborated method of estimating the level of the un-noised signal relative to the noise, i. e. the parameter characterizing the signal’s intensity, on the basis of measurements of the signal’s phase is an original and principally new technique which opens up prospects for using phase measurements as a tool of stochastic data analysis. The presented investigation is meaningful for determining the phase and the signal level by statistical processing of sampled phase measurements. The proposed methods of estimating the parameters of the phase distribution can be used in various scientific and technological tasks, in particular in such areas as radiophysics, optics, radiolocation, radio navigation, and metrology.
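A minimal sketch of the two estimation steps described above: the initial phase is recovered by (circular) averaging of sampled phase measurements, and the signal-to-noise parameter by maximizing the known phase likelihood on a grid. The density formula is the standard phase distribution of a constant signal in complex Gaussian noise; all parameter values are illustrative, not taken from the paper.

```python
import cmath
import math
import random

def phase_pdf(phi, theta0, a):
    """PDF of the phase of a*exp(1j*theta0) + unit complex Gaussian noise.

    a is the ratio of the signal amplitude to the noise standard deviation
    (the second parameter of the phase distribution in the abstract)."""
    c = math.cos(phi - theta0)
    return (math.exp(-a * a / 2.0) / (2.0 * math.pi)
            + a * c / (2.0 * math.sqrt(2.0 * math.pi))
            * math.exp(-a * a * (1.0 - c * c) / 2.0)
            * (1.0 + math.erf(a * c / math.sqrt(2.0))))

random.seed(1)
a_true, theta0_true, n = 2.0, 0.7, 5000
phases = [cmath.phase(a_true * cmath.exp(1j * theta0_true)
                      + complex(random.gauss(0, 1), random.gauss(0, 1)))
          for _ in range(n)]

# Step 1: initial phase via averaging of the sampled measurements (circular mean).
theta0_hat = math.atan2(sum(math.sin(p) for p in phases),
                        sum(math.cos(p) for p in phases))

# Step 2: signal level relative to noise via maximum likelihood on a grid of a.
def loglik(a):
    return sum(math.log(phase_pdf(p, theta0_hat, a)) for p in phases)

a_hat = max((0.05 * k for k in range(1, 101)), key=loglik)
```

The single maximum of the likelihood in `a` is what makes the grid search (or any one-dimensional optimizer) reliable here.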
-
Computer model of a perfect-mixing extraction reactor in the format of the component circuits method with non-uniform vector connections
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 599-614. The features of the component circuits method (MCC) in modeling chemical-technological systems (CTS) are considered, taking into account its practical significance. Its software and algorithmic implementation is currently the MARS (Modeling and Automatic Research of Systems) set of computer modeling programs, which allows the development and analysis of mathematical models with specified experimental parameters. Research and calculations were carried out using the specialized MARS software and hardware complex. In the course of this work, a model of a perfect-mixing reactor was developed in the MARS modeling environment, taking into account the physicochemical features of the uranium extraction process in the presence of nitric acid and tributyl phosphate. As results, curves of the change in the concentration of uranium extracted into the organic phase are presented. The possibility of using the MCC for the description and analysis of CTS, including extraction processes, has been confirmed. The obtained results are planned to be used in the development of a virtual laboratory, which will include the main apparatus of the chemical industry as well as complex technical controlled systems (CTCS) based on them, and will allow one to acquire a wide range of professional competencies in working with “digital twins” of real control objects, including gaining initial experience with the main equipment of the nuclear industry. In addition to the direct applied benefits, it is also assumed that the successful implementation of the domestic complex of computer modeling programs and technologies based on the obtained results will help solve the problems of organizing national technological sovereignty and import substitution.
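The balance equations behind a perfect-mixing (CSTR) extraction stage can be sketched in a few lines. The following is a generic two-phase mass balance with a linear interphase transfer term, integrated by explicit Euler; all parameter values and the kinetics are illustrative assumptions, not the MARS model or the paper's actual chemistry.

```python
# Illustrative parameters (not from the paper): tau - residence time,
# k - interphase mass-transfer coefficient, D - distribution ratio
# (equilibrium: co = D * ca), assuming equal aqueous/organic flow rates.
tau, k, D = 5.0, 1.2, 8.0
ca_in, co_in = 100.0, 0.0     # inlet uranium concentrations, aqueous / organic
ca, co = 0.0, 0.0             # concentrations inside the perfectly mixed reactor
dt, t_end = 0.001, 60.0

for _ in range(int(t_end / dt)):
    transfer = k * (ca - co / D)          # driving force toward equilibrium
    dca = (ca_in - ca) / tau - transfer   # aqueous phase balance
    dco = (co_in - co) / tau + transfer   # organic phase balance
    ca += dca * dt
    co += dco * dt
```

With these numbers the reactor settles to a steady state where most of the uranium sits in the organic phase (co ≈ 77.4, ca ≈ 22.6), and the total concentration obeys the overall balance ca + co → ca_in + co_in.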
-
Structural models of product in CAD-systems
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1079-1091. Computer-aided assembly planning of complex products is an important area of modern information technology. The sequence of assembly and decomposition of the product into assembly units largely depend on the mechanical structure of a technical system (machine, mechanical device, etc.). In most modern research, the mechanical structure of products is modeled using a graph of connections and its various modifications. The coordination of parts during assembly can be achieved by implementing several connections at the same time. This generates a $k$-ary basing relation on a set of product parts, which cannot be correctly described by graph means. A hypergraph model of the mechanical structure of a product is proposed. Modern discrete manufacturing uses sequential coherent assembly operations. The mathematical description of such operations is the normal contraction of edges of the hypergraph model. The sequence of contractions that transform the hypergraph into a point is a description of the assembly plan. Hypergraphs for which such a transformation exists are called $s$-hypergraphs. $S$-hypergraphs are correct mathematical models of the mechanical structures of any assembled products. A theorem on necessary conditions for the contractibility of $s$-hypergraphs is given. It is shown that the necessary conditions are not sufficient. An example of a noncontractible hypergraph for which the necessary conditions are satisfied is given. This means that the design of a complex technical system may contain hidden structural errors that make assembly of the product impossible. Therefore, finding sufficient conditions for contractibility is an important task. Two theorems on sufficient conditions for contractibility are proved. They provide a theoretical basis for developing an efficient computational procedure for finding all $s$-subgraphs of an $s$-hypergraph.
An $s$-subgraph is a model of any part of a product that can be assembled independently. These are, first of all, assembly units of various levels of hierarchy. The set of all $s$-subgraphs of an $s$-hypergraph, ordered by inclusion, is a lattice. This model can be used to synthesize all possible sequences of assembly and disassembly of a product and its components. The lattice model of the product allows you to analyze geometric obstacles during assembly using algebraic means.
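The idea of an assembly plan as a sequence of edge contractions can be illustrated directly. The sketch below uses a simplified notion of contraction (merge all vertices of a hyperedge into one new vertex; edges collapsed to a single vertex disappear) and a brute-force search for a contracting sequence; it is an illustration of the concept, not the paper's normal contraction or its efficient procedure.

```python
from itertools import count

fresh = count(1000)  # supply of new vertex names for merged subassemblies

def contract(vertices, edges, edge):
    """Merge all vertices of `edge` into one new vertex (simplified contraction)."""
    v_new = next(fresh)
    mapping = {v: (v_new if v in edge else v) for v in vertices}
    new_vertices = set(mapping.values())
    new_edges = set()
    for e in edges:
        img = frozenset(mapping[v] for v in e)
        if len(img) > 1:          # an edge entirely inside the merged part vanishes
            new_edges.add(img)
    return new_vertices, new_edges

def contraction_sequence(vertices, edges):
    """DFS for a sequence of contractions reducing the hypergraph to a point.

    Returns the list of contracted edges (an 'assembly plan'), or None if the
    hypergraph is not contractible."""
    if len(vertices) == 1:
        return []
    for e in edges:
        v2, e2 = contract(vertices, edges, e)
        rest = contraction_sequence(v2, e2)
        if rest is not None:
            return [e] + rest
    return None

# A toy "product": 4 parts, one ternary basing relation and two binary joints.
V = {1, 2, 3, 4}
E = {frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({1, 4})}
plan = contraction_sequence(V, E)
```

A disconnected hypergraph (two parts with no connection between the resulting subassemblies) is the simplest example where no contracting sequence exists, mirroring the hidden structural errors mentioned above.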
-
Numerical simulation of the propagation of probing pulses in a dense bed of a granular medium
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1361-1384. The need to model high-speed flows of compressible media with shock waves in the presence of dense curtains or layers of particles arises when studying various processes, such as the dispersion of particles from a layer behind a shock wave or the propagation of combustion waves in heterogeneous explosives. These directions have been successfully developed over the past few decades, but the corresponding mathematical models and computational algorithms continue to be actively improved. The mechanisms of wave processes in two-phase media differ between models, so it is important to continue researching and improving these models.
The paper is devoted to the numerical study of the propagation of disturbances inside a sand bed under the action of successive impacts of a normally incident air shock wave. The setting of the problem follows the experiments of A. T. Akhmetov and co-authors. The aim of this study is to investigate the possible reasons for the signal amplification on the pressure sensor within the bed, as observed under some conditions in experiments. The mathematical model is based on a one-dimensional system of Baer – Nunziato equations for describing dense flows of two-phase media, taking into account intergranular stresses in the particle phase. The computational algorithm is based on the Godunov method for the Baer – Nunziato equations.
The paper describes the dynamics of waves inside and outside a particle bed after applying first and second pressure pulses to it. The main components of the flow within the bed are filtration waves in the gas phase and compaction waves in the solid phase. The compaction wave, generated by the first pulse and reflected from the walls of the shock tube, interacts with the filtration wave caused by the second pulse. As a result, the signal measured by the pressure sensor inside the bed has a sharp peak, explaining the new effect observed in experiments.
-
Multi-agent local voting protocol for online DAG scheduling
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 29-44. Scheduling computational workflows represented by directed acyclic graphs (DAGs) is crucial in many areas of computer science, such as cloud/edge tasks with distributed workloads and data mining. The complexity of online DAG scheduling is compounded by the large number of computational nodes, data transfer delays, heterogeneity (in type and processing power) of executors, precedence constraints imposed by the DAG, and the nonuniform arrival of tasks. This paper introduces the Multi-Agent Local Voting Protocol (MLVP), a novel approach focused on dynamic load balancing for DAG scheduling in heterogeneous computing environments, where executors are represented as agents. The MLVP employs a local voting protocol to achieve effective load distribution by formulating the problem as differentiated consensus achievement. The algorithm calculates an aggregated DAG metric for each executor-node pair based on node dependencies, node availability, and executor performance. The balance of these metrics as a weighted sum is optimized using a genetic algorithm to assign tasks probabilistically, achieving efficient workload distribution via information sharing and consensus among the executors across the system, thus improving makespan. The effectiveness of the MLVP is demonstrated through comparisons with a state-of-the-art DAG scheduling algorithm and popular heuristics such as DONF, FIFO, Min-Min, and Max-Min. Numerical simulations show that the MLVP achieves makespan improvements of up to 70% on specific graph topologies and an average makespan reduction of 23.99% over DONF (a state-of-the-art DAG scheduling heuristic) across a diverse set of randomly generated DAGs. Notably, the algorithm’s scalability is evidenced by enhanced performance with increasing numbers of executors and graph nodes.
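The MLVP itself combines local voting with a genetic algorithm; as a point of reference, the kind of baseline it is compared against can be sketched as a Min-Min-flavoured greedy list scheduler for a DAG on heterogeneous executors. The task names, speeds, and toy DAG below are invented for illustration, and data transfer delays are ignored.

```python
def list_schedule(tasks, deps, speeds):
    """Greedy earliest-finish-time scheduling of a DAG on heterogeneous executors.

    tasks:  {name: work units}
    deps:   {name: set of predecessor names}
    speeds: list of executor speeds (work units per unit time)
    Returns the makespan of the resulting schedule."""
    finish = {}                    # task -> finish time
    free_at = [0.0] * len(speeds)  # executor -> time it becomes available
    done = set()
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        # Pick the (task, executor) pair with the smallest finish time.
        best = None
        for t in ready:
            est = max([finish[p] for p in deps.get(t, set())], default=0.0)
            for ex, s in enumerate(speeds):
                f = max(est, free_at[ex]) + tasks[t] / s
                if best is None or f < best[0]:
                    best = (f, t, ex)
        f, t, ex = best
        finish[t] = f
        free_at[ex] = f
        done.add(t)
    return max(finish.values())

# Toy diamond-shaped DAG on two executors of different speeds.
tasks = {"a": 2.0, "b": 4.0, "c": 3.0, "d": 1.0}
deps = {"c": {"a", "b"}, "d": {"c"}}
makespan = list_schedule(tasks, deps, [1.0, 2.0])
```

The greedy rule here is purely deterministic; the MLVP replaces it with probabilistic assignment driven by the agents' consensus metric.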
-
Tangent search method in time optimal problem for a wheeled mobile robot
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 401-421. Searching for an optimal motion trajectory is a complex problem investigated in many research studies. Most studies investigate methods applicable to such a problem in general, regardless of the model of the object. With such a general approach, only a numerical solution can be found. However, in some cases it is possible to find an optimal trajectory in closed form. The current article considers a time optimal problem with state constraints for a wheeled mobile differential robot that moves on a horizontal plane. The mathematical model of motion is kinematic. The state constraints correspond to obstacles on the plane, defined as circles that must be avoided during motion. The independent control inputs are the wheel speeds, which are limited in absolute value. Such a model is commonly used in problems where the transients are considered insignificant, for example, when controlling tracked or wheeled devices that move slowly, prioritizing traction power over speed. The article shows that the optimal trajectory from the starting point to the finishing point in this kinematic setting is a sequence of straight segments of tangents to the obstacles and arcs of the circles that bound the obstacles. The geometrically shortest path between the start and the finish is also a sequence of straight lines and arcs; therefore, the time-optimal trajectory corresponds to one of the local minima encountered when searching for the shortest path. The article proposes a method of searching for the time-optimal trajectory based on building a graph of possible trajectories, where the edges are the possible segments of the trajectory and the vertices are the connections between them. The optimal path is sought using Dijkstra’s algorithm. The theoretical foundation of the method is given, and the results of a computer investigation of the algorithm are provided.
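The building blocks of such a trajectory graph are tangent segments and circular arcs. A minimal sketch of the underlying geometry for a single circular obstacle, assuming the straight segment between start and finish crosses the disk (this is standard plane geometry, not code from the article):

```python
import math

def around_circle(s, f, c, r):
    """Length of the shorter path from s to f around the disk of radius r at c:
    tangent segment from s + arc between the tangency points + tangent to f.
    Assumes the straight segment s-f crosses the disk."""
    ds = math.dist(s, c)
    df = math.dist(f, c)
    # Tangent lengths from s and f to the bounding circle.
    ts = math.sqrt(ds * ds - r * r)
    tf = math.sqrt(df * df - r * r)
    # Angle at the centre between the directions to s and to f.
    gamma = math.acos(((s[0] - c[0]) * (f[0] - c[0])
                       + (s[1] - c[1]) * (f[1] - c[1])) / (ds * df))
    # Arc actually traversed between the two tangency points.
    arc = gamma - math.acos(r / ds) - math.acos(r / df)
    return ts + tf + r * arc

# Symmetric example: unit circle at the origin, start and finish on the x-axis.
length = around_circle((-2.0, 0.0), (2.0, 0.0), (0.0, 0.0), 1.0)
```

In the full method, the lengths of such segments and arcs (converted to traversal times at the wheel-speed limits) become the edge weights of the trajectory graph searched with Dijkstra’s algorithm.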
-
Situational resource allocation: review of technologies for solving problems based on knowledge systems
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 543-566. The article presents updated technologies for solving two classes of linear resource allocation problems with dynamically changing characteristics of situational management systems and awareness of experts (and/or trained robots). The search for solutions is carried out in an interactive mode of computational experiment using updatable systems of knowledge about problems considered as constructive objects (in accordance with the methodology of formalizing knowledge about programmable problems created in the theory of S-symbols). The technologies are oriented toward implementation in the form of Internet services. The first class includes resource allocation problems solved by the method of targeted solution movement. The second comprises problems of allocating a single resource in hierarchical systems, taking into account the priorities of expense items, which can be solved (depending on the specified mandatory and orienting requirements for the solution) either by the interval method of allocation (with input data and results represented by numerical segments) or by the targeted solution movement method. The problem statements are determined by requirements for solutions and specifications of their applicability, which are set by an expert based on the results of analyzing the portraits of the target and achieved situations. Unlike well-known methods that solve resource allocation problems as linear programming problems, the method of targeted solution movement is insensitive to small data changes and makes it possible to find feasible solutions even when the constraint system is inconsistent. In single-resource allocation technologies, the segmented representation of data and results reflects the state of the system’s resource space more adequately than a point representation and increases the practical applicability of solutions.
The technologies discussed in the article are programmatically implemented and are used to solve problems of resource backing of decisions, budget design taking into account the priorities of expense items, etc. The technology of allocating a single resource is implemented in the form of a working online cost planning service. The methodological consistency of the technologies is confirmed by the results of comparison with known technologies for solving the problems under consideration.
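As a hedged illustration only (the article's interval method is not specified here), a single-resource allocation with interval requirements and expense-item priorities might look like the following sketch; the items, priorities, and fill rule are invented for the example.

```python
def allocate(total, items):
    """Allocate `total` units of a single resource.

    items: list of (name, low, high, priority) where [low, high] is the
    segment of admissible amounts for an expense item. Every item first
    receives its lower bound; the remainder is poured in priority order
    up to each item's upper bound. Returns name -> amount, or None when
    even the lower bounds are infeasible."""
    low_sum = sum(low for _, low, _, _ in items)
    if total < low_sum:
        return None
    result = {name: low for name, low, _, _ in items}
    rest = total - low_sum
    for name, low, high, _ in sorted(items, key=lambda it: -it[3]):
        add = min(rest, high - low)
        result[name] += add
        rest -= add
    return result

# Hypothetical expense items with interval requirements and priorities.
budget = allocate(100, [("wages", 40, 70, 3), ("rent", 20, 30, 2), ("dev", 0, 50, 1)])
```

The interval bounds play the role of mandatory requirements, while the priority order acts as an orienting requirement that steers where the slack goes.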
Keywords: linear resource allocation problems, technologies for solving situational resource allocation problems, states of system’s resource space, profiles of situations, mandatory and orienting requirements for solutions, method of targeted solution movement, interval method of allocation, theory of S-symbols.
-
Iterative diffusion importance: advancing edge criticality evaluation in complex networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 783-797. This paper is devoted to the problem of identifying and ranking critical edges in complex networks, part of a modern research direction in network science. Diffusion importance belongs to the set of acknowledged methods that help identify the significant connections in a graph that are critical to retaining structural integrity. In the present work, we develop the Iterative Diffusion Importance algorithm, which is based on re-estimating critical topological features at each step of the graph deconstruction. The Iterative Diffusion Importance has been compared with diffusion importance and degree product, two well-known benchmark algorithms. As benchmark networks, we tested the Iterative Diffusion Importance on three standard networks, Zachary’s Karate Club, the American Football Network, and the Dolphins Network, which are often used for evaluating algorithm efficiency and differ in size and density. We also proposed a new benchmark network representing airplane communication between Japan and the US. The numerical experiment on ranking critical edges and the subsequent network decomposition demonstrated that the proposed Iterative Diffusion Importance exceeds the conventional diffusion importance in efficiency by 2–35%, depending on the network complexity, the number of nodes, and the number of edges. The only drawback of the Iterative Diffusion Importance is an increase in computational complexity and hence in runtime, but this drawback can easily be compensated for by preliminary planning of the network deconstruction or protection and by reducing the re-evaluation frequency of the iterative process.
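The iterative re-estimation idea can be illustrated with the simpler degree-product score standing in for diffusion importance (whose exact formula is not reproduced here): remove the currently most critical edge, recompute the scores on the reduced graph, and repeat. The toy graph and the deterministic tie-break are invented for the sketch.

```python
def iterative_edge_ranking(edge_list):
    """Rank edges by repeatedly removing the edge with the largest degree
    product, recomputing degrees after every removal.

    Degree product (deg(u) * deg(v) for edge {u, v}) stands in for the
    diffusion-importance score; ties are broken by vertex labels so the
    result is deterministic. Vertices are assumed to be integers."""
    edges = set(map(frozenset, edge_list))
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    ordering = []
    while edges:
        best = max(edges,
                   key=lambda e: (len(adj[min(e)]) * len(adj[max(e)]), sorted(e)))
        u, v = tuple(best)
        adj[u].discard(v)          # re-estimate: degrees shrink as edges go
        adj[v].discard(u)
        edges.remove(best)
        ordering.append(tuple(sorted(best)))
    return ordering

# A triangle attached to a short chain: edge (3, 4) bridges the two parts.
ranking = iterative_edge_ranking([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])
```

The re-computation after each removal is exactly what makes the iterative variant more expensive than a one-shot ranking, and what the reduced re-evaluation frequency mentioned above trades away.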
-
Critical rate of computing net growth for ensuring infinite faultless operation
Computer Research and Modeling, 2009, v. 1, no. 1, pp. 33-39. The fault tolerance of a finite computing net with an arbitrary graph, containing elements with certain probabilities of fault and restoration, is analyzed. An algorithm for net growth at each work cycle is suggested. It is shown that if the rate of net growth is sufficiently large, then the probability of infinite faultless work is positive. The estimated critical growth rate is logarithmic in the number of work cycles.
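A simplified back-of-envelope argument suggests why a logarithmic growth rate is the natural threshold. Assume (as a crude reduction of the actual model, which has a graph structure and restoration) that the net fails at a cycle only if all of its $n_t$ elements fail, each independently with probability $p$:

```latex
P(\text{net fails at cycle } t) \le p^{\,n_t},
\qquad
n_t \ge c \ln t \;\Longrightarrow\; p^{\,n_t} \le t^{-c \ln(1/p)} .
```

The series $\sum_{t \ge 1} t^{-c \ln(1/p)}$ converges precisely when $c > 1/\ln(1/p)$, so for a logarithmic growth rate with a large enough constant the total failure probability $\sum_t p^{\,n_t}$ can be made less than one, leaving a positive probability of infinite faultless work.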
-
High-performance parallel computing on GPUs for biological applications
Computer Research and Modeling, 2010, v. 2, no. 2, p. 161
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"




