Search results for 'complexity':
Articles found: 271
  1. Ahmed M., Hegazy M., Klimchik A.S., Boby R.A.
    Lidar and camera data fusion in self-driving cars
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253

    Sensor fusion is one of the important solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that models with strong environment perception cannot run in real time. Our article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering three main classes: cars, cyclists and pedestrians. We fuse the output of a 3D detector model that takes its input from Lidar with the output of a 2D detector that takes its input from the camera, to obtain better perception output than either of them separately, while ensuring real-time operation. We address the problem using a 3D detector model (Complex-Yolov3) and a 2D detector model (Yolo-v3), applying an image-based fusion method that combines Lidar and camera information with a fast and efficient late-fusion technique, discussed in detail in this article. We use the mean average precision (mAP) metric to evaluate our object detection model and to compare the proposed approach with the individual detectors. Finally, we show results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 Lidar and Leopard USB cameras. We used Python to develop our algorithm and validated it on the KITTI dataset; we then used ROS 2 together with C++ to verify the algorithm on the dataset obtained from our hardware configuration, which confirmed that the proposed approach gives good results and works efficiently in practical situations in real time.
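
    A minimal sketch of the late-fusion step described in the abstract: assuming the 3D detections have already been projected into the image plane, projected 3D boxes are matched to 2D detections by intersection-over-union, and matched pairs get a combined confidence. The function names, the score-combination rule, and the 0.5 threshold are illustrative assumptions, not the article's exact procedure.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fusion(boxes_3d_projected, scores_3d, boxes_2d, scores_2d,
                iou_thr=0.5):
    """Match each projected 3D detection to the best-overlapping 2D
    detection; matched pairs get a combined score, unmatched ones keep
    their own score (one simple late-fusion rule)."""
    fused = []
    for b3, s3 in zip(boxes_3d_projected, scores_3d):
        overlaps = [iou(b3, b2) for b2 in boxes_2d]
        j = int(np.argmax(overlaps)) if overlaps else -1
        if j >= 0 and overlaps[j] >= iou_thr:
            fused.append((b3, 0.5 * (s3 + scores_2d[j])))  # consensus boost
        else:
            fused.append((b3, s3))  # Lidar-only detection
    return fused
```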

  2. Chukanov S.N.
    Comparison of complex dynamical systems based on topological data analysis
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 513-525

    The paper considers the possibility of comparing and classifying dynamical systems based on topological data analysis. Determining the measures of interaction between the channels of dynamic systems with the HIIA (Hankel Interaction Index Array) and PM (Participation Matrix) methods makes it possible to build HIIA and PM graphs and their adjacency matrices. For any linear dynamic system, an approximating directed graph can be constructed whose vertices correspond to the components of the state vector of the system and whose arcs correspond to the measures of mutual influence of those components. Constructing a measure of distance (proximity) between graphs of different dynamic systems is important, for example, for distinguishing normal operation from failures of a dynamic system or a control system. To compare and classify dynamic systems, weighted directed graphs are first formed, with edge weights corresponding to the measures of interaction between the channels of the dynamic system; these measures are determined by the HIIA and PM methods. The paper gives examples of the formation of weighted directed graphs for various dynamic systems and of the estimation of the distance between these systems based on topological data analysis. An example is given of the formation of a weighted directed graph for a dynamic system corresponding to the control system for the components of the angular velocity vector of an aircraft, which is considered as a rigid body with principal moments of inertia. The method of topological data analysis used in this work to estimate the distance between the structures of dynamic systems is based on the formation of persistent barcodes and persistent landscape functions. Methods for comparing dynamic systems based on topological data analysis can be used in the classification of dynamic systems and control systems. Traditional algebraic topology does not yield enough information for the analysis of such objects, because reducing the data dimension loses geometric information; topological data analysis provides a balance between reducing the data dimension and characterizing the internal structure of an object. In this paper, topological data analysis methods based on Vietoris-Rips and Dowker filtrations are used to assign a geometric dimension to each topological feature. Persistent landscape functions map the persistence diagrams into a Hilbert space, where the comparison of dynamic systems can be quantified. Based on the construction of persistent landscape functions, we propose a way to compare the graphs of dynamical systems and to find distances between the systems. Examples of finding the distance between objects (dynamic systems) are given.
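
    A minimal sketch of the persistent landscape construction mentioned above: each point $(b, d)$ of a persistence diagram contributes the tent function $\max(0, \min(t - b, d - t))$, and $\lambda_k(t)$ is the $k$-th largest tent value at $t$; since landscapes live in a Hilbert space, an $L_2$ distance between them can quantify the difference between two systems. The uniform grid discretization is an assumption for illustration.

```python
import numpy as np

def landscape(diagram, ts, k_max=3):
    """Persistent landscape lambda_k(t) on a uniform grid ts.

    diagram : array of (birth, death) pairs
    Returns shape (k_max, len(ts)): lambda_k(t) is the k-th largest
    tent max(0, min(t - b, d - t)) over all (b, d) in the diagram."""
    diagram = np.asarray(diagram, dtype=float)
    b, d = diagram[:, 0:1], diagram[:, 1:2]              # column vectors
    tents = np.maximum(0.0, np.minimum(ts - b, d - ts))  # (n_pts, n_ts)
    tents.sort(axis=0)                                   # ascending per t
    tents = tents[::-1]                                  # now descending
    out = np.zeros((k_max, len(ts)))
    out[:min(k_max, len(tents))] = tents[:k_max]
    return out

def landscape_distance(diag_a, diag_b, ts, k_max=3):
    """L2 distance between landscapes: a metric usable to compare the
    persistence structure of two dynamical-system graphs."""
    la, lb = landscape(diag_a, ts, k_max), landscape(diag_b, ts, k_max)
    dt = ts[1] - ts[0]
    return np.sqrt(np.sum((la - lb) ** 2) * dt)

diag_a = [(0.1, 0.9), (0.2, 0.5)]
diag_b = [(0.15, 0.8)]
ts = np.linspace(0.0, 1.0, 200)
print(landscape_distance(diag_a, diag_b, ts))
```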

  3. Aristova E.N., Karavaeva N.I.
    Bicompact schemes for the HOLO algorithm for joint solution of the transport equation and the energy equation
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1429-1448

    The numerical solution of the system of high-temperature radiative gas dynamics (HTRGD) equations is a computationally laborious task, since the interaction of radiation with matter is nonlinear and non-local. The radiation absorption coefficients depend on temperature, and the temperature field is determined by both gas-dynamic processes and radiation transport. The method of splitting into physical processes is usually used to solve the HTRGD system; one of the blocks consists of the joint solution of the radiative transport equation and the energy balance equation of matter under known pressure and temperature fields. Difference schemes with orders of convergence no higher than the second are usually used to solve this block. Because of computer memory limitations, complex technical problems have to be solved on relatively coarse grids, which raises the requirements on the order of approximation of the difference schemes. In this work, bicompact schemes of a high order of approximation are implemented for the first time in the algorithm for the joint solution of the radiative transport equation and the energy balance equation. The proposed method can be applied to a wide range of practical problems, as it has high accuracy and is suitable for problems with coefficient discontinuities. The nonlinearity of the problem and the use of an implicit scheme lead to an iterative process that may converge slowly. In this paper we use a multiplicative HOLO algorithm, the quasi-diffusion method of V. Ya. Goldin. The key idea of HOLO algorithms is the joint solution of high-order (HO) and low-order (LO) equations. The high-order (HO) equation is the radiative transport equation solved in the energy multigroup approximation; the system of quasi-diffusion equations in the multigroup approximation (LO1) is obtained by averaging the HO equations over the angular variable. The next step is averaging over energy, resulting in an effective one-group system of quasi-diffusion equations (LO2), which is solved jointly with the energy equation. The solutions obtained at each stage of the HOLO algorithm are closely related, which ultimately accelerates the convergence of the iterative process. Difference schemes constructed by the method of lines within one cell are proposed for each stage of the HOLO algorithm. The schemes have the fourth order of approximation in space and the third order of approximation in time. The schemes for the transport equation were developed by B.V. Rogov and his colleagues; the schemes for the LO1 and LO2 equations were developed by the authors. An analytical test is constructed to demonstrate the declared orders of convergence. Various options for setting the boundary conditions are considered, and their influence on the order of convergence in time and space is studied.
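
    The outer loop of a multiplicative HOLO iteration can be sketched as follows; the three solver callbacks are hypothetical placeholders for the bicompact HO, LO1, and LO2 schemes of the paper, and the convergence test and the dummy solvers in the demo are assumptions for illustration only.

```python
import numpy as np

def holo_step(T, psi_solver, qd_factors, lo_solver, tol=1e-8, max_iter=200):
    """Outer HOLO iteration (structure only, solvers are stand-ins):
      HO : multigroup transport solve for the intensity psi(T)
      LO1: angular moments of psi give the quasi-diffusion factors
      LO2: one-group quasi-diffusion system solved jointly with the
           energy equation, updating the temperature field T
    Iterate until the temperature field stops changing."""
    for it in range(max_iter):
        psi = psi_solver(T)              # HO: transport with current T
        D = qd_factors(psi)              # LO1: quasi-diffusion factors
        T_new = lo_solver(D, T)          # LO2 + energy balance
        if np.max(np.abs(T_new - T)) < tol * np.max(np.abs(T_new)):
            return T_new, it + 1
        T = T_new
    return T, max_iter

# Dummy stand-ins: a contraction toward T = 1 converges in ~30 steps.
T0 = np.linspace(0.5, 2.0, 8)
T, iters = holo_step(T0,
                     psi_solver=lambda T: T,
                     qd_factors=lambda psi: psi,
                     lo_solver=lambda D, T: 0.5 * (T + 1.0))
print(iters, T[:3])
```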

  4. Ushakov A.O., Gandzha T.V., Dmitriev V.M., Molokov P.B.
    Computer model of a perfect-mixing extraction reactor in the format of the component circuits method with non-uniform vector connections
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 599-614

    The features of the component circuits method (MCC) in modeling chemical-technological systems (CTS) are considered, taking into account its practical significance. Its software and algorithmic implementation is currently the MARS (Modeling and Automatic Research of Systems) suite of computer modeling programs, which supports the development and analysis of mathematical models with specified experimental parameters; the research and calculations in this work were carried out with this specialized software and hardware complex. In the course of the work, a model of a perfect-mixing reactor was developed in the MARS modeling environment, taking into account the physicochemical features of the uranium extraction process in the presence of nitric acid and tributyl phosphate. As results, the curves of the changes in the concentration of uranium extracted into the organic phase are presented. The possibility of using the MCC for the description and analysis of CTS, including extraction processes, has been confirmed. The obtained results are planned to be used in the development of a virtual laboratory that will include the main apparatus of the chemical industry, as well as complex technical controlled systems (CTCS) based on them, and will allow one to acquire a wide range of professional competencies in working with “digital twins” of real control objects, including gaining initial experience with the main equipment of the nuclear industry. In addition to the direct applied benefits, it is also expected that the successful implementation of a domestic suite of computer modeling programs and technologies based on the obtained results will help to solve the problems of national technological sovereignty and import substitution.
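
    For a flavor of what a perfect-mixing block computes, here is a minimal ODE sketch of one component extracted from the aqueous into the organic phase with a linear interphase-transfer law; all parameter values and the kinetic law are illustrative assumptions, not taken from the article or from MARS.

```python
from scipy.integrate import solve_ivp

# Minimal perfect-mixing (CSTR) balance for one extracted component:
#   V dC/dt = q*(C_in - C) - k_ex*V*(C - C_org/K)
# C is the aqueous-phase concentration, C_org the organic-phase one,
# K a distribution coefficient. All values below are illustrative.
V, q = 1.0, 0.1            # reactor volume [m^3], flow rate [m^3/s]
C_in, k_ex, K = 50.0, 0.05, 8.0

def rhs(t, y):
    C, C_org = y
    transfer = k_ex * (C - C_org / K)   # interphase mass transfer
    dC = q / V * (C_in - C) - transfer
    dC_org = transfer                   # accumulates in organic phase
    return [dC, dC_org]

sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0])
print(sol.y[:, -1])        # near-steady-state concentrations
```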

  5. Bozhko A.N.
    Structural models of product in CAD-systems
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1079-1091

    Computer-aided assembly planning of complex products is an important area of modern information technology. The sequence of assembly and the decomposition of the product into assembly units largely depend on the mechanical structure of the technical system (machine, mechanical device, etc.). In most modern research, the mechanical structure of products is modeled using a graph of connections and its various modifications. The coordination of parts during assembly can be achieved by implementing several connections at the same time. This generates a $k$-ary basing relation on the set of product parts, which cannot be correctly described by graph means. A hypergraph model of the mechanical structure of a product is proposed. Modern discrete manufacturing uses sequential coherent assembly operations. The mathematical description of such operations is the normal contraction of edges of the hypergraph model. A sequence of contractions that transforms the hypergraph into a point is a description of the assembly plan. Hypergraphs for which such a transformation exists are called $s$-hypergraphs. $S$-hypergraphs are correct mathematical models of the mechanical structures of any assembled products. A theorem on necessary conditions for the contractibility of $s$-hypergraphs is given. It is shown that the necessary conditions are not sufficient: an example of a noncontractible hypergraph satisfying the necessary conditions is given. This means that the design of a complex technical system may contain hidden structural errors that make assembly of the product impossible; therefore, finding sufficient conditions for contractibility is an important task. Two theorems on sufficient conditions for contractibility are proved. They provide a theoretical basis for developing an efficient computational procedure for finding all $s$-subgraphs of an $s$-hypergraph. An $s$-subgraph is a model of any part of a product that can be assembled independently, first of all the assembly units of various levels of the hierarchy. The set of all $s$-subgraphs of an $s$-hypergraph, ordered by inclusion, is a lattice. This model can be used to synthesize all possible sequences of assembly and disassembly of a product and its components. The lattice model of the product also makes it possible to analyze geometric obstacles during assembly by algebraic means.
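
    A toy sketch of the contraction formalism: vertices are sets of parts, contracting a hyperedge merges its vertices into one, and a brute-force search asks whether some contraction sequence collapses the hypergraph to a point (i.e., whether it is an $s$-hypergraph). The contraction rule here is a simplified stand-in for the normal contraction defined in the article, and the search is exponential, suitable for small examples only.

```python
def contract(edges, e):
    """Contract hyperedge e: its vertices merge into one new vertex
    (the union of the part sets they represent); edges reduced to a
    single vertex disappear."""
    merged = frozenset().union(*e)
    new_edges = set()
    for f in edges:
        if f == e:
            continue
        g = frozenset(v for v in f if v not in e)
        if f & e:
            g = g | {merged}
        if len(g) > 1:
            new_edges.add(g)
    return new_edges

def contractible_to_point(edges):
    """Brute-force check that some contraction sequence collapses the
    hypergraph to a single vertex (exponential; small examples only)."""
    verts = set().union(*edges) if edges else set()
    if len(verts) <= 1:
        return True
    if not edges:
        return False   # several vertices left, nothing to contract
    return any(contractible_to_point(contract(edges, e)) for e in edges)

# Parts a, b, c: a binary connection {a, b} and a ternary basing
# relation {a, b, c}; vertices are frozensets of part labels.
E = {frozenset({frozenset('a'), frozenset('b')}),
     frozenset({frozenset('a'), frozenset('b'), frozenset('c')})}
print(contractible_to_point(E))   # True: an assembly plan exists
```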

  6. Zhitnukhin N.A., Zhadan A.Y., Kondratov I.V., Allahverdyan A.L., Granichin O.N., Petrosian O.L., Romanovskii A.V., Kharin V.S.
    Multi-agent local voting protocol for online DAG scheduling
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 29-44

    Scheduling computational workflows represented by directed acyclic graphs (DAGs) is crucial in many areas of computer science, such as cloud/edge tasks with distributed workloads and data mining. The complexity of online DAG scheduling is compounded by the large number of computational nodes, data transfer delays, the heterogeneity (by type and processing power) of executors, the precedence constraints imposed by the DAG, and the nonuniform arrival of tasks. This paper introduces the Multi-Agent Local Voting Protocol (MLVP), a novel approach focused on dynamic load balancing for DAG scheduling in heterogeneous computing environments, where executors are represented as agents. The MLVP employs a local voting protocol to achieve effective load distribution by formulating the problem as differentiated consensus achievement. The algorithm calculates an aggregated DAG metric for each executor-node pair based on node dependencies, node availability, and executor performance. The balance of these metrics as a weighted sum is optimized with a genetic algorithm to assign tasks probabilistically, achieving efficient workload distribution via information sharing and consensus among the executors across the system, and thus improving the makespan. The effectiveness of the MLVP is demonstrated through comparisons with a state-of-the-art DAG scheduling algorithm and popular heuristics such as DONF, FIFO, Min-Min, and Max-Min. Numerical simulations show that the MLVP achieves makespan improvements of up to 70% on specific graph topologies and an average makespan reduction of 23.99% over DONF (a state-of-the-art DAG scheduling heuristic) across a diverse set of randomly generated DAGs. Notably, the algorithm's scalability is evidenced by enhanced performance with increasing numbers of executors and graph nodes.
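
    The consensus core of a local voting protocol can be sketched in a few lines: each executor nudges its queue length toward those of its neighbours. The topology, step size, and initial loads below are illustrative assumptions; the full MLVP additionally weights executor-node pairs by the aggregated DAG metric and assigns tasks probabilistically.

```python
import numpy as np

def local_voting_step(loads, adjacency, gamma=0.2):
    """One local-voting update: every executor (agent) adjusts its
    load toward its neighbours',
        x_i <- x_i + gamma * sum_j a_ij * (x_j - x_i),
    which drives the network toward load consensus."""
    diffs = adjacency * (loads[None, :] - loads[:, None])  # a_ij (x_j - x_i)
    return loads + gamma * diffs.sum(axis=1)

# Four executors on a line graph with unequal initial queue lengths.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([10.0, 2.0, 0.0, 4.0])
for _ in range(50):
    x = local_voting_step(x, A)
print(x)   # approaches the uniform load 4.0
```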

  7. Belotelov V.N., Daryina A.N.
    Tangent search method in time optimal problem for a wheeled mobile robot
    Computer Research and Modeling, 2025, v. 17, no. 3, pp. 401-421

    Searching for an optimal trajectory of motion is a complex problem investigated in many research studies. Most of them study methods applicable to the problem in general, regardless of the model of the object; with such a general approach, only a numerical solution can be found. In some cases, however, it is possible to find an optimal trajectory in closed form. The present article considers a time-optimal problem with state constraints for a wheeled mobile differential robot that moves on a horizontal plane. The mathematical model of motion is kinematic. The state constraints correspond to obstacles on the plane, defined as circles that must be avoided during motion. The independent control inputs are the wheel speeds, which are limited in absolute value. Such a model is commonly used in problems where the transients are considered insignificant, for example, when controlling tracked or wheeled devices that move slowly, prioritizing traction power over speed. The article shows that in this kinematic setting the optimal trajectory from the starting point to the finishing point is a sequence of straight segments of tangents to the obstacles and arcs of the circles that bound the obstacles. The geometrically shortest path between the start and the finish is also a sequence of straight lines and arcs; therefore, the time-optimal trajectory corresponds to one of the local minima encountered when searching for the shortest path. The article proposes a method of searching for the time-optimal trajectory based on building a graph of possible trajectories, where the edges are the possible segments of the trajectory and the vertices are the connections between them. The optimal path is sought using Dijkstra's algorithm. The theoretical foundation of the method is given, and the results of a computer investigation of the algorithm are provided.
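
    Once the tangent graph is built, the search itself is plain Dijkstra. In the sketch below the graph is given directly, with edge weights standing for the lengths of tangent segments and obstacle arcs; the vertex names and weights are made-up illustrations, not the article's data.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by Dijkstra's algorithm over a graph given as
    {vertex: [(neighbour, length), ...]}. In the tangent-graph method
    the vertices are tangency points and the edge lengths are those of
    tangent segments and obstacle arcs."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return dist[goal], path[::-1]

# Toy tangent graph: start S, two tangency points T1/T2 on an obstacle
# (T1-T2 is an arc along its boundary), finish F.
G = {'S': [('T1', 5.0), ('T2', 6.0)],
     'T1': [('T2', 1.2), ('F', 4.0)],
     'T2': [('F', 3.5)]}
print(dijkstra(G, 'S', 'F'))   # (9.0, ['S', 'T1', 'F'])
```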

  8. Jarrah A.A., Ejjbiri H., Lubashevskiy V.
    Iterative diffusion importance: advancing edge criticality evaluation in complex networks
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 783-797

    This paper is devoted to the problem of identifying and ranking critical edges in complex networks, which is part of a modern research direction in network science. Diffusion importance belongs to the set of acknowledged methods that help to identify the significant connections in a graph, those critical to retaining structural integrity. In the present work, we develop the Iterative Diffusion Importance algorithm, which is based on the re-estimation of critical topological features at each step of the graph deconstruction. The Iterative Diffusion Importance has been compared with diffusion importance and degree product, two well-known benchmark algorithms. As benchmark networks, we tested the Iterative Diffusion Importance on three standard networks, Zachary's Karate Club, the American Football Network, and the Dolphins Network, which are often used for evaluating algorithm efficiency and differ in size and density. We also proposed a new benchmark network representing the airplane communication between Japan and the US. The numerical experiment on ranking critical edges and the subsequent network decomposition demonstrated that the proposed Iterative Diffusion Importance exceeds the conventional diffusion importance in efficiency by 2–35%, depending on the network complexity, the number of nodes, and the number of edges. The only drawback of the Iterative Diffusion Importance is an increase in computational complexity and hence in runtime, but this drawback can easily be compensated for by preliminary planning of the network deconstruction or protection and by reducing the re-evaluation frequency of the iterative process.
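
    The re-estimation idea can be sketched independently of the exact scoring function: score all remaining edges on the current graph, remove the top one, re-score, and repeat. The degree-product score below is a stand-in for the diffusion-importance score (an assumption for brevity); networkx and its built-in karate-club benchmark are used for the demo.

```python
import networkx as nx

def edge_score(G, u, v):
    """Placeholder criticality score (degree product). The article's
    scheme re-evaluates its diffusion-importance score here instead;
    the iteration logic below is the part being illustrated."""
    return G.degree(u) * G.degree(v)

def iterative_edge_ranking(G):
    """Rank edges by repeatedly scoring all remaining edges on the
    *current* graph, removing the top one, and re-scoring: the
    re-estimation step that distinguishes the iterative variant from
    a one-shot static ranking."""
    H = G.copy()
    ranking = []
    while H.number_of_edges() > 0:
        u, v = max(H.edges(), key=lambda e: edge_score(H, *e))
        ranking.append((u, v))
        H.remove_edge(u, v)
    return ranking

G = nx.karate_club_graph()            # standard benchmark network
print(iterative_edge_ranking(G)[:5])  # five most critical edges
```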

  9. Bakhvalov Y.N., Kopylov I.V.
    Training and assessing the generalization ability of interpolation methods
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1023-1031

    We investigate machine learning methods with a certain kind of decision rule: the inverse-distance method of interpolation, interpolation by radial basis functions, and the method of multidimensional interpolation and approximation based on the theory of random functions; the last of these is kriging. The paper shows a method of rapid retraining of the “model” when new data are added to the existing ones; the term “model” means the interpolating or approximating function constructed from the training data. This approach reduces the computational complexity of constructing an updated “model” from $O(n^3)$ to $O(n^2)$. We also investigate the possibility of rapidly assessing the generalization ability of the “model” on the training set using leave-one-out cross-validation, eliminating the major drawback of this approach: the need to build a new “model” for each element removed from the training set.
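
    The $O(n^3)$ to $O(n^2)$ retraining claim rests on the fact that the inverse of the kernel (Gram) matrix can be updated, rather than recomputed, when one training point is added; below is a sketch of the standard block-inverse (Schur complement) identity that makes this possible. The identity itself is textbook linear algebra, not the article's specific code.

```python
import numpy as np

def add_point_inverse(K_inv, k_new, k_self):
    """Update the inverse of a Gram matrix after adding one training
    point, in O(n^2) instead of a full O(n^3) re-inversion, via the
    block-inverse identity with Schur complement s = c - k^T K^{-1} k.
    K_inv  : (n, n) inverse of the old Gram matrix
    k_new  : (n,)   kernel values between the new point and old ones
    k_self : scalar kernel value of the new point with itself."""
    u = K_inv @ k_new                    # O(n^2)
    s = k_self - k_new @ u               # Schur complement (scalar)
    n = K_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = K_inv + np.outer(u, u) / s
    out[:n, n] = -u / s
    out[n, :n] = -u / s
    out[n, n] = 1.0 / s
    return out

# Quick check against a direct inverse on a small RBF Gram matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
K = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
K_inv = np.linalg.inv(K[:5, :5])
K_inv6 = add_point_inverse(K_inv, K[:5, 5], K[5, 5])
print(np.allclose(K_inv6, np.linalg.inv(K)))   # True
```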

  10. This article explores a method of machine learning based on the theory of random functions. One of the main problems of this method is that the decision rule of a model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function, represented as a polynomial with the number of terms equal to the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, as well as noise elements of the sample. For each element $(x_i, y_i)$ of the sample, the concept of value is introduced: the deviation of the decision function of the model built without the $i$-th element, estimated at the point $x_i$, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the training process without increasing the number of terms in the decision function. In the experimental part of the article, we show how changing the amount of data affects the generalization ability of the method in a classification task.
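
    One way to compute such a value for every element at once, without building $n$ separate models, is the leave-one-out identity for kernel interpolation (Rippa's formula), $e_i = (K^{-1}y)_i / (K^{-1})_{ii}$; the sketch below uses it to rank elements and prune the weakest. The kernel, the pruning rule, and the keep fraction are illustrative assumptions.

```python
import numpy as np

def loo_values(K, y):
    """'Value' of each training element as its leave-one-out residual,
    computed for all elements at once via Rippa's identity for kernel
    interpolation:  e_i = (K^{-1} y)_i / (K^{-1})_{ii}.
    Elements with small |e_i| barely change the decision function and
    are candidates for removal (the 'weak elements' of the article)."""
    K_inv = np.linalg.inv(K)
    return (K_inv @ y) / np.diag(K_inv)

def prune_weak(X, y, K, keep_fraction=0.7):
    """Drop the least valuable samples, shrinking the decision rule."""
    e = np.abs(loo_values(K, y))
    keep = np.argsort(e)[-int(keep_fraction * len(y)):]
    keep.sort()
    return X[keep], y[keep]

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sign(X[:, 0] * X[:, 1])                        # toy labels
K = np.exp(-4 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
Xs, ys = prune_weak(X, y, K, keep_fraction=0.5)
print(len(ys))   # 20 remaining examples
```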
