Search results for 'metric':
Articles found: 35
  1. Koganov A.V.
    Uniform graph embedding into metric spaces
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 241-251

    The problem of embedding an infinite countable graph into a continuous metric space is considered. The concept of a uniform embedding is introduced: the set of vertex images has no accumulation points, and all images of graph edges have bounded length. Necessary and sufficient conditions for the existence of a uniform embedding into spaces with Euclidean and Lorentz metrics are stated in terms of the graph structure. It is proved that trees with finite branching admit a uniform embedding into a space with the absolute Minkowski metric.
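    A possible formalization of the uniform-embedding notion described above; the symbols f, ρ, ε, C are illustrative and not taken from the paper, and the "no accumulation points" requirement is strengthened here to uniform separation for concreteness:

```latex
% f : V(G) -> (X, \rho) is a uniform embedding of the graph G = (V, E) if
\inf_{u \neq v} \rho\bigl(f(u), f(v)\bigr) \;\ge\; \varepsilon > 0,
\qquad
\sup_{(u, v) \in E} \rho\bigl(f(u), f(v)\bigr) \;\le\; C < \infty .
```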

  2. Editor’s note
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1533-1538
  3. Breev A.I., Shapovalov A.V.
    Vacuum polarization of scalar field on Lie groups with Bi-invariant metric
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 989-999

    We consider vacuum polarization of a scalar field on Lie groups with a bi-invariant metric of Robertson-Walker type. Using the method of orbits, we find an expression for the vacuum expectation values of the energy-momentum tensor of the scalar field, which are determined by the representation character of the group. It is shown that Einstein's equations with this energy-momentum tensor are consistent. As an example, we consider the isotropic Bianchi type IX model.

    Views (last year): 2.
  4. Ahmed M., Hegazy M., Klimchik A.S., Boby R.A.
    Lidar and camera data fusion in self-driving cars
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253

    Sensor fusion is one of the important solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that models with high environment-perception quality cannot run in real time. Our article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering 3 main classes: cars, cyclists and pedestrians. We fuse the output of a 3D detector that takes its input from Lidar with the output of a 2D detector that takes its input from the camera, to obtain better perception output than either of them separately while ensuring real-time operation. We addressed the problem using a 3D detector model (Complex-Yolov3) and a 2D detector model (Yolo-v3), applying an image-based fusion method that combines Lidar and camera information with a fast and efficient late-fusion technique discussed in detail in this article. We used the mean average precision (mAP) metric to evaluate our object detection model and to compare the proposed approach with the individual detectors. Finally, we show results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 Lidar and Leopard USB cameras. We used Python to develop our algorithm and then validated it on the KITTI dataset. We used ros2 along with C++ to verify the algorithm on the dataset obtained from our hardware configuration, which proved that the proposed approach can give good results and work efficiently in real time in practical situations.
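    A minimal sketch of the mean-average-precision (mAP) evaluation mentioned in the abstract, not the authors' code; the box format (x1, y1, x2, y2), the IoU threshold and the helper names are assumptions:

```python
# Illustrative mAP sketch (one class, one image); not the authors' code.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (confidence, box); gts: list of ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p[0])      # highest confidence first
    matched, tp_fp = set(), []
    for conf, box in preds:
        best_j = max((j for j in range(len(gts)) if j not in matched),
                     key=lambda j: iou(box, gts[j]), default=None)
        if best_j is not None and iou(box, gts[best_j]) >= iou_thr:
            matched.add(best_j)
            tp_fp.append(1)                         # true positive
        else:
            tp_fp.append(0)                         # false positive
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for k, hit in enumerate(tp_fp, start=1):
        cum_tp += hit
        recall, precision = cum_tp / max(len(gts), 1), cum_tp / k
        ap += precision * (recall - prev_recall)    # area under the P-R curve
        prev_recall = recall
    return ap

# mAP = mean of per-class average precisions, e.g. over {car, cyclist, pedestrian}.
```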

  5. Ahmad U., Ivanov V.
    Automating high-quality concept banks: leveraging LLMs and multimodal evaluation metrics
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1555-1567

    Interpretability of recent deep learning models has become an epicenter of research, particularly in sensitive domains such as healthcare and finance. Concept bottleneck models have emerged as a promising approach for achieving transparency and interpretability by leveraging a set of human-understandable concepts as an intermediate representation before the prediction layer. However, manual concept annotation is discouraged due to the time and effort involved. Our work explores the potential of large language models (LLMs) for generating high-quality concept banks and proposes a multimodal evaluation metric to assess the quality of generated concepts. We investigate three key research questions: the ability of LLMs to generate concept banks comparable to existing knowledge bases such as ConceptNet, the sufficiency of unimodal text-based semantic similarity for evaluating concept-class label associations, and the effectiveness of multimodal information in quantifying concept generation quality compared to unimodal concept-label semantic similarity. Our findings reveal that multimodal models outperform unimodal approaches in capturing concept-class label similarity. Furthermore, our generated concepts for the CIFAR-10 and CIFAR-100 datasets surpass those obtained from ConceptNet and the baseline comparison, demonstrating the standalone capability of LLMs in generating high-quality concepts. Being able to automatically generate and evaluate high-quality concepts will enable researchers to quickly adapt and iterate on a new dataset with little to no effort before feeding it into concept bottleneck models.
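    A sketch of how generated concepts can be scored against class labels in a shared multimodal embedding space (CLIP-style) rather than by text-only similarity; the encoder that produces the embeddings is assumed to be given, and the function names and shapes are illustrative:

```python
# Sketch: compare a generated concept bank with class labels via cosine
# similarity of multimodal embeddings; not tied to the authors' pipeline.
import numpy as np

def cosine_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of `a` and rows of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def concept_bank_score(concept_emb: np.ndarray, class_emb: np.ndarray) -> float:
    """Average similarity of each concept to its best-matching class label."""
    sims = cosine_matrix(concept_emb, class_emb)   # (n_concepts, n_classes)
    return float(sims.max(axis=1).mean())

# Usage: embed the LLM-generated concepts and the class names (or class images)
# with the same multimodal encoder, then compare concept banks by this score.
```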

  6. Zhitnukhin N.A., Zhadan A.Y., Kondratov I.V., Allahverdyan A.L., Granichin O.N., Petrosian O.L., Romanovskii A.V., Kharin V.S.
    Multi-agent local voting protocol for online DAG scheduling
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 29-44

    Scheduling computational workflows represented by directed acyclic graphs (DAGs) is crucial in many areas of computer science, such as cloud/edge tasks with distributed workloads and data mining. The complexity of online DAG scheduling is compounded by the large number of computational nodes, data transfer delays, heterogeneity (by type and processing power) of executors, precedence constraints imposed by the DAG, and the nonuniform arrival of tasks. This paper introduces the Multi-Agent Local Voting Protocol (MLVP), a novel approach focused on dynamic load balancing for DAG scheduling in heterogeneous computing environments, where executors are represented as agents. The MLVP employs a local voting protocol to achieve effective load distribution by formulating the problem as differentiated consensus achievement. The algorithm calculates an aggregated DAG metric for each executor-node pair based on node dependencies, node availability, and executor performance. The balance of these metrics as a weighted sum is optimized with a genetic algorithm, and tasks are assigned probabilistically, achieving efficient workload distribution via information sharing and consensus among the executors across the system, and thus improving makespan. The effectiveness of the MLVP is demonstrated through comparisons with a state-of-the-art DAG scheduling algorithm and popular heuristics such as DONF, FIFO, Min-Min, and Max-Min. Numerical simulations show that MLVP achieves makespan improvements of up to 70% on specific graph topologies and an average makespan reduction of 23.99% over DONF (a state-of-the-art DAG scheduling heuristic) across a diverse set of randomly generated DAGs. Notably, the algorithm's scalability is evidenced by improved performance with increasing numbers of executors and graph nodes.
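    An illustrative sketch of the "weighted sum of per-executor metrics, then probabilistic task assignment" step described above; the metric columns, weights and temperature are placeholders, and the genetic algorithm that tunes the weights is not shown:

```python
# Aggregated metric -> probabilistic assignment; placeholders, not MLVP itself.
import numpy as np

rng = np.random.default_rng(0)

def assign_task(metrics: np.ndarray, weights: np.ndarray, temperature: float = 1.0) -> int:
    """metrics: (n_executors, n_metrics), e.g. columns for node dependencies,
    node availability and executor performance; returns an executor index."""
    scores = metrics @ weights                 # aggregated DAG metric per executor
    probs = np.exp(-scores / temperature)      # lower score -> higher probability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Example: 4 executors, 3 metrics, weights as a genetic algorithm might set them.
print(assign_task(rng.random((4, 3)), np.array([0.5, 0.3, 0.2])))
```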

  7. Ivanova A.S., Omelchenko S.S., Kotliarova E.V., Matyukhin V.V.
    Calibration of model parameters for calculating correspondence matrix for Moscow
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 961-978

    In this paper, we consider the problem of restoring the correspondence matrix based on observations of real correspondences in Moscow. Following the conventional approach [Gasnikov et al., 2013], the transport network is considered as a directed graph whose edges correspond to road sections and whose vertices correspond to areas that the traffic participants leave or enter. The number of city residents is considered constant. The problem of restoring the correspondence matrix is to calculate all correspondences from area i to area j.

    To restore the matrix, we propose to use one of the most popular methods of calculating the correspondence matrix in urban studies, the entropy model. Following [Wilson, 1978], we describe the evolutionary justification of the entropy model and the main idea of the transition to the entropy-linear programming (ELP) problem in calculating the correspondence matrix. To solve the ELP problem, it is proposed to pass to the dual problem. In this paper, we describe several numerical optimization methods for solving the dual problem: the Sinkhorn method and the Accelerated Sinkhorn method. We provide numerical experiments for the following variants of the cost function: a linear cost function and a superposition of power and logarithmic cost functions. In these functions, the cost is a combination of average travel time and distance between areas, which depends on the parameters. The correspondence matrix is calculated for multiple sets of parameters, and then we evaluate the quality of the restored matrix relative to the known correspondence matrix.
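    A minimal Sinkhorn sketch for the entropy model: find a nonnegative matrix d with prescribed row sums (departures l) and column sums (arrivals w) minimizing the cost plus an entropy term; the names, the regularization gamma and the fixed iteration count are illustrative, not the authors' implementation:

```python
# Minimal Sinkhorn sketch for the entropy model of the correspondence matrix.
import numpy as np

def sinkhorn(cost: np.ndarray, l: np.ndarray, w: np.ndarray,
             gamma: float = 1.0, n_iter: int = 1000) -> np.ndarray:
    """Correspondence matrix with row sums l (departures) and column sums w
    (arrivals) for the cost matrix `cost`; totals of l and w must coincide."""
    K = np.exp(-cost / gamma)                  # Gibbs kernel
    u, v = np.ones_like(l), np.ones_like(w)
    for _ in range(n_iter):
        u = l / (K @ v)                        # rescale rows
        v = w / (K.T @ u)                      # rescale columns
    return u[:, None] * K * v[None, :]

# Tiny example with 3 areas.
cost = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
d = sinkhorn(cost, l=np.array([10.0, 20.0, 30.0]), w=np.array([15.0, 15.0, 30.0]))
print(d.round(2), d.sum(axis=1), d.sum(axis=0))
```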

    We assume that the noise in the restored correspondence matrix is Gaussian and therefore use the standard deviation as the quality metric. The article also provides an overview of gradient-free optimization methods for solving non-convex problems. Since the number of parameters of the cost function is small, we use grid search to find the optimal parameters of the cost function. Thus, the correspondence matrix is calculated for each set of parameters, and the quality of the restored matrix is then evaluated relative to the known correspondence matrix. Finally, according to the minimum residual value for each cost function, we determine for which cost function and at what parameter values the restored matrix best describes the real correspondences.
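    A sketch of the grid-search and standard-deviation evaluation step; restore_matrix stands for the entropy-model solver (for example, the Sinkhorn sketch above) and is an assumed callback, and the parameter names are hypothetical:

```python
# Grid search over cost-function parameters with an RMSE-style quality metric.
import itertools
import numpy as np

def rmse(restored: np.ndarray, known: np.ndarray) -> float:
    """Standard deviation of the elementwise residual."""
    return float(np.sqrt(np.mean((restored - known) ** 2)))

def grid_search(param_grid: dict, restore_matrix, known: np.ndarray):
    """param_grid: e.g. {'alpha': [...], 'beta': [...]}; returns best (params, score)."""
    best_params, best_score = None, np.inf
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = rmse(restore_matrix(**params), known)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```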

  8. Maslovskiy A.Y., Sumenkov O.Y., Vorkutov D.A., Chukanov S.V.
    Application of discrete multicriteria optimization methods for the digital predistortion model design
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 281-300

    In this paper, we investigate several alternative ideas for the design of digital predistortion models for radio-frequency power amplifiers. Compared to the greedy search algorithm, these algorithms identify a well-performing combination of model parameters faster. For the subsequent implementation, different metrics of model cost and score used in the optimization process allow us to obtain sparse model selections that balance model accuracy against model resources (that is, implementation complexity). The simulation results show that the combinations obtained with the explored algorithms achieve the best performance after a smaller number of simulations.

  9. Ignashin I.N., Yarmoshik D.V.
    Modifications of the Frank–Wolfe algorithm in the problem of finding the equilibrium distribution of traffic flows
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 53-68

    The paper presents various modifications of the Frank–Wolfe algorithm for the equilibrium traffic assignment problem. The Beckmann model is used as the model for the experiments. The article focuses primarily on the choice of the direction of the basic step of the Frank–Wolfe algorithm. The following algorithms are presented: Conjugate Frank–Wolfe (CFW), Bi-conjugate Frank–Wolfe (BFW), and Fukushima Frank–Wolfe (FFW). Each modification corresponds to a different approach to the choice of this direction. Some of these modifications were described in previous works of the authors. In this article, the following algorithms are proposed: N-conjugate Frank–Wolfe (NFW) and Weighted Fukushima Frank–Wolfe (WFFW). These algorithms are a conceptual continuation of the BFW and FFW algorithms. Thus, while BFW uses at each iteration the last two directions of the previous iterations to select the next direction conjugate to them, the proposed NFW algorithm uses the last N previous directions. In Fukushima Frank–Wolfe, the average of several previous directions is taken as the next direction. Based on this algorithm, the WFFW modification is proposed, which applies exponential smoothing to the previous directions. For a comparative analysis, experiments with the various modifications were carried out on several data sets representing urban structures, taken from publicly available sources. The relative gap value was taken as the quality metric. The experimental results showed the advantage of algorithms that use previous directions for step selection over the classic Frank–Wolfe algorithm. In addition, an improvement in efficiency was revealed when using more than two conjugate directions: for example, on various datasets the 3FW modification showed the best convergence. The proposed WFFW modification often overtook FFW and CFW, although it performed worse than NFW.
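    A schematic classic Frank–Wolfe loop with the relative-gap quality metric mentioned above; link_costs(x) and all_or_nothing(t) (shortest-path network loading) are assumed callbacks of the traffic model, and this is not the authors' implementation of the proposed modifications:

```python
# Classic Frank-Wolfe step with a relative-gap stopping criterion.
import numpy as np

def frank_wolfe(x0: np.ndarray, link_costs, all_or_nothing,
                max_iter: int = 1000, eps: float = 1e-4) -> np.ndarray:
    x = x0.astype(float).copy()
    for k in range(max_iter):
        t = link_costs(x)                  # current link travel times
        y = all_or_nothing(t)              # extreme point: shortest-path flows
        rel_gap = t @ (x - y) / (t @ x)    # relative gap (quality metric)
        if rel_gap < eps:
            break
        step = 2.0 / (k + 2.0)             # classic step size; line search in practice
        x += step * (y - x)                # move along the Frank-Wolfe direction
    return x
```

    The modifications discussed in the abstract (CFW, BFW, NFW, FFW, WFFW) replace the direction y - x above with combinations of previous directions.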

  10. Moskalev P.V.
    Estimates of threshold and strength of percolation clusters on square lattices with (1,π)-neighborhood
    Computer Research and Modeling, 2014, v. 6, no. 3, pp. 405-414

    In this paper we consider statistical estimates of the threshold and strength of percolation clusters on square lattices. The percolation threshold pc and the strength of percolation clusters P for a square lattice with (1,π)-neighborhood depend not only on the lattice dimension, but also on the Minkowski exponent d. To estimate the strength of percolation clusters P, a new method of averaging the relative frequencies of the target subset of lattice sites is proposed. The method is implemented in the SPSL package, released under the GNU GPL-3 in the free programming language R.
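    A simplified Monte Carlo sketch of estimating the percolation strength P as the relative frequency of lattice sites belonging to a top-to-bottom spanning cluster; it uses the ordinary 4-site neighborhood, whereas the paper's (1,π)-neighborhood with Minkowski exponent d generalizes the notion of neighbor:

```python
# Relative-frequency estimate of the strength P on an n x n site lattice.
from collections import deque
import numpy as np

def spanning_fraction(p: float, n: int = 100, trials: int = 50, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    fractions = []
    for _ in range(trials):
        occupied = rng.random((n, n)) < p
        visited = np.zeros((n, n), dtype=bool)
        in_spanning = np.zeros((n, n), dtype=bool)
        for start in range(n):                       # clusters touching the top row
            if occupied[0, start] and not visited[0, start]:
                cells, reaches_bottom = [], False
                queue = deque([(0, start)])
                visited[0, start] = True
                while queue:
                    i, j = queue.popleft()
                    cells.append((i, j))
                    reaches_bottom |= (i == n - 1)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if 0 <= a < n and 0 <= b < n and occupied[a, b] and not visited[a, b]:
                            visited[a, b] = True
                            queue.append((a, b))
                if reaches_bottom:                   # spanning cluster found
                    for i, j in cells:
                        in_spanning[i, j] = True
        fractions.append(in_spanning.mean())         # relative frequency of sites
    return float(np.mean(fractions))

print(spanning_fraction(0.6))   # above the classical site threshold ~0.593
```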

    Views (last year): 4. Citations: 5 (RSCI).