Control theory methods for creating market structures
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 839-859
Control theory methods for creating market structures are discussed for two cases: when market participants pursue the aims of 1) maximal growth and 2) maximum economic efficiency of their firms. For the first case, a method based on the principles of variable structure systems is developed. For the second case, a dynamic game approach is proposed, based on the computation of Nash–Cournot and Stackelberg strategies with the help of the Z-transform.
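As a rough illustration of the two equilibrium concepts named in the abstract, the sketch below computes static Nash–Cournot and Stackelberg outputs for a linear duopoly. It is a minimal stand-in: the demand parameters and costs are invented, and the paper's dynamic game and Z-transform machinery are not reproduced.

```python
# Static Cournot and Stackelberg outputs for inverse demand p = a - b*(q1 + q2)
# with constant marginal costs c1, c2 (all values illustrative).
a, b = 100.0, 1.0
c1, c2 = 10.0, 10.0

# Cournot: solve the pair of best responses q_i = (a - c_i - b*q_j) / (2b).
q1_c = (a - 2 * c1 + c2) / (3 * b)
q2_c = (a - 2 * c2 + c1) / (3 * b)

# Stackelberg with firm 1 as leader: substitute the follower's best response
# into the leader's profit and maximize.
q1_s = (a - 2 * c1 + c2) / (2 * b)
q2_s = (a - c2) / (2 * b) - q1_s / 2

print(f"Cournot:     q1={q1_c:.1f}, q2={q2_c:.1f}")
print(f"Stackelberg: q1={q1_s:.1f}, q2={q2_s:.1f}")
```

With symmetric costs this prints the familiar pattern: equal Cournot outputs and a larger leader share under Stackelberg.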
Mathematical models of combat and military operations
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 217-242
Simulation of combat and military operations is a most important scientific and practical task aimed at providing command with quantitative grounds for decision-making. The first models of combat were developed during the First World War (M. Osipov, F. Lanchester), and they are now widely used in connection with the massive introduction of automation tools. At the same time, models of combat and war do not fully take into account the moral potential of the parties to the conflict, which motivates the further development of such models. A probabilistic model of combat is considered in which the combat superiority parameter is determined through a morale parameter (the ratio of the percentages of losses sustained by the parties) and a technological superiority parameter. To assess the latter, the following are taken into account: command experience (the ability to organize coordinated actions), the reconnaissance, fire, and maneuver capabilities of the parties, and the operational (combat) support capabilities. A game-theoretic offense-defense model is developed that takes into account the actions of the first and second echelons (reserves) of the parties. The objective function of the attackers in the model is the product of the probability of a breakthrough of one of the defense points by the first echelon and the probability that the second echelon repels the counterattack of the defenders' reserve. The particular problem of controlling the breakthrough of defense points is solved, and the optimal distribution of combat units between the echelons is found. The share of troops allocated by the parties to the second echelon (reserve) increases with the value of the attackers' aggregate combat superiority parameter and decreases with the value of the combat superiority parameter when repelling a counterattack. When planning a battle (engagement, operation) and distributing troops between echelons, it is important to know not the exact number of enemy troops but their capabilities, as well as the degree of preparedness of the defense, which does not contradict the experience of warfare. Depending on the conditions of the situation, the goal of an offensive may be to defeat the enemy, to quickly capture an important area in the depth of the enemy's defense, to minimize one's own losses, etc. To scale the offense-defense model across these goals, dependencies of the losses and the rate of advance on the initial ratio of the combat potentials of the parties were found. The influence of social costs on the course and outcome of wars is taken into account. A theoretical explanation is given of defeat in a military campaign against a technologically weak adversary when the goal of the war is unclear to society. To account for the influence of psychological operations and information wars on the moral potential of individuals, a model of social and information influence is used.
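The Lanchester model mentioned at the start of the abstract is simple enough to state in a few lines. The sketch below integrates the classical aimed-fire (square-law) equations; the attrition coefficients and force sizes are illustrative, and the paper's probabilistic model with morale parameters is not reproduced.

```python
# Lanchester aimed-fire ("square-law") model: dx/dt = -b*y, dy/dt = -a*x.
# Meaningful only while both strengths stay positive.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.06, 0.04        # attrition rates (hypothetical)
x0, y0 = 1000.0, 800.0   # initial force sizes (hypothetical)

def rhs(t, s):
    x, y = s
    return [-b * y, -a * x]

sol = solve_ivp(rhs, (0.0, 10.0), [x0, y0])
x_end, y_end = sol.y[:, -1]
print(f"final strengths: x={x_end:.0f}, y={y_end:.0f}")
# The quantity a*x^2 - b*y^2 is conserved, hence the "square law":
print(f"invariant: start {a*x0**2 - b*y0**2:.0f}, end {a*x_end**2 - b*y_end**2:.0f}")
```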
An effective segmentation approach for liver computed tomography scans using fuzzy exponential entropy
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 195-202
Accurate segmentation of the liver plays an important role in contouring during diagnosis and treatment planning. Image analysis and processing are widely used in medical diagnostics and therapeutic applications. Liver segmentation refers to the process of automatic or semi-automatic detection of liver boundaries in an image. A major difficulty in liver image segmentation is high variability, as human anatomy itself shows major modes of variation. In this paper, an approach for computed tomography (CT) liver segmentation is proposed that combines exponential entropy and fuzzy c-partition. The entropy concept has been utilized in various applications in the imaging and computing domain. Entropy-based thresholding techniques have attracted considerable attention over recent years in the image analysis and processing literature and are among the most powerful techniques in image segmentation. In the proposed approach, the CT liver image is transformed into the fuzzy domain, and fuzzy entropies are defined for the image object and background. In the threshold selection procedure, the proposed approach considers not only the information of the image background and object but also the interactions between them: the threshold is selected by finding a combination of membership-function parameters that maximizes the total fuzzy exponential entropy. The Differential Evolution (DE) algorithm is used to optimize the exponential entropy measure and obtain the image thresholds. Experiments on different CT liver scans demonstrate the efficiency of the proposed approach. Based on the visual clarity of images segmented with varied threshold values, it was observed that the visual quality of the segmented liver image is better at higher threshold levels.
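To make the thresholding procedure concrete, here is a hedged sketch of single-threshold selection by maximizing a fuzzy exponential entropy with SciPy's Differential Evolution. The membership family (a linear Zadeh-style ramp) and the entropy form (the Pal-Pal exponential entropy) are stand-ins, since the abstract does not fix the exact formulations, and the histogram is synthetic rather than a CT scan.

```python
# Single-threshold selection by maximizing a fuzzy exponential entropy
# with Differential Evolution. Membership: a linear ramp with endpoints
# (a, c); entropy: h(mu) = mu*e^(1-mu) + (1-mu)*e^mu - 1. Both are
# illustrative stand-ins for the paper's formulations.
import numpy as np
from scipy.optimize import differential_evolution

def neg_fuzzy_exp_entropy(params, hist, levels):
    a, c = sorted(params)                                     # ramp endpoints
    mu = np.clip((levels - a) / max(c - a, 1e-9), 0.0, 1.0)   # object membership
    h = mu * np.exp(1.0 - mu) + (1.0 - mu) * np.exp(mu) - 1.0
    return -np.sum(hist * h)                                  # negated: DE minimizes

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(70, 12, 4000), rng.normal(160, 15, 6000)])
hist, edges = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 255), density=True)
levels = 0.5 * (edges[:-1] + edges[1:])

res = differential_evolution(neg_fuzzy_exp_entropy, bounds=[(0, 255), (0, 255)],
                             args=(hist, levels), seed=0)
a, c = sorted(res.x)
print(f"threshold (mu = 0.5 crossover): {0.5 * (a + c):.1f}")
```

The crossover point of the optimized membership function, where mu = 0.5, serves as the threshold separating object from background.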
Bibliographic link prediction using contrast resampling technique
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1317-1336
The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique, whose innovation is the consideration of the context of the link, taking into account the boundaries of the fragment, which most strongly affects the probability that a bibliographic link is present in it. The training set was formed from automatically labeled samples, fragments of three sentences with class labels «without link» and «with link», that satisfy the contrast requirement: samples of different classes lie far apart in the source text. The feature space was built automatically from term occurrence statistics and was expanded with additional features: entities (names, numbers, quotations, and abbreviations) recognized in the text.
A series of experiments was carried out on the archives of the scientific journals «Law enforcement review» (273 articles) and «Journal Infectology» (684 articles). The classification was carried out with the Nearest Neighbors, RBF SVM, Random Forest, and Multilayer Perceptron models, with optimal hyperparameters selected for each classifier.
The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%), which is, however, not as fast as the linear one, which also showed high accuracy with contrast resampling (91–94%). These values are superior to those reported for NER and Sentiment Analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online.
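A minimal sketch of the contrast resampling idea, under stated assumptions: three-sentence windows become samples, and a sample is kept only if its center is sufficiently far from every opposite-class center. The sentence splitter, the "[n]" link marker, the distance margin, and the toy corpus are all illustrative stand-ins for the paper's procedure.

```python
# Contrast resampling sketch: keep only 3-sentence fragments whose centers
# are distant from all opposite-class centers, then train a fast linear model
# on term-occurrence features.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def contrast_samples(sentences, has_link, margin=2):
    """Yield (fragment, label) for 3-sentence windows whose centers are at
    least `margin` sentences away from every opposite-class center."""
    centers = list(range(1, len(sentences) - 1))
    for i in centers:
        label = int(has_link[i])
        opposite = [j for j in centers if int(has_link[j]) != label]
        if all(abs(i - j) >= margin for j in opposite):
            yield " ".join(sentences[i - 1:i + 2]), label

# Toy corpus: "[n]" markers stand for bibliographic links.
text = ("Prior work exists [1]. It was extended [2]. Results improved. "
        "We now describe data. The corpus is large. Labels are manual. "
        "Metrics follow [3]. Accuracy is high [4]. We conclude here.")
sentences = re.split(r"(?<=\.)\s+", text)
has_link = [bool(re.search(r"\[\d+\]", s)) for s in sentences]

data = list(contrast_samples(sentences, has_link))
if data:
    X_text, y = zip(*data)
    X = CountVectorizer().fit_transform(X_text)   # term-occurrence features
    clf = LinearSVC().fit(X, y)                   # fast linear classifier
    print(clf.predict(X))
```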
Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444
The article is devoted to studying the application of convex optimization methods to solving the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation belongs to the elliptic type. The Cauchy problem is formulated as an inverse problem and is reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using the solutions of boundary value problems, which, in turn, are well-posed and can be approximately solved by standard numerical methods such as finite-difference schemes and Fourier series expansions. The convergence of the applied fast gradient method and the quality of the solution obtained in this way are experimentally investigated. The experiment shows that the accelerated gradient method (the Similar Triangle Method) converges faster than the non-accelerated method. Theorems on the computational complexity of the resulting algorithms are formulated and proved. It is found that Fourier series expansions are better than finite-difference schemes in terms of computation speed and improve the quality of the obtained solution. An attempt was made to use restarts of the Similar Triangle Method after halving the residual of the functional; in this case the convergence does not improve, which confirms the absence of strong convexity. The experiments show that the inaccuracy of the calculations is more adequately described by the additive concept of noise in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, the use of accelerated gradient optimization methods can be a way to solve inverse problems effectively.
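For reference, one common form of the Similar Triangle Method is short enough to sketch. The version below follows the standard sequence-of-coefficients scheme for an L-smooth convex objective; the PDE-constrained functional of the article is replaced by a toy ill-conditioned quadratic, and the step-size rule is the plain non-adaptive one.

```python
# One common variant of the Similar Triangle Method for an L-smooth convex
# objective, run on a toy ill-conditioned quadratic standing in for the
# reduced functional of the Cauchy problem.
import numpy as np

def similar_triangles(grad, L, x0, iters=500):
    x = u = np.array(x0, dtype=float)
    A = 0.0
    for _ in range(iters):
        # alpha solves L*alpha^2 = A + alpha, so that A_next = A + alpha.
        alpha = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)
        A_next = A + alpha
        y = (alpha * u + A * x) / A_next
        u = u - alpha * grad(y)
        x = (alpha * u + A * x) / A_next
        A = A_next
    return x

D = np.array([1.0, 1e-4])                 # eigenvalues: condition number 10^4
grad = lambda x: D * x                    # gradient of f(x) = 0.5 * x^T diag(D) x
x_min = similar_triangles(grad, L=float(D.max()), x0=[1.0, 1.0])
print(np.linalg.norm(x_min))              # tends to 0, the minimizer
```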
Semantic structuring of text documents based on patterns of natural language entities
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197
The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space of document descriptions and to search for semantically related words by topic. Dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide documents into topics in the latent space.
It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, the analysis of which requires a common dictionary. Whether a word belongs to the common dictionary is initially unknown. Objects of the two classes are considered as oppositions to each other. Quantitative parameters of oppositionality are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.
To calculate stability, the feature values are divided into non-intersecting intervals, the optimal boundaries of which are determined by a special criterion. Maximum stability is achieved when the boundaries of each interval contain values of only one of the two classes.
The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. Patterns, and latent features based on them, are formed according to the rules of hierarchical agglomerative grouping.
The set of latent features is used for cluster analysis of documents by metric grouping algorithms. The analysis applies a coefficient of content authenticity based on data about the class membership of documents. The coefficient is a numerical characteristic of the dominance of class representatives in groups.
To divide documents into topics, it is proposed to use the union of groups with respect to their centers. As patterns for each topic, a sequence of words from the common dictionary ordered by frequency of occurrence is considered.
The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the common dictionary for four topics are formed.
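To make the stability computation above more tangible, here is a hedged sketch that splits a feature's values into non-overlapping equal-count intervals and scores how well the intervals isolate single classes. The paper's special boundary-optimization criterion is not reproduced; purity of fixed intervals is used only as an illustrative proxy.

```python
# Illustrative proxy for feature stability: split sorted feature values into
# non-overlapping intervals and average the share of the majority class in
# each interval. A separable feature scores higher than an uninformative one.
import numpy as np

def stability(values, labels, n_intervals=4):
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    parts = np.array_split(np.arange(len(v)), n_intervals)   # equal-count intervals
    purities = [max(np.mean(y[p] == c) for c in np.unique(y))
                for p in parts if len(p)]
    return float(np.mean(purities))

rng = np.random.default_rng(1)
x_good = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])  # separable
x_bad = rng.normal(0, 1, 100)                                          # uninformative
y = np.array([0] * 50 + [1] * 50)
print(stability(x_good, y), stability(x_bad, y))  # higher for the separable feature
```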
On Accelerated Methods for Saddle-Point Problems with Composite Structure
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467
We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of a recently proposed nearly-optimal algorithm that does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e., to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.
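The setting is easier to picture with a tiny example. The sketch below runs a plain extragradient iteration on a toy strongly-convex-strongly-concave saddle-point problem; it illustrates only the problem class, not the paper's accelerated variance-reduced framework or its composite handling.

```python
# Toy strongly-convex-strongly-concave saddle-point problem
# min_x max_y f(x, y) with f(x, y) = mu_x/2 ||x||^2 + <x, y> - mu_y/2 ||y||^2,
# solved with a plain extragradient iteration (parameters illustrative).
import numpy as np

mu_x, mu_y = 1.0, 0.5

def grads(x, y):
    return mu_x * x + y, x - mu_y * y   # (grad_x f, grad_y f)

x, y = np.ones(3), -np.ones(3)
eta = 0.2
for _ in range(200):
    gx, gy = grads(x, y)
    x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation step
    gx, gy = grads(x_half, y_half)
    x, y = x - eta * gx, y + eta * gy             # update step
print(np.linalg.norm(x), np.linalg.norm(y))       # both tend to 0, the saddle point
```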
Development of a hybrid simulation model of the assembly shop
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1359-1379
In the presented work, a hybrid simulation model of an assembly shop is developed in the AnyLogic environment, which makes it possible to select the parameters of the production system. To build the hybrid model, discrete-event and agent-based modeling are combined into a single model with integrated interaction. Within the framework of this work, a mechanism of a production system consisting of several participant agents is described. Each agent corresponds to a class in which a set of agent parameters is specified. The simulation model takes into account three main groups of sequentially performed operations and defines the logic for handling rejected sets. The product assembly process takes place in a multi-phase open queueing system with waiting; there are also signs of a closed system, namely scrap flows returned for reprocessing. When creating the distribution system in the segment, control over the execution of requests in a FIFO queue is mandatory. To evaluate the production system, the simulation model includes several output functions that describe the number of finished products, the average product preparation time, the number and percentage of rejects, and the simulation result under study, as well as variables that store the calculated utilization factors. A series of simulation experiments was carried out to study the behavior of the system's agents in terms of the overall performance indicators of the production system. It was found that the average product preparation time is strongly influenced by such parameters as the average assembly speed and the average time to complete operations. Within the given constraint interval, a set of parameters was selected that achieves the highest possible throughput of the assembly line. This experiment implements the basic principle of agent-based modeling: decentralized agents make individual contributions and affect the operation of the entire simulated system as a whole. As a result of the experiments, thanks to the selection over a large set of parameters, high performance indicators of the assembly shop were achieved, namely: the productivity indicator was increased by 60%, and the average assembly time of products was reduced by 38%.
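A minimal discrete-event analogue of the described line can be sketched in Python with SimPy rather than AnyLogic: three sequential stages with FIFO queues and a scrap flow back to the first stage. All durations, capacities, and the reject probability below are illustrative, not the model's parameters.

```python
# Three-stage assembly line with FIFO queues and rework of rejects,
# built with SimPy as a stand-in for the AnyLogic model.
import random
import simpy

REJECT_P = 0.1
stats = {"finished": 0, "rejected": 0}

def product(env, stages):
    stage = 0
    while stage < len(stages):
        resource, mean_t = stages[stage]
        with resource.request() as req:     # SimPy resources serve FIFO
            yield req
            yield env.timeout(random.expovariate(1.0 / mean_t))
        if random.random() < REJECT_P:      # reject: send back to stage 0
            stats["rejected"] += 1
            stage = 0
        else:
            stage += 1
    stats["finished"] += 1

def source(env, stages, interarrival=2.0):
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival))
        env.process(product(env, stages))

random.seed(0)
env = simpy.Environment()
stages = [(simpy.Resource(env, capacity=2), 1.5),  # preparation
          (simpy.Resource(env, capacity=1), 1.0),  # assembly
          (simpy.Resource(env, capacity=1), 0.5)]  # inspection / packing
env.process(source(env, stages))
env.run(until=1000)
print(stats)
```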
Simulation of traffic flows based on the quasi-gasdynamic approach and the cellular automata theory using supercomputers
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 175-194
The purpose of the study is to simulate the dynamics of traffic flows on city road networks as well as to systematize the current state of affairs in this area. The introduction states that the development of intelligent transportation systems as an integral part of modern transportation technologies is coming to the fore. The core of these systems contains adequate mathematical models that make it possible to simulate traffic as close to reality as possible. The necessity of using supercomputers due to the large amount of computation is also noted; therefore, special parallel algorithms need to be created. The beginning of the article is devoted to an up-to-date classification of traffic flow models and a characterization of each class, including their distinctive features and relevant examples with links. Further, the main focus of the article shifts toward the macroscopic and microscopic models developed by the authors and the place of these models in the aforementioned classification. The macroscopic model is based on the continuum approach and uses the ideology of quasi-gasdynamic systems of equations. Its advantages over existing models of this class are indicated. The model is presented in both one-dimensional and two-dimensional versions. Both versions feature the ability to study multi-lane traffic. In the two-dimensional version this is made possible by introducing the concept of "lateral" velocity, i.e., the speed of changing lanes. The latter version allows carrying out calculations in a computational domain that corresponds to the actual geometry of the road. The section also presents test results of modeling vehicle dynamics on a road fragment with a local widening and on a road fragment with traffic lights, including several variants of traffic light regimes. In the first case, the calculations make it possible to draw interesting conclusions about the impact of a road widening on the road capacity as a whole, and in the second case, to select the optimal regime configuration to obtain the "green wave" effect. The microscopic model is based on the cellular automata theory and the single-lane Nagel–Schreckenberg model and is generalized to the multi-lane case by the authors of the article. The model implements various behavioral strategies of drivers. Test computations for a real transport network section in Moscow city center are presented. To achieve an adequate representation of vehicles moving through the network according to road traffic regulations, the authors implemented special algorithms adapted for parallel computing. Test calculations were performed on the K-100 supercomputer installed in the Centre of Collective Usage of KIAM RAS.
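The single-lane Nagel–Schreckenberg automaton that the authors generalize is compact enough to sketch. Road length, car count, maximum speed, and the dawdling probability below are illustrative; the multi-lane rules, behavioral strategies, and parallel algorithms of the article are not reproduced.

```python
# Single-lane Nagel-Schreckenberg cellular automaton on a periodic road:
# 1) accelerate, 2) brake to the gap ahead, 3) random slowdown, 4) move.
import numpy as np

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=np.random):
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len      # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                    # 1. accelerate
    vel = np.minimum(vel, gaps)                         # 2. avoid collision
    dawdle = rng.random(len(vel)) < p_slow
    vel = np.maximum(vel - dawdle, 0)                   # 3. random slowdown
    return (pos + vel) % road_len, vel                  # 4. move

rng = np.random.default_rng(2)
road_len, n_cars = 200, 40
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(500):
    pos, vel = nasch_step(pos, vel, road_len, rng=rng)
print("mean speed:", vel.mean())
```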
Adaptive first-order methods for relatively strongly convex optimization problems
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 445-472
The article is devoted to first-order adaptive methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition with a distance in a more general sense (more precisely, with Bregman's divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows us to apply the proposed techniques to many applied problems, such as the intersection of ellipsoids problem (IEP), the Support Vector Machine (SVM) for a binary classification problem, etc. If the objective functional is convex, the condition of relative strong convexity can be satisfied by regularizing the problem. In this work, we propose, for the first time, adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to relatively smooth and to relatively Lipschitz-continuous functionals. Additionally, we demonstrate the optimality of the proposed universal gradient-type methods, up to a constant factor, for both classes of relatively strongly convex problems. We also show how to apply the technique of restarting the mirror descent algorithm to solve relatively Lipschitz-continuous optimization problems and prove an optimal estimate of the convergence rate of this technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
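As a hedged illustration of the restart technique mentioned above, the sketch below restarts a Euclidean mirror descent (i.e., projected subgradient descent) on a toy strongly convex nonsmooth function, halving the distance bound at each restart. The Bregman-divergence setup and the paper's adaptive step rules are not reproduced; the function, radii, and step sizes are illustrative.

```python
# Restarted (Euclidean) mirror descent on a strongly convex nonsmooth
# function f(x) = ||x - b||_1 + (mu/2)||x||^2; each restart halves the
# distance bound R, a classical scheme for strongly convex problems.
import numpy as np

def mirror_descent(grad, x0, R, M, n_steps):
    """Fixed-step subgradient method on the ball of radius R around x0."""
    x = np.array(x0, float)
    x_avg = np.zeros_like(x)
    h = R / (M * np.sqrt(n_steps))          # classical step size
    for _ in range(n_steps):
        x = x - h * grad(x)
        d = np.linalg.norm(x - x0)
        if d > R:                           # project back onto the ball
            x = x0 + (x - x0) * (R / d)
        x_avg += x / n_steps
    return x_avg

b, mu = np.array([1.0, -2.0, 0.5]), 0.1
grad = lambda x: np.sign(x - b) + mu * x    # a subgradient of f
x, R = np.zeros(3), 5.0
for restart in range(8):
    x = mirror_descent(grad, x, R, M=np.sqrt(3) + mu * R, n_steps=400)
    R /= 2.0
print(x)                                    # approaches the minimizer near b
```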