-
On using difference schemes for the transport equation with a drain in grid modeling
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1149-1164
Modern power transportation systems are complex engineering systems. They include both point facilities (power producers, consumers, transformer substations, etc.) and distributed elements such as power lines. In mathematical models, such structures are represented as graphs with different types of nodes. Studying dynamic effects in these systems requires solving a system of partial differential equations of hyperbolic type.
The work uses an approach similar to those applied earlier to modeling related problems, together with a new variant of the splitting method proposed by the authors. Unlike most known works, the splitting is carried out not by physical processes (energy transport without dissipation and, separately, dissipative processes), but into transport equations with a drain and the exchange between Riemann invariants. This splitting makes it possible to construct hybrid schemes for the Riemann invariants with a high order of approximation and minimal dissipation error. An example of constructing such a hybrid difference scheme is given for a single-phase power line. The proposed difference scheme is based on an analysis of the properties of schemes in the space of undetermined coefficients.
Examples of numerical solutions of model problems using the proposed splitting and difference scheme are given. The numerical results show that the difference scheme reproduces the emerging regions of large gradients. It is also shown that the difference schemes can detect resonances in such systems.
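As a minimal illustration of the transport-with-drain building block that the splitting produces, the sketch below advances u_t + c u_x = -sigma u with a first-order upwind scheme. The parameters and the pulse are illustrative; the paper itself constructs hybrid high-order schemes for the Riemann invariants.

```python
import numpy as np

def upwind_transport_with_drain(u0, c, sigma, dx, dt, steps):
    """First-order upwind steps for u_t + c u_x = -sigma * u, c > 0,
    with periodic boundaries. Illustrative only: the paper constructs
    hybrid high-order schemes for the Riemann invariants."""
    u = np.asarray(u0, dtype=float).copy()
    cfl = c * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1)) - dt * sigma * u
    return u

# Usage: a Gaussian pulse is advected to the right and damped by the drain.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)
u = upwind_transport_with_drain(u0, c=1.0, sigma=0.5,
                                dx=x[1] - x[0], dt=0.004, steps=100)
```

After t = 0.4 the pulse sits near x = 0.7 with a reduced peak, reflecting both the drain term and the numerical dissipation that the hybrid schemes of the paper are designed to minimize.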
-
Fuzzy modeling of the mechanism of transmitting the panic state among people with various temperament types
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1079-1092
A mass congestion of people always represents a potential danger and a threat to their lives. Moreover, every year a very large number of people worldwide die in crushes whose main cause is mass panic. The study of the phenomenon of mass panic, in view of its extreme social danger, is therefore an important scientific task. The available information about the processes of its occurrence and spread is inherently imprecise, so the theory of fuzzy sets was chosen as the tool for developing a mathematical model of the mechanism of transmitting the panic state among people with various temperament types.
When developing the fuzzy model, it was assumed that panic spreads from the epicenter of the shocking stimulus among people according to the wave principle, passing at different frequencies through different media (the types of human temperament), and is determined by the speed and intensity of the circular reaction of the mechanism of transmitting the panic state. Accordingly, the developed fuzzy model has two inputs and two outputs: the speed and the intensity of the circular reaction. In the «Fuzzification» block, the degrees of membership of the numerical values of the input parameters in fuzzy sets are calculated. The «Inference» block receives the degrees of membership for each input parameter and outputs the resulting membership function of the speed of the circular reaction together with its derivative, which serves as the membership function of the intensity of the circular reaction. In the «Defuzzification» block, a quantitative value of each output parameter is determined using the center-of-gravity method. A quality assessment of the developed fuzzy model, carried out by calculating the coefficient of determination, showed that the model belongs to the category of good-quality models.
The result obtained, in the form of quantitative estimates of the circular reaction, improves the understanding of the mental processes occurring during the transmission of the panic state among people. In addition, it makes it possible to improve existing models of chaotic human behavior and to develop new ones, intended for producing effective decisions in crisis situations aimed at the full or partial prevention of the spread of mass panic, which leads to panic flight and human casualties.
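The center-of-gravity defuzzification used in the «Defuzzification» block can be sketched as follows; the triangular membership functions and the clipping levels below are hypothetical and only illustrate the mechanics.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid_defuzzify(x, mu):
    """Center-of-gravity defuzzification: the crisp output is the
    membership-weighted mean of the universe of discourse."""
    return float(np.sum(x * mu) / np.sum(mu))

# Hypothetical aggregated output set for circular-reaction speed:
# a "slow" term clipped at 0.3 combined with a "fast" term clipped at 0.8.
x = np.linspace(0.0, 10.0, 1001)
mu = np.maximum(np.minimum(triangular(x, 0, 2, 5), 0.3),
                np.minimum(triangular(x, 4, 7, 10), 0.8))
speed = centroid_defuzzify(x, mu)
```

The crisp value lands between the two terms, pulled toward the more strongly activated "fast" set, which is exactly the behavior the center-of-gravity method is chosen for.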
-
Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210
Data hiding in digital images is a promising direction of cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The information embedding efficiency depends on the embedding imperceptibility, capacity, and robustness. These quality criteria are mutually inverse, and improving one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close to optimal, solution for a variety of problems, including ones that are difficult to formalize, by simulating natural processes such as the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. The embedding operation is the modification of a block of image pixels according to some change matrix, which we select adaptively for each block using metaheuristic optimization algorithms. We compare the performance of three metaheuristics in finding the best change matrix: the genetic algorithm, particle swarm optimization, and differential evolution. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of embedded information. At the same time, storing the change matrices for each block is not required for subsequent data extraction. This improves the user experience and reduces the chance of an attacker discovering the steganographic attachment.
Compared with the previous algorithm for embedding information into discrete cosine transform coefficients using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm; by 26.01% and 19.39% for particle swarm optimization; and by 27.30% and 28.73% for differential evolution.
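The QIM method referenced above admits a compact sketch: the secret bit selects one of two interleaved quantization lattices for a transform coefficient. This is a generic illustration of quantization index modulation, not the authors' exact implementation, and the step `delta` is an assumed parameter.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Quantization index modulation: move the coefficient onto one of two
    interleaved lattices (step delta) depending on the secret bit."""
    offset = delta / 2.0 if bit else 0.0
    return float(np.round((coeff - offset) / delta) * delta + offset)

def qim_extract(coeff, delta=8.0):
    """Recover the bit by deciding which lattice is nearer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return int(d1 < d0)
```

A larger `delta` makes extraction more robust to distortion but degrades PSNR; the metaheuristic search over change matrices described above is one way to balance these mutually inverse criteria.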
-
Assessing the validity of clustering of panel data by Monte Carlo methods (using data on the Russian regional economy as an example)
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513
The paper considers a method for studying panel data based on agglomerative hierarchical clustering: grouping objects, by the similarities and differences of their features, into a hierarchy of clusters nested in each other. We used two alternative ways of calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance computed from the data for all considered years. Three alternative methods for calculating the distances between clusters were compared: in the first, the distance between two clusters is taken to be the distance between their nearest elements; in the second, the average over pairs of elements; in the third, the distance between the most distant elements. The usefulness of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The method of assessing the statistical reliability of the cluster structure consisted in comparing the quality of clustering on the real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features, and lengths of the time series. Generation was performed from a fixed probability distribution; simulation methods imitating Gaussian white noise and random walks were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample exceeded the 95% quantile for the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the real data sample.
For these data, only the Silhouette index shows reliable clustering at the level p < 0.05. Calculations also showed that the index values for the real data are generally closer to the values for random walks than to those for white noise, but differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually: one can distinguish groups of points located close to each other, and the same groups are identified as clusters by the applied hierarchical clustering algorithm.
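The Monte Carlo significance test described above can be sketched as follows. For brevity the sketch scores a k-means partition (k = 2) rather than an agglomerative one, and the "real" panel is synthetic; the logic of comparing the observed Silhouette value with the 95% quantile over random-walk surrogates of the same shape is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def silhouette(X, labels):
    """Mean silhouette width of a labeling (small-sample, O(n^2) version)."""
    if len(np.unique(labels)) < 2:
        return -1.0
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a = d[i, same].mean() if same.any() else 0.0
        b = d[i, labels != labels[i]].mean()
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

def two_means_labels(X, iters=20):
    """Tiny k-means (k = 2) producing the partition to be scored."""
    c = X[rng.choice(len(X), 2, replace=False)].copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(0)
    return lab

# Synthetic "real" panel: two groups of series with opposite trends.
T = 30
t = np.arange(T, dtype=float)
real = np.vstack([0.5 * t + rng.normal(0, 1, (10, T)),
                  -0.5 * t + rng.normal(0, 1, (10, T))])
obs = silhouette(real, two_means_labels(real))

# Monte Carlo null: random-walk panels of the same shape.
null = [silhouette(rw, two_means_labels(rw))
        for rw in (np.cumsum(rng.normal(0, 1, real.shape), axis=1)
                   for _ in range(200))]
p95 = float(np.quantile(null, 0.95))
significant = obs > p95   # clustering is deemed reliable if True
```

The random-walk surrogates typically score a non-trivial Silhouette value even though they contain no real group structure, which is the "spurious clustering" effect noted in the abstract.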
-
Study of the dynamics of the structure of oligopolistic markets with non-market counteraction by the parties
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 219-233
The article examines the impact of non-market actions of participants in oligopolistic markets on the market structure. The following actions of one market participant aimed at increasing its market share are analyzed: 1) price manipulation; 2) blocking the investments of stronger oligopolists; 3) destruction of competitors' products and capacities. Linear dynamic games with a quadratic criterion are used to model the strategies of the oligopolists. Their use is expedient both because they describe the evolution of markets adequately and because they admit two mutually complementary approaches to determining the oligopolists' strategies: 1) representing the models in the state space and solving generalized Riccati equations; 2) applying methods of operational calculus (in the frequency domain), which offer the clarity necessary for economic analysis.
The article shows the equivalence of the two approaches to solving the problem with maximin criteria of the oligopolists, in the state space and in the frequency domain. The results of the calculations are presented for a duopoly with indicators close to those of one of the duopolies in the world microelectronics industry. The second duopolist is less cost-efficient, though more mobile; its goal is to increase its market share by implementing the non-market methods listed above.
Calculations carried out with the help of the game model made it possible to construct dependencies characterizing the relationship between the relative increases, over a 25-year period, in the production volumes of the weak and the strong duopolists under price manipulation. The constructed dependencies show that, for the accepted linear demand function, an increase in price leads to a very small increase in the output of the strong duopolist but, at the same time, to a significant increase in this indicator for the weak one.
Calculations carried out with other variants of the model show that blocking investments, as well as destroying the products of the strong duopolist, leads to a more significant increase in the output of marketable products for the weak duopolist than to a decrease in this indicator for the strong one.
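The state-space approach via Riccati equations can be illustrated, under strong simplification, by the backward Riccati recursion for a single decision maker (a finite-horizon discrete LQR); the duopoly game of the paper leads to generalized, coupled versions of this recursion. All matrices below are made up for the illustration.

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Backward Riccati recursion for the finite-horizon discrete LQR
    problem: minimize the sum of x'Qx + u'Ru subject to
    x_{k+1} = A x_k + B u_k. Returns the time-varying feedback gains
    K_k such that u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Made-up open-loop unstable 2-state system (illustrative only).
A = np.array([[1.10, 0.10],
              [0.00, 1.05]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gains = lqr_gains(A, B, Q, R, T=50)

# The optimal time-varying feedback stabilizes the unstable dynamics.
x = np.array([[1.0], [1.0]])
x0_norm = np.linalg.norm(x)
for K in gains:
    x = (A - B @ K) @ x
```

In the game setting each duopolist solves a recursion of this type against the worst-case behavior of the other, which is what makes the Riccati equations "generalized".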
-
Raising convergence order of grid-characteristic schemes for 2D linear elasticity problems using operator splitting
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 899-910
The grid-characteristic method is successfully used for solving hyperbolic systems of partial differential equations (for example, transport, acoustic, and elastic equations). It makes it possible to construct correct algorithms at contact boundaries and at the boundaries of the integration domain, to take the physics of the problem into account to a certain extent (propagation of discontinuities along characteristic curves), and it possesses the monotonicity property, which is important for the problems considered. For two-dimensional and three-dimensional problems, the method employs a coordinate splitting technique, which enables the original equations to be solved through several consecutive one-dimensional problems. It is common to use one-dimensional schemes of up to third order with simple splitting techniques that do not allow the convergence order in time to exceed two. Significant progress has been made in operator splitting theory: the existence of higher-order splitting schemes has been proved. Their peculiarity is the need to perform a step backward in time, which causes difficulties, for example, for parabolic problems.
In this work, coordinate splittings of the third and fourth order were applied to the two-dimensional hyperbolic problem of linear elasticity, which made it possible to increase the final convergence order of the computational algorithm. The paper empirically estimates the convergence in the L1 and L∞ norms using analytical solutions of the system with a sufficient degree of smoothness. To obtain objective results, we considered longitudinal and transverse plane waves propagating both along the diagonal of the computational cell and not along it. Numerical experiments demonstrated the improved accuracy and convergence order of the constructed schemes. These improvements come at the cost of a three- or fourfold increase in computation time (for the third- and fourth-order schemes, respectively) and require no additional memory. The proposed improvement of the computational algorithm preserves the simplicity of its parallel implementation based on spatial decomposition of the computational grid.
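How a higher-order splitting composes lower-order steps, including the characteristic step backward in time, can be shown on a toy problem: the standard Yoshida triple-jump turns a second-order symmetric (Strang/leapfrog) step for a harmonic oscillator into a fourth-order one. This only illustrates the composition idea, not the paper's 2D elasticity schemes.

```python
import numpy as np

def strang_step(q, p, dt, omega2=1.0):
    """Second-order symmetric (kick-drift-kick leapfrog) step for the
    harmonic oscillator q'' = -omega2 * q, split into kick and drift."""
    p -= 0.5 * dt * omega2 * q
    q += dt * p
    p -= 0.5 * dt * omega2 * q
    return q, p

# Yoshida triple-jump coefficients: composing three second-order steps,
# the middle one with a NEGATIVE time increment, yields fourth order.
g1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
g2 = -2.0 ** (1.0 / 3.0) / (2.0 - 2.0 ** (1.0 / 3.0))

def yoshida4_step(q, p, dt):
    for g in (g1, g2, g1):
        q, p = strang_step(q, p, g * dt)
    return q, p

# Empirical convergence order against the exact solution q(t) = cos(t).
errs = []
for n in (50, 100):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = yoshida4_step(q, p, 1.0 / n)
    errs.append(abs(q - np.cos(1.0)))
order = float(np.log2(errs[0] / errs[1]))
```

Halving the time step reduces the error by roughly a factor of 16, confirming fourth order; the negative substep `g2 * dt` is the backward-in-time step whose difficulties for parabolic problems are noted above.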
-
Solution of the optimization problem of wood fuel facility location by the thermal energy cost criterion
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 651-659
The paper presents a mathematical model for the optimal location of enterprises producing fuel from renewable wood waste for a regional distributed heat supply system. The optimization minimizes the total cost of the end product, the thermal energy obtained from wood fuel. The solution method is based on a genetic algorithm. The paper also shows practical results of the model using the Udmurt Republic as an example.
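A genetic algorithm for a facility-location cost minimization of this kind can be sketched as below. The instance data (site and consumer coordinates, demands, fixed costs) are made up, and the cost model is a generic uncapacitated one rather than the paper's thermal-energy cost model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up instance: candidate fuel-production sites and heat consumers.
sites = rng.uniform(0, 100, (12, 2))
consumers = rng.uniform(0, 100, (40, 2))
demand = rng.uniform(1, 5, 40)
fixed_cost = rng.uniform(50, 150, 12)
dist = np.linalg.norm(sites[:, None, :] - consumers[None, :, :], axis=-1)

def cost(mask):
    """Total cost of a site subset: fixed costs of the open sites plus
    demand-weighted distance of each consumer to its nearest open site."""
    if not mask.any():
        return np.inf
    return fixed_cost[mask].sum() + (demand * dist[mask].min(axis=0)).sum()

def ga(pop_size=40, gens=100, pmut=0.1):
    """Elitist genetic algorithm over bit masks of open sites."""
    n = len(sites)
    pop = rng.random((pop_size, n)) < 0.5
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(fit)[: pop_size // 2]]          # selection
        pa = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        pb = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        cut = rng.integers(1, n, (len(pa), 1))
        kids = np.where(np.arange(n)[None, :] < cut, pa, pb)   # crossover
        kids = kids ^ (rng.random(kids.shape) < pmut)          # mutation
        pop = np.vstack([elite, kids])
    fit = np.array([cost(ind) for ind in pop])
    return pop[int(np.argmin(fit))], float(fit.min())

best_mask, best_cost = ga()
```

The bit-mask encoding makes the search space small enough for the GA to find subsets that beat trivial solutions such as opening every candidate site.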
-
Interval analysis of vegetation cover dynamics
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1191-1205
Developing a previously obtained result on modeling the dynamics of vegetation cover driven by variations in the temperature background, we present a new scheme for the interval analysis of the dynamics of floristic images of formations for the case when the response-rate parameter of the dynamics model of each counted plant species is given by an interval of its possible values. A detailed specification of the functional parameters of biodiversity macromodels, desirable in fundamental research and accounting for the essential causes of the observed evolutionary processes, may prove problematic. Using the more reliable interval estimates of the variability of the functional parameters bypasses the problem of uncertainty in the primary assessment of the evolution of the phyto-resource potential of the developed controlled territories. The solutions obtained preserve not only a qualitative picture of the dynamics of species diversity but also give a rigorous, within the framework of the initial assumptions, quantitative assessment of the degree of presence of each plant species. The practical significance of two-sided estimation schemes based on equations for the upper and lower boundaries of the scatter of solution trajectories depends on how proportionally the scatter intervals of the initial parameters correspond to the scatter intervals of the solutions; for dynamic systems, the desired proportionality is not always ensured. The examples given demonstrate acceptable accuracy of the interval estimation of evolutionary processes. It is important to note that the estimating equations generate vanishing scatter intervals of the solutions under quasi-constant temperature perturbations of the system.
In other words, the proposed interval estimation scheme does not coarsen the trajectories of stationary temperature states of the vegetation cover. The rigor of the interval estimate of the species composition of the vegetation cover of formations can become a determining factor when choosing a method for analyzing the dynamics of species diversity and the plant potential of territorial systems of resource-ecological monitoring. The possibilities of the proposed approach are illustrated by geoinformation images from a computational analysis of the dynamics of the vegetation cover of the Yamal Peninsula and by graphs of a retrospective analysis of the floristic variability of the formations of the landscape-lithological group “Upper”, based on the summer temperature data of the Salekhard weather station from 2010 back to 1935. The developed indicators of floristic variability and the graphs presented characterize the dynamics of species diversity both on average and individually, in the form of intervals of possible states for each plant species.
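The two-sided estimation idea, equations for the upper and lower boundaries of the scatter of trajectories, can be sketched on a generic scalar relaxation model with an uncertain response rate; the model below is illustrative, not the paper's vegetation model. Under quasi-constant forcing the scatter interval collapses, matching the property stated above.

```python
import numpy as np

def interval_relaxation(u, r_lo, r_hi, x0, dt):
    """Two-sided Euler bounds for x' = r * (u(t) - x) with an uncertain
    response rate r in [r_lo, r_hi]: at every step the upper (lower)
    trajectory takes the r that maximizes (minimizes) the right-hand side."""
    hi = lo = x0
    his, los = [hi], [lo]
    for ut in u:
        hi += dt * max(r_lo * (ut - hi), r_hi * (ut - hi))
        lo += dt * min(r_lo * (ut - lo), r_hi * (ut - lo))
        his.append(hi)
        los.append(lo)
    return np.array(los), np.array(his)

# Quasi-constant forcing: both bounds relax to the same equilibrium,
# so the scatter interval of the solutions vanishes over time.
u_const = np.full(2000, 5.0)
lo, hi = interval_relaxation(u_const, r_lo=0.2, r_hi=0.6, x0=0.0, dt=0.05)
width_end = float(hi[-1] - lo[-1])
```

During the transient the interval between `lo` and `hi` is wide, reflecting the parameter uncertainty, but the stationary state is recovered exactly, i.e. the trajectories of stationary states are not coarsened.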
-
Additive regularization of topic models with fast text vectorization
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528
The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words, also called a “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem reduces to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed and has an infinite set of solutions. To regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix is impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is needed for many text analysis tasks, such as information retrieval, clustering, classification, and summarization. In practice, the topical embedding is calculated for a document “on the fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, in one pass over the words of the document. For this, an additional constraint is introduced into the model in the form of an equation that calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of the sparsity, difference, logLift, and coherence measures of topic quality.
The open source libraries BigARTM and TopicNet were used for the experiments.
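The one-pass computation of a topical embedding from the topic-word matrix can be sketched as follows. The exact linear-time rule that expresses the first matrix through the second in the paper may differ; this is only a plausible instance of the idea, with a uniform topic prior assumed and random toy data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy topic-word matrix phi: phi[w, t] = p(w | t), columns sum to one.
n_words, n_topics = 100, 5
phi = rng.random((n_words, n_topics))
phi /= phi.sum(axis=0, keepdims=True)

def one_pass_embedding(word_counts, phi):
    """Topical embedding of a document in a single pass over its words:
    theta_t is proportional to the sum over words of n_w * p(t | w), where
    p(t | w) normalizes the row phi[w] over topics (uniform topic prior
    assumed). A sketch of the idea, not the paper's exact rule."""
    p_t_given_w = phi / phi.sum(axis=1, keepdims=True)
    theta = word_counts @ p_t_given_w
    return theta / theta.sum()

doc = rng.integers(0, 4, n_words)   # bag-of-words counts of one document
theta = one_pass_embedding(doc, phi)
```

Unlike the usual EM-style inference, which iterates over the document's words dozens of times, this rule touches each word once, which is what makes the vectorization fast and removes the need to store the document-topic matrix.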
-
Personalization of mathematical models in cardiology: obstacles and perspectives
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930
Most biomechanical tasks of interest to clinicians can be solved only with personalized mathematical models. Such models make it possible to formalize and relate the key pathophysiological processes, to evaluate, on the basis of clinically available data, non-measurable parameters important for diagnosing diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require validation of the models on clinical cases and the speed and automation of the entire computational pipeline, from processing the input data to obtaining the result. Limitations on the simulation time, determined by the time available for making a medical decision (on the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models, or of machine learning tools.
Personalization of models requires patient-specific parameters, a personalized geometry of the computational domain, and generation of a computational mesh. Model parameters are estimated by direct measurements, by solving inverse problems, or by machine learning methods. The personalization requirement imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient's characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the available clinical data. Building a personalized computational domain through segmentation of medical images and generating the computational mesh, as a rule, take much time and effort because of manual or semi-automatic operations. The development of automated methods for setting personalized boundary conditions and for segmenting medical images, with subsequent construction of a computational mesh, is the key to the widespread use of mathematical modeling in clinical practice.
The aim of this work is to review our solutions for the personalization of mathematical models within three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.
Keywords: computational biomechanics, personalized model.
Indexed in Scopus
A full-text version of the journal is also available on the website of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index