- A study on the dynamics of pest population with biocontrol using predator, parasite in presence of awareness
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 713-729
The coconut tree is often called the “tree of life” due to its immense benefits to the human community, ranging from edible products to building materials. The rugose spiraling whitefly (RSW) poses a major threat to farmers raising these coconut trees. A mathematical model is developed to study the dynamics of the pest population in the presence of a predator and a parasite. The biologically feasible equilibrium points are derived, and both local and global asymptotic stability are analyzed at these points. Furthermore, in order to educate farmers on pest control, the impact of awareness programs is added to the model. The existence conditions and stability properties of all feasible steady states of this model are analyzed. The results reveal that the predator and the parasite play a major role in reducing the immature pest population. They also show that pest control activities driven by awareness programs further reduce the mature pest population, which decreases the egg-laying rate and, in turn, the immature population.
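A minimal sketch of a stage-structured model of this kind (not the authors' exact system): immature and mature pest stages, a predator attacking immatures, and an awareness level that removes mature pests. All parameter values and functional forms are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates (assumed, not the paper's calibrated values)
b, m = 5.0, 0.4              # egg-laying rate of matures, maturation rate
d1, d2 = 0.1, 0.2            # natural death rates of immatures / matures
a1, c, dp = 0.6, 0.2, 0.1    # predation rate, conversion efficiency, predator death
q, r, alpha = 0.3, 0.5, 0.8  # awareness growth, awareness decay, control efficacy

def rhs(t, y):
    I, M, P, A = y                        # immature, mature, predator, awareness
    dI = b * M - (m + d1) * I - a1 * I * P
    dM = m * I - d2 * M - alpha * A * M   # awareness-driven removal of matures
    dP = c * a1 * I * P - dp * P
    dA = q * M - r * A                    # awareness rises with pest visibility
    return [dI, dM, dP, dA]

sol = solve_ivp(rhs, [0, 200], [10, 5, 1, 0])
print("final state (I, M, P, A):", np.round(sol.y[:, -1], 2))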
- Detection of the influence of the upper working roll's vibration on sheet thickness in cold rolling with the help of DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116
Current trends in technical diagnostics involve FEM computer simulation, which allows one, to some extent, to replace real experiments, reduce the cost of investigations, and minimize risks. Already at the research and development stage, computer simulation makes it possible to diagnose equipment and detect the permissible fluctuations of its operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including dimensional accuracy, so the design of techniques for technical diagnosis and diagnostic modeling is very important. Computer simulation of cold rolling of a strip was carried out in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data from the continuous 1700 rolling mill. The vibration of the working roll in the stand arose from the gap between the roll chock and the stand guide and led to periodic fluctuations of the strip thickness. The simulation in the DEFORM software produced a strip with longitudinal and transverse thickness variation. The visualization of the strip's geometrical parameters, according to the simulation data, corresponded to the type of surface inhomogeneity of strips rolled in practice. Further analysis of the thickness variation was performed in order to identify, on the basis of the simulation, the sources of periodic components of the strip thickness caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses about thickness formation may be tested without real experiments, reducing costs of various kinds. Moreover, in a simulation the initial strip thickness has no fluctuations, as opposed to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the strip thickness variation after rolling in one stand coincides with the frequency of the working roll's vibration. The results of the computer simulation correlate with the research results for the 1700 mill. Thus, the applicability of computer simulation to finding the causes of strip thickness variation on an industrial rolling mill is demonstrated.
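The spectral check described above can be illustrated with a short script: a synthetic thickness trace containing a periodic component at the roll vibration frequency is analyzed with an FFT, and the dominant frequency is compared with the roll frequency. All signal parameters are assumed for illustration; this is not the paper's data.

```python
import numpy as np

fs = 100.0                                 # samples per second (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
f_roll = 4.2                               # roll vibration frequency, Hz (assumed)
rng = np.random.default_rng(0)
# mean thickness + periodic component from roll vibration + measurement noise
h = 2.0 + 0.01 * np.sin(2 * np.pi * f_roll * t) + 0.002 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(h - h.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f"dominant thickness-variation frequency: {f_peak:.2f} Hz (roll: {f_roll} Hz)")
```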
- Physical analysis and mathematical modeling of the parameters of explosion region produced in a rarefied ionosphere
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 817-833
The paper presents a physical and numerical analysis of the dynamics and radiation of the explosion products formed during the Russian-American experiment in the ionosphere using an explosive generator based on hexogen (RDX) and trinitrotoluene (TNT). The main attention is paid to the radiation of the perturbed region and the dynamics of the products of explosion (PE). The detailed chemical composition of the explosion products is analyzed, the initial concentrations of the most important molecules capable of emitting in the infrared range of the spectrum are determined, and their radiative constants are given. The initial temperature of the explosion products and the adiabatic exponent are determined. The nature of the interpenetration of atoms and molecules of the highly rarefied ionosphere into the spherically expanding cloud of products is analyzed. An approximate mathematical model of the dynamics of the explosion products under conditions of their mixing with rarefied ionospheric air has been developed, and the main thermodynamic characteristics of the system have been calculated. It is shown that over times of 0.3–3 s there is a significant increase in the temperature of the scattering mixture as a result of its deceleration. In the problem under consideration, the explosion products and the background gas are separated by a contact boundary. To solve this two-region gas-dynamic problem, a numerical algorithm based on the Lagrangian approach was developed; special conditions had to be fulfilled at the contact boundary during its movement through the stationary gas. Certain difficulties arise in describing the parameters of the explosion products near the contact boundary, associated with the large difference in the size of the mass cells of the explosion products and the background due to a density difference of 13 orders of magnitude. To reduce the calculation time, an irregular computational grid was used in the region of the explosion products. Calculations were performed with different adiabatic exponents. The most important result is the temperature, which is in good agreement with the results obtained by the method that approximately takes interpenetration into account. The time behavior of the IR emission coefficients of the active molecules over a wide spectral range is obtained; this behavior is qualitatively consistent with experiments on the IR glow of the expanding explosion products.
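A minimal sketch of the kind of irregular grid mentioned above (an assumed construction, not the paper's actual algorithm): a geometrically graded distribution of Lagrangian cell masses that concentrates small cells near the contact boundary, easing the mismatch with the far lighter background-gas cells.

```python
import numpy as np

m_total = 1.0      # total mass of explosion products (arbitrary units, assumed)
n_cells = 200      # Lagrangian cells in the products region (assumed)
ratio = 0.96       # geometric grading factor; <1 refines toward the boundary

# Geometric progression of cell masses: the smallest cells sit next to the
# contact boundary, where the products meet the rarefied background gas.
weights = ratio ** np.arange(n_cells)[::-1]
dm = m_total * weights / weights.sum()
print(f"largest/smallest cell mass: {dm.max() / dm.min():.0f}")
```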
- Numerical-analytical modeling of gravitational lensing of electromagnetic waves in randomly inhomogeneous space plasma
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 433-443
A numerical-analytical tool for modeling the characteristics of electromagnetic wave propagation in randomly inhomogeneous space plasma, taking gravitational effects into account, is developed for interpreting the measurement data of new-generation astrophysical precision instruments. The problem of wave propagation in curved (Riemannian) space is solved in Euclidean space by introducing an effective refractive index of vacuum. The gravitational potential can be calculated for various models of the mass distribution of astrophysical objects by solving Poisson's equation; as a result, the effective refractive index of vacuum can be evaluated. An approximate model of the effective refractive index is proposed under the assumption that individual objects contribute additively to the total gravitational field. The characteristics of electromagnetic waves in the gravitational field of astrophysical objects are computed in the geometrical-optics approximation, valid when the spatial scales of the refractive index are much larger than the wavelength. Ray differential equations in Euler form are the basis of the numerical-analytical tool for modeling the trajectory characteristics of the waves. Random inhomogeneities of the space plasma are introduced through a model of the spatial correlation function of the refractive index. Refractive scattering of the waves is also computed in the geometrical-optics approximation. Integral equations for the statistical moments of the lateral ray deviations in the observer's picture plane are obtained. Using analytical transformations, the integrals for the moments are reduced to a system of first-order ordinary differential equations for the joint numerical computation of the mean and mean-square ray deviations. Results of numerical-analytical modeling of the ray pattern of electromagnetic wave propagation in interstellar space, taking into account the gravitational fields of space objects and refractive scattering on inhomogeneities of the refractive index of the ambient plasma, are presented. Based on the modeling results, quantitative estimates of the conditions for stochastic blurring of the gravitational lensing effect in various frequency ranges are performed. It is shown that operating frequencies in the meter wavelength range represent a conditional low-frequency limit for observing the gravitational lensing effect in stochastic space plasma. The proposed numerical-analytical tool can be used to analyze the structure of the electromagnetic radiation of quasars propagating through groups of galaxies.
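A minimal ray-tracing sketch in the geometrical-optics setting described above, using the standard weak-field effective vacuum refractive index n = 1 + 2GM/(c²r) of a single point mass. The mass, impact parameter, and integration span are illustrative assumptions, and plasma scattering is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8
M = 2.0e30                       # lensing mass, kg (assumed, about one solar mass)

def n(pos):                      # effective vacuum refractive index of a point mass
    return 1.0 + 2.0 * G * M / (c ** 2 * np.linalg.norm(pos))

def grad_ln_n(pos):
    r = np.linalg.norm(pos)
    return (-2.0 * G * M / (c ** 2 * r ** 3) * pos) / n(pos)

def rhs(s, y):
    # Euler ray equations: dr/ds = T, dT/ds = component of grad ln n normal to T
    pos, T = y[:2], y[2:]
    g = grad_ln_n(pos)
    return np.concatenate([T, g - T * np.dot(g, T)])

b = 7.0e8                        # impact parameter, m (assumed, ~solar radius)
y0 = np.array([-1.0e12, b, 1.0, 0.0])
sol = solve_ivp(rhs, [0.0, 2.0e12], y0, rtol=1e-10, atol=[1.0, 1.0, 1e-12, 1e-12])
print(f"deflection ~ {abs(sol.y[3, -1]):.2e} rad (4GM/c^2b = {4*G*M/(c**2*b):.2e})")
```

The final transverse direction reproduces the classical weak-field deflection angle 4GM/(c²b), which serves as a sanity check on the ray equations.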
- Semantic structuring of text documents based on patterns of natural language entities
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197
The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space of document descriptions and to search for semantically related words by topic. The dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.
It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, for the analysis of which a common dictionary must be used. The membership of words in the common dictionary is initially unknown. Objects of the two classes are treated as oppositions to each other. Quantitative parameters of this oppositionality are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.
To calculate the stability, the feature values are divided into non-intersecting intervals whose optimal boundaries are determined by a special criterion (a minimal sketch of this idea is given after the abstract). Maximum stability is achieved when the boundaries of each interval enclose values of only one of the two classes.
The composition of features in the sets (patterns of words) is formed from a sequence ordered by stability values. The formation of patterns, and of latent features based on them, follows the rules of hierarchical agglomerative grouping.
The set of latent features is used for cluster analysis of documents with metric grouping algorithms. The analysis applies a coefficient of content authenticity based on data about the membership of documents in classes. The coefficient is a numerical characteristic of the dominance of class representatives in groups.
To divide documents into topics, it is proposed to merge groups with respect to their centers. As patterns for each topic, a sequence of words from the common dictionary ordered by frequency of occurrence is considered.
The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the common dictionary on four topics were formed.
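As promised above, a minimal sketch of the interval-stability idea, under an assumed simplification: equal-frequency bins stand in for the paper's optimally chosen boundaries. A feature is stable when each interval of its values is dominated by a single class.

```python
import numpy as np

def stability(values, labels, n_intervals=5):
    """Share of objects lying in the majority class of their interval.

    Equal-frequency bins replace the paper's optimal-boundary criterion
    (an assumed simplification). A value of 1.0 means every interval
    contains objects of only one of the two classes.
    """
    order = np.argsort(values)
    y = np.asarray(labels)[order]
    hits = sum(int(max(np.sum(y[b] == 0), np.sum(y[b] == 1)))
               for b in np.array_split(np.arange(y.size), n_intervals))
    return hits / y.size

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
y = np.array([0] * 100 + [1] * 100)
print(f"stability of a well-separating feature: {stability(x, y):.3f}")
```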
- Development of a hybrid simulation model of the assembly shop
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1359-1379
In this work, a hybrid simulation model of an assembly shop that allows one to select the parameters of a production system is developed in the AnyLogic environment. To build the hybrid model, discrete-event and agent-based modeling are combined into a single model with integrating interaction. Within the framework of this work, a mechanism for the development of a production system consisting of several participant agents is described. Each agent corresponds to a class in which a set of agent parameters is specified. The simulation model takes into account three main groups of sequentially performed operations and defines the logic for handling rejected sets. The product assembly process takes place in a multi-phase open queueing system with waiting; there are also signs of a closed system, namely the flows of rejects sent back for reprocessing. Requests are served under a mandatory FIFO queue discipline. For the functional assessment of the production system, the simulation model includes several functions that report the number of finished products, the average product assembly time, the number and percentage of rejects, and the simulation results of the study, as well as variables that store the computed utilization factors. A series of simulation experiments was carried out to study the behavior of the system's agents in terms of the overall performance indicators of the production system. The experiments showed that the average product assembly time is strongly influenced by such parameters as the average speed of assembling a set of products and the average time to complete the operations. Within the given constraint interval, a set of parameters was found that achieves the highest possible throughput of the assembly line. This experiment implements the basic principle of agent-based modeling: decentralized agents each make an individual contribution and affect the operation of the entire simulated system. As a result of the experiments, owing to the search over a large set of parameters, high performance indicators of the assembly shop were achieved: productivity was increased by 60%, and the average product assembly time was reduced by 38%.
- Adaptive first-order methods for relatively strongly convex optimization problems
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 445-472
The article is devoted to adaptive first-order methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition with a distance in a more general sense (more precisely, with a Bregman divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows the proposed techniques to be applied to many applied problems, such as the intersection of ellipsoids problem (IEP), the support vector machine (SVM) for a binary classification problem, etc. If the objective functional is convex, the condition of relative strong convexity can be satisfied by regularizing the problem. In this work, we propose for the first time adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to relatively smooth and to relatively Lipschitz-continuous functionals. Additionally, we demonstrate the optimality of the proposed universal gradient-type methods, up to a constant factor, for both classes of relatively strongly convex problems. We also show how the technique of restarting the mirror descent algorithm can be applied to relatively Lipschitz-continuous optimization problems (a minimal sketch of the restart idea is given below) and prove the optimal estimate of the convergence rate of this technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
- Reduced mathematical model of blood coagulation taking into account thrombin activity switching as a basis for estimation of hemodynamic effects and its implementation in the FlowVision package
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1039-1067
The possibility of numerical 3D simulation of thrombus formation is considered.
The detailed mathematical models of thrombus and clot formation developed to date include a great number of equations. Implemented in a CFD code, such detailed models require substantial computer resources for simulating thrombus growth in a blood flow. A reasonable alternative is to use reduced mathematical models. Two models based on a reduced mathematical model of thrombin generation are described in this paper.
The first model describes the growth of a thrombus in a large vessel (artery). Arterial flows are essentially unsteady and are characterized by pulse waves. The blood velocity there is high compared to that in the venous tree. The reduced model of thrombin generation and thrombus growth in an artery is relatively simple: the processes accompanying thrombin generation in arteries are well described by the zero-order approximation (a minimal sketch of such a term is given after the abstract).
A venous flow is characterized by lower velocities, lower gradients, and lower shear stresses. In order to simulate thrombin generation in veins, a more complex system of equations has to be solved; the model must allow for all the nonlinear terms in the right-hand sides of the equations.
The simulation is carried out in the industrial software FlowVision.
The numerical investigations performed have shown the suitability of the reduced models for simulating thrombin generation and thrombus growth. The calculations demonstrate the formation of a recirculation zone behind the thrombus, where the concentration of thrombin and the mass fraction of activated platelets are at their maximum. The formation of such a zone causes slow growth of the thrombus downstream. On the upwind side of the thrombus, the concentration of activated platelets is low, and the upstream thrombus growth is negligible.
When the variation of the blood flow during a heart cycle is taken into account, the thrombus grows substantially more slowly than under the assumption of constant conditions (averaged over a heart cycle): thrombin and activated platelets produced during diastole are quickly carried away by the blood flow during systole. Accounting for the non-Newtonian rheology of blood also noticeably affects the results.
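A minimal sketch of a zero-order thrombin source of the kind the arterial model above relies on: constant-rate generation balanced by first-order inactivation and flow washout. All rates are assumptions for illustration, not the paper's calibrated values.

```python
from scipy.integrate import solve_ivp

k_gen = 5.0     # zero-order thrombin production at the injury site, nM/s (assumed)
k_inact = 0.05  # inactivation rate, 1/s (assumed)
k_wash = 0.20   # washout by the arterial flow, 1/s (assumed)

# dc/dt = k_gen - (k_inact + k_wash) * c  -> relaxes to a steady level
rhs = lambda t, c: k_gen - (k_inact + k_wash) * c
sol = solve_ivp(rhs, [0.0, 60.0], [0.0])
print(f"thrombin after 60 s: {sol.y[0, -1]:.1f} nM "
      f"(steady level {k_gen / (k_inact + k_wash):.1f} nM)")
```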
- Additive regularization of topic models with fast text vectorization
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528
The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words, also called a “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed, having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix is impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding of a document is calculated “on the fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, in one pass over the document's words. For this, an additional constraint is introduced into the model in the form of an equation that calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of the sparseness, difference, logLift, and coherence measures of topic quality. The open-source libraries BigARTM and TopicNet were used for the experiments.
- Personalization of mathematical models in cardiology: obstacles and perspectives
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930
Most biomechanical problems of interest to clinicians can be solved only with personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, on the basis of clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional requirements: clinicians demand model validation on clinical cases and the speed and automation of the entire computational chain, from processing the input data to obtaining the result. Limits on the simulation time, determined by the time available for making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models or machine learning tools.
Personalization of models requires patient-specific parameters, a personalized geometry of the computational domain, and generation of a computational mesh. Model parameters are estimated by direct measurements, by methods of solving inverse problems, or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient's characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort because of manual or semi-automatic operations. The development of automated methods for setting personalized boundary conditions and for segmentation of medical images with subsequent construction of a computational mesh is the key to the widespread use of mathematical modeling in clinical practice.
The aim of this work is to review our solutions for the personalization of mathematical models within the framework of three problems of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of the global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.
Keywords: computational biomechanics, personalized model.