-
Models of phytoplankton distribution over chlorophyll in various habitat conditions. Estimation of aquatic ecosystem bioproductivity
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1177-1190
A model of the dynamics of phytoplankton abundance is proposed that depends on changes in the chlorophyll content of phytoplankton under varying environmental conditions. The model accounts for the dependence of biomass growth on environmental conditions as well as on photosynthetic chlorophyll activity. The light and dark stages of photosynthesis are distinguished. The processes of chlorophyll consumption during photosynthesis in the light, and of chlorophyll mass growing together with phytoplankton biomass, are described. The model takes into account environmental conditions such as mineral nutrients, illumination and water temperature. The model is spatially distributed: the spatial variable corresponds to the mass fraction of chlorophyll in phytoplankton, so the possible spread of chlorophyll content in phytoplankton is taken into consideration. The model calculates the density distribution of phytoplankton over the fraction of chlorophyll it contains, as well as the rate of production of new phytoplankton biomass. In parallel, point analogs of the distributed model are considered. The diurnal and seasonal (within a year) dynamics of the phytoplankton distribution over the chlorophyll fraction are demonstrated, and the characteristics of the primary production rate under daily or seasonally changing environmental conditions are given. The modeled dynamics of phytoplankton biomass growth show that in the light this growth is about twice as large as in the dark, which indicates that illumination significantly affects the production rate. The seasonal dynamics demonstrate accelerated biomass growth in spring and autumn.
The spring maximum is associated with warming under conditions of nutrients accumulated over winter, and the autumn, slightly smaller maximum with the nutrients accumulated during the summer decline of phytoplankton biomass; the summer decrease in biomass is again due to a nutrient deficiency. Thus, in the presence of light, mineral nutrition plays the main role in phytoplankton dynamics.
In general, the model demonstrates phytoplankton biomass dynamics qualitatively similar to classical concepts under daily and seasonal changes in the environment, and it appears suitable for assessing the bioproductivity of aquatic ecosystems. It can be supplemented with additional equations and terms for a more detailed description of the complex processes of photosynthesis. Introducing variables of the physical habitat space and coupling the model with satellite information on the water surface leads to model estimates of the bioproductivity of vast marine areas.
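A point analog of the model described above can be sketched as follows. The functional forms and all parameter values here are hypothetical, chosen only to illustrate light/dark growth and Monod-type nutrient limitation; they are not taken from the paper:

```python
import numpy as np

# Point-analog sketch: biomass b grows under Monod nutrient limitation,
# roughly twice as fast in the light as in the dark; nutrients n are
# drawn down by production. All values are hypothetical.

def growth_rate(light, nutrients, k_n=0.5, mu_max=1.0):
    light_factor = 1.0 if light else 0.5   # light vs. dark stage
    return mu_max * light_factor * nutrients / (k_n + nutrients)

def simulate(days=2, dt=0.01, b0=1.0, n0=2.0, mortality=0.3):
    b, n = b0, n0
    biomass = []
    for i in range(int(round(days / dt))):
        t = i * dt                           # time in days
        light = (t % 1.0) < 0.5              # 12 h light, 12 h dark
        mu = growth_rate(light, n)
        b += (mu - mortality) * b * dt       # net biomass change
        n = max(n - 0.1 * mu * b * dt, 0.0)  # nutrient drawdown
        biomass.append(b)
    return np.array(biomass)

traj = simulate()
```

In this toy run the net growth in the light is about twice that in the dark, mirroring the roughly twofold difference reported above; the distributed model additionally resolves the spread of the chlorophyll fraction, which this point analog omits.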
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems: the Stochastic Approximation approach (online) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to choose the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is a solution of the original problem with the desired precision. This is one of the main questions in modern machine learning and optimization. Over the last decade, significant advances have been made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on identical sample sizes in both approaches (online and offline) are generalized to arbitrary norms. Moreover, it is shown that the strong convexity condition can be weakened: the results remain valid for functions satisfying the quadratic growth condition. For the case when this condition does not hold, regularization of the original problem in an arbitrary norm is proposed. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size is obtained under a $\gamma$-growth condition on the objective function; for $\gamma = 1$, this is the sharp minimum condition from convex problems. It is shown that in the case of a sharp minimum the sample size is almost independent of the desired accuracy of the solution of the original problem.
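For reference, the offline (Sample Average Approximation) replacement discussed above can be written as follows; the notation is standard, not taken verbatim from the paper:

```latex
\min_{x \in B_p(R)} F(x) := \mathbb{E}_{\xi}\,\bigl[f(x,\xi)\bigr]
\;\;\longrightarrow\;\;
\min_{x \in B_p(R)} \widehat{F}_N(x) := \frac{1}{N}\sum_{k=1}^{N} f(x,\xi_k),
\qquad B_p(R) = \{x : \|x\|_p \le R\}.
```

The question studied in the paper is how large the sample size $N$ must be so that an accurate solution of the empirical problem solves the original problem with the desired precision, and how this $N$ depends on $p$.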
-
Forecasting methods and models of disease spread
Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882
The number of papers addressing the forecasting of infectious disease morbidity is growing rapidly due to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting, pointing out their limitations and possibilities for practical application. The paper presents the conventional time series analysis methods (regression and autoregressive models); machine learning-based approaches (Bayesian networks and artificial neural networks); case-based reasoning; and filtration-based techniques. The best-known mathematical models of infectious diseases are mentioned: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based).
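One of the classical equation-based models mentioned in the survey, the deterministic SIR model, can be sketched as follows (illustrative parameters, simple forward-Euler integration):

```python
import numpy as np

# Minimal deterministic SIR sketch: susceptible (s), infected (i),
# recovered (r) fractions; beta is the transmission rate, gamma the
# recovery rate (parameters are illustrative).

def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(int(round(days / dt))):
        new_inf = beta * s * i * dt   # mass-action incidence
        new_rec = gamma * i * dt      # recoveries
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return np.array(history)

traj = sir()  # columns: susceptible, infected, recovered fractions
```

With these parameters the basic reproduction number is $R_0 = \beta/\gamma = 3$, so the epidemic curve shows a single pronounced peak of infections; stochastic, network and agent-based variants refine this picture, as the survey discusses.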
-
Methods and problems in the kinetic approach for simulating biological structures
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 851-866
The biological structure is considered as an open nonequilibrium system whose properties can be described on the basis of kinetic equations. New problems with nonequilibrium boundary conditions are introduced. The nonequilibrium distribution tends gradually to an equilibrium state. The region of spatial inhomogeneity has a scale that depends on the rate of mass transfer in the open system and on the characteristic time of metabolism. In the proposed approximation, the internal energy of molecular motion is much less than the energy of translational motion; in other terms, the kinetic energy of the average blood velocity is substantially higher than the energy of chaotic motion of the same particles. We state that the relaxation problem models a living system. The flow of entropy into the system decreases downstream, which corresponds to Schrödinger's general idea that a living system "feeds on" negentropy. We introduce a quantity that characterizes the complexity of the biosystem: the difference between the nonequilibrium kinetic entropy and the equilibrium entropy at each spatial point, integrated over the entire spatial region. Solutions to the problems of spatial relaxation allow us to estimate the size of biosystems as regions of nonequilibrium. The results are compared with empirical data; in particular, for mammals we conclude that the larger the animal, the smaller the specific energy of metabolism. This feature is reproduced in our model, since the extent of the nonequilibrium region is larger in a system where the reaction rate is lower or, in terms of the kinetic approach, where the relaxation time of the interaction between molecules is longer. The approach is also used to estimate a part of a living system, namely a green leaf. The problems of aging as degradation of an open nonequilibrium system are considered.
The analogy concerns structure: in a closed system, equilibrium is attained for the same molecules, while in an open system a transition occurs to the equilibrium of different particles, which change due to metabolism. Two essentially different time scales are distinguished, whose ratio is approximately constant across various animal species. Under the assumption that these two time scales exist, the kinetic equation splits into two equations, describing the metabolic (stationary) and "degradative" (nonstationary) parts of the process.
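The complexity measure described above can be written schematically as follows (the notation is ours, introduced only for illustration):

```latex
K \;=\; \int_{\Omega} \bigl[\, S_{\mathrm{eq}}(\mathbf{r}) - S_{\mathrm{kin}}(\mathbf{r}) \,\bigr]\, d\mathbf{r},
```

where $S_{\mathrm{kin}}$ is the nonequilibrium kinetic entropy, $S_{\mathrm{eq}}$ is the equilibrium entropy at the same spatial point, and $\Omega$ is the spatial region occupied by the system.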
-
Relaxation oscillations and buckling of thin shells
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 807-820
The paper reviews possibilities for predicting the buckling of thin cylindrical shells with non-destructive techniques during operation. It studies shallow shells made of high-strength materials, structures known for surface displacements exceeding the thickness of their elements. In the explored shells, relaxation oscillations of significant amplitude can be generated even under relatively low internal stresses. The oscillation of the cylindrical shell is modeled mechanically and mathematically in a simplified form by reduction to an ordinary differential equation. The model draws on the work of many authors who studied the geometry of the surface formed after buckling (postbuckling behavior). The nonlinear ordinary differential equation for the oscillating shell matches the well-known Duffing equation. Importantly, a small parameter multiplies the second time derivative in this equation, which enables a detailed analysis of the equation and a description of relaxation oscillations, the physical phenomena unique to thin high-strength shells.
It is shown that harmonic oscillations of the shell around the equilibrium position and stable relaxation oscillations are separated by a bifurcation point of the solutions to the Duffing equation, the first point in the Feigenbaum sequence at which stable periodic motions turn into dynamic chaos. The amplitude and the period of the relaxation oscillations are calculated from the physical properties and the level of internal stresses in the shell. Two loading cases are reviewed: compression along the generatrices and external pressure.
It is highlighted that if the external forces vary in time according to a harmonic law, the periodic oscillation of the shell (nonlinear resonance) is a combination of slow and stick-slip movements. Since the amplitude and the frequency of the oscillations are known, this makes it possible to propose an experimental facility for predicting shell buckling with non-destructive techniques. The following safety requirement is set: maximum load combinations must not cause displacements exceeding specified limits. Based on the results of the experimental measurements, a formula is obtained to estimate the safety against buckling (safety factor) of the structure.
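The role of the small parameter before the second derivative can be illustrated numerically. The coefficients below are hypothetical and demonstrate only the stiff, two-timescale behavior of a Duffing-type equation, not actual shell parameters:

```python
import numpy as np

# Duffing-type equation with a small parameter EPS before the second
# derivative:  EPS * x'' + DELTA * x' + ALPHA * x + BETA * x**3 = 0.
# ALPHA < 0, BETA > 0 gives a double-well restoring force; the motion
# relaxes quickly onto the slow manifold and then drifts to a well.
# All coefficients are illustrative, not from the paper.

EPS, DELTA = 0.01, 0.1
ALPHA, BETA = -1.0, 1.0
dt, steps = 1e-4, 50000          # integrate t in [0, 5]
x, v = 0.5, 0.0
xs = np.empty(steps)
for k in range(steps):
    a = -(DELTA * v + ALPHA * x + BETA * x**3) / EPS
    v += a * dt                  # semi-implicit Euler step
    x += v * dt
    xs[k] = x
```

With EPS small the dynamics split into fast adjustments of the velocity and a slow drift along the manifold where $\delta \dot{x} \approx -\alpha x - \beta x^3$; this separation of time scales is the mechanism behind the relaxation oscillations discussed above.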
-
Numerical study of the mechanisms of propagation of pulsating gaseous detonation in a non-uniform medium
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1263-1282
In the last few years, significant progress has been observed in the field of rotating detonation engines for aircraft. Scientific laboratories around the world conduct both fundamental research related, for example, to effective mixing of fuel and oxidizer under separate supply, and applied development of existing prototypes. The paper provides a brief overview of the main results of the most significant recent computational work on the propagation of a one-dimensional pulsating gaseous detonation wave in a non-uniform medium, and notes the general trends observed by the authors of these works. These works show that perturbations of the parameters ahead of the wave front can lead to regularization and to resonant amplification of the pulsations behind the detonation wave front. There is thus a practically appealing opportunity to influence the stability of the detonation wave and to control it. The aim of the present work is to create an instrument for studying the gas-dynamic mechanisms behind these effects.
The mathematical model is based on the one-dimensional Euler equations supplemented by a one-stage model of chemical kinetics. The governing system of equations is written in the shock-attached frame, which makes it necessary to add a shock-change equation. A method for integrating this equation is proposed that takes into account the change in the density of the medium ahead of the wave front. As a result, a numerical algorithm for simulating detonation wave propagation in a non-uniform medium is obtained.
Using the developed algorithm, a numerical study of the propagation of stable detonation in a medium with variable density was carried out. A mode with a relatively small oscillation amplitude is investigated, in which the fluctuations of the parameters behind the detonation wave front occur with the frequency of the fluctuations in the density of the medium. The oscillation period is shown to be related to the transit time of the C+ and C0 characteristics across the region that can conditionally be considered the induction zone. The phase shift between the oscillations of the detonation wave velocity and the gas density ahead of the wave is estimated as the maximum transit time of the C+ characteristic through the induction zone.
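A standard form of the model named above is the one-dimensional reactive Euler system with one-step Arrhenius kinetics; the exact closure and the shock-change equation used by the authors may differ in detail:

```latex
\begin{aligned}
&\partial_t \rho + \partial_x(\rho u) = 0, \qquad
\partial_t(\rho u) + \partial_x\!\left(\rho u^{2} + p\right) = 0,\\
&\partial_t(\rho E) + \partial_x\!\left[(\rho E + p)\,u\right] = 0, \qquad
\partial_t(\rho \lambda) + \partial_x(\rho u \lambda) = \rho\, k\,(1-\lambda)\,e^{-E_a/(R T)},
\end{aligned}
```

where $\lambda$ is the reaction progress variable and the total energy $E$ includes the chemical heat release; writing this system in the shock-attached frame is what introduces the additional shock-change equation mentioned above.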
-
Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122
In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence with the rate of a geometric progression to the set of exact solutions or its vicinity can be guaranteed. It is important that implementing the proposed method does not require knowing the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, we propose a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in contrast to the unconstrained setting, this raises the problem of knowing the exact minimal value of the objective function. The paper describes conditions on the inexactness of this information that make it possible to preserve convergence with the rate of a geometric progression to a vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, the problem of approximating the exact solution arises only up to a pre-selected level of accuracy; for this variant, we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to a vicinity of the desired set of points with the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem.
For it, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution with the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, in which the additional assumption on the choice of the initial point is no longer needed. For both approaches, it is proven that the distance from the current point to the set of solutions decreases as the number of iterations grows. This, in particular, makes it possible to require the properties of the functions used (Lipschitz continuity, sharp minimum) only on a bounded set. Some computational experiments are performed, including for the truss topology design problem.
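The switching scheme described above can be sketched as follows. The toy problem, the feasibility tolerance, and the exact switching rule here are illustrative; this is not the authors' algorithm, only the general pattern of productive/non-productive steps with a Polyak-type step size:

```python
import numpy as np

# Sketch of a subgradient method that switches between productive steps
# (subgradient of the objective f, Polyak step using f_star) and
# non-productive steps (subgradient of the constraint g to restore
# feasibility). Problem, tolerance and rule are illustrative.

def switching_subgradient(f, g, sub_f, sub_g, x0, f_star, eps=1e-3,
                          max_iter=5000):
    """Minimize f(x) subject to g(x) <= 0; f_star estimates min f."""
    x = np.asarray(x0, dtype=float)
    best = None
    for _ in range(max_iter):
        if g(x) <= eps:                              # productive step
            if best is None or f(x) < f(best):
                best = x.copy()
            d = sub_f(x)
            h = (f(x) - f_star) / max(d @ d, 1e-12)  # Polyak-type step
        else:                                        # non-productive step
            d = sub_g(x)
            h = g(x) / max(d @ d, 1e-12)
        x = x - h * d
    return best

# Toy instance: min ||x - c||_1  s.t.  ||x||_inf <= 1, with c outside the box.
c = np.array([2.0, 0.5])
f = lambda x: np.abs(x - c).sum()
sub_f = lambda x: np.sign(x - c)
g = lambda x: np.abs(x).max() - 1.0
def sub_g(x):
    e = np.zeros_like(x)
    i = int(np.argmax(np.abs(x)))
    e[i] = np.sign(x[i])
    return e

x_opt = np.array([1.0, 0.5])                         # true solution
x_hat = switching_subgradient(f, g, sub_f, sub_g, np.zeros(2),
                              f_star=f(x_opt))
```

Here the exact minimal value f_star is passed in; the conditions in the paper describe how much inexactness in this value can be tolerated while keeping the geometric convergence rate.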
-
Parameter estimation methods for random point fields with local interactions
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 323-332
The paper gives an overview of methods for estimating the parameters of random point fields with local interaction between points. It is shown that the conventional maximum pseudo-likelihood method is a special case of a family of estimation methods based on an auxiliary Markov process whose invariant measure is the Gibbs point field with the parameters to be estimated. A generalization of this method is proposed, resulting in an estimating equation that cannot be obtained by the universal Takacs–Fiksel method. Computer simulations show that the new method yields estimates of better quality than the widely used maximum pseudo-likelihood method.
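For reference, the maximum pseudo-likelihood estimate mentioned above maximizes the standard Besag pseudo-likelihood of a Gibbs point field on an observation window $W$ (notation ours):

```latex
\mathrm{PL}(\theta;\, x) \;=\;
\Bigl[\prod_{x_i \in x} \lambda_\theta\!\bigl(x_i \mid x \setminus \{x_i\}\bigr)\Bigr]
\exp\!\Bigl(-\int_{W} \lambda_\theta(u \mid x)\, du\Bigr),
```

where $\lambda_\theta$ is the Papangelou conditional intensity; the Takacs–Fiksel family generalizes the corresponding score equation, and the method proposed in the paper yields estimating equations outside even that family.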
-
Numerical investigations of mixing non-isothermal streams of sodium coolant in T-branch
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 95-110
A numerical investigation of the mixing of non-isothermal streams of sodium coolant in a T-branch is carried out in the FlowVision CFD software. The study aims to assess the applicability of different approaches to predicting the oscillating behavior of the flow in the mixing zone and to simulating temperature pulsations. The following approaches are considered: URANS (Unsteady Reynolds-Averaged Navier–Stokes), LES (Large Eddy Simulation) and quasi-DNS (Direct Numerical Simulation). One of the main tasks of the work is to identify the advantages and drawbacks of these approaches.
The numerical investigation of the temperature pulsations arising in the liquid and the T-branch walls from the mixing of non-isothermal streams of sodium coolant was carried out within a mathematical model assuming that the flow is turbulent, the fluid density does not depend on pressure, and heat exchange proceeds between the coolant and the T-branch walls. The LMS model, designed for modeling turbulent heat transfer, was used in the calculations within the URANS approach; it allows calculating the Prandtl number distribution over the computational domain.
A preliminary study was dedicated to estimating the influence of the computational grid on the development of the oscillating flow and the character of the temperature pulsations within the aforementioned approaches. The study resulted in the formulation of grid generation criteria for each approach.
Then, calculations of three flow regimes were carried out. The regimes differ in the ratios of the sodium mass flow rates and the temperatures at the T-branch inlets. Each regime was calculated using the URANS, LES and quasi-DNS approaches.
At the final stage of the work, an analytical comparison of the numerical and experimental data was performed. The advantages and drawbacks of each approach to simulating the mixing of non-isothermal sodium streams in the T-branch are revealed and formulated.
It is shown that the URANS approach predicts the mean temperature distribution with reasonable accuracy and requires substantially fewer computational resources and less time than the LES and DNS approaches. Its drawback is that it does not reproduce the pulsations of velocity, pressure and temperature.
The LES and DNS approaches also predict the mean temperature with reasonable accuracy and provide oscillating solutions. The obtained amplitudes of the temperature pulsations exceed the experimental ones, while the spectral power densities at the check points inside the sodium flow agree well with the experimental data. However, the computational and time expenses in the performed numerical experiments substantially exceed those of the URANS approach: by a factor of 350 for LES and 1500 for quasi-DNS.
-
The analysis of images in control systems of unmanned automobiles on the base of energy features model
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 369-376
The article shows the relevance of research in the field of creating control systems for unmanned vehicles based on computer vision technologies. Computer vision tools are used to solve a large number of different tasks, including determining the location of the car, detecting obstacles, and finding a suitable parking space. These tasks are resource-intensive and have to be performed in real time, so it is important to develop effective models, methods and tools that achieve the required speed and accuracy for use in unmanned vehicle control systems; here the choice of the image representation model matters. In this paper, we consider a model based on the wavelet transform, which makes it possible to form features characterizing the energy estimates of the image points and reflecting their significance in terms of their contribution to the overall image energy. To form the model of energy characteristics, a procedure is performed that takes into account the dependencies between the wavelet coefficients of different levels and applies heuristic adjustment factors to strengthen or weaken the influence of boundary and interior points. On the basis of the proposed model, descriptions of images and their characteristic features can be constructed for extraction and analysis, including extraction of contours, regions, and singular points. The effectiveness of the proposed approach to image analysis stems from the fact that the objects in question, such as road signs, road markings or license plates that need to be detected and identified, are well characterized by these features.
In addition, the use of wavelet transforms makes it possible to perform the same basic operations for a whole set of tasks in onboard unmanned vehicle systems, including primary processing, segmentation, description, recognition and compression of images. Such a unified approach will reduce the time needed to perform all procedures and lower the requirements on the computing resources of the onboard system of an unmanned vehicle.
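The idea of level-wise energy features can be sketched with a hand-rolled orthonormal Haar transform (a wavelet library such as PyWavelets would serve equally well). The heuristic adjustment factors and inter-level dependencies from the article are not reproduced here:

```python
import numpy as np

# Level-wise energy features from a 2-D Haar wavelet decomposition.
# The orthonormal Haar step preserves total energy, so the normalized
# per-level energies sum to 1. Image sides must be divisible by 2**levels.

def haar2d_step(a):
    """One level of a 2-D orthonormal Haar transform."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # row averages
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2) # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2) # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2) # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2) # diagonal detail
    return ll, (lh, hl, hh)

def energy_features(image, levels=3):
    a = np.asarray(image, dtype=float)
    energies = []
    for _ in range(levels):
        a, (lh, hl, hh) = haar2d_step(a)
        energies.append(float((lh**2).sum() + (hl**2).sum() + (hh**2).sum()))
    energies.append(float((a**2).sum()))          # final approximation
    total = sum(energies)
    return [e / total for e in energies]          # normalized contributions

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
feats = energy_features(img)
```

For a smooth image like this ramp, almost all of the energy sits in the coarse approximation; edges and singular points of the kind the article targets (signs, markings, plates) shift energy into the detail levels, which is what makes such features discriminative.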
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index