All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122
In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate how to describe a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence to the set of exact solutions or its vicinity at the rate of a geometric progression can be guaranteed. Importantly, implementing the proposed method does not require knowing the sharp-minimum parameter, which is usually difficult to estimate in practice. To overcome this difficulty, we propose a step-adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in contrast to the unconstrained setting, the exact minimal value of the objective function must then be known. The paper describes conditions on the inexactness of this information under which convergence at the rate of a geometric progression to a vicinity of the set of minimum points is preserved. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, approximation to the exact solution is possible only up to a pre-selected accuracy level; for this variant, we consider the case when the minimal value of the objective function is unknown and only an approximation of this value is available. We describe conditions on this inexact minimal value under which convergence to a vicinity of the desired set of points at the rate of a geometric progression is still preserved. The second variant of the sharp minimum does not depend on the desired accuracy of the problem. For it, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution at the rate of a geometric progression in the case of exact information. Convergence estimates are proved under weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, in which the additional assumption on the initial point is no longer needed. For both approaches, it is proved that the distance from the current point to the set of solutions decreases as the number of iterations grows. This, in particular, makes it possible to require the properties of the functions involved (Lipschitz continuity, sharp minimum) only on a bounded set. Some computational experiments are reported, including for the truss topology design problem.
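A minimal sketch of the switching subgradient scheme with a Polyak-type step, in the spirit of the method described above, is shown below. The toy problem, tolerance, and all names are illustrative assumptions; in particular, the optimal value passed as f_star may itself be only an approximation, as discussed in the abstract.

```python
import numpy as np

def switching_subgradient(f, g, subgrad_f, subgrad_g, x0, f_star, eps, n_iter=500):
    """Subgradient method with switching between productive and non-productive steps.

    Productive step (constraint nearly satisfied): Polyak-type step along a
    subgradient of the objective, using a (possibly approximate) optimal value
    f_star.  Non-productive step: step along a subgradient of the violated
    constraint.  Illustrative sketch only, not the authors' code.
    """
    x = np.array(x0, dtype=float)
    best = x.copy()
    for _ in range(n_iter):
        if g(x) <= eps:                       # productive step
            v = subgrad_f(x)
            h = (f(x) - f_star) / (np.dot(v, v) + 1e-15)
        else:                                  # non-productive step
            v = subgrad_g(x)
            h = g(x) / (np.dot(v, v) + 1e-15)
        x = x - max(h, 0.0) * v
        if g(x) <= eps and f(x) < f(best):
            best = x.copy()
    return best

# Toy problem: minimize ||x - c||_2 subject to <a, x> - b <= 0 (solution (1, 0)).
c, a, b = np.array([2.0, 1.0]), np.array([1.0, 1.0]), 1.0
f = lambda x: np.linalg.norm(x - c)
g = lambda x: a @ x - b
subgrad_f = lambda x: (x - c) / (np.linalg.norm(x - c) + 1e-15)
subgrad_g = lambda x: a
x_opt = switching_subgradient(f, g, subgrad_f, subgrad_g, x0=[0.0, 0.0],
                              f_star=f(np.array([1.0, 0.0])), eps=1e-6)
```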
-
Detection of the influence of the upper working roll's vibration on sheet thickness in cold rolling with the help of DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116
Views (last year): 12. Citations: 1 (RSCI).
Current trends in technical diagnostics are connected with the application of FEM computer simulation, which allows, to some extent, replacing real experiments, reducing research costs, and minimizing risks. Already at the research and development stage, computer simulation makes it possible to diagnose equipment and detect permissible fluctuations of its operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including accuracy, so the design of techniques for technical diagnostics and diagnostic modeling is very important. Computer simulation of cold rolling of a strip was carried out in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data for a continuous 1700 rolling mill. The vibration of the working roll in the stand arose due to the gap between the roll chock and the stand guide and led to periodic fluctuations of the strip thickness. After computer simulation with the DEFORM software, a strip with longitudinal and transversal thickness variation was obtained. Visualization of the strip's geometrical parameters according to the simulation data corresponded to the type of surface inhomogeneity observed on strips rolled in practice. A further analysis of the thickness variation was performed in order to identify, on the basis of the simulation, the sources of the periodic components of the strip thickness caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses about thickness formation can be tested without real experiments, reducing costs of various kinds. Moreover, in simulation the initial strip thickness has no fluctuations, in contrast to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the change in strip thickness after rolling in one stand coincides with the frequency of the working roll's vibration. The results of the computer simulation correlate with the results of the studies for the 1700 mill. Therefore, the possibility of applying computer simulation to find the causes of strip thickness variation on an industrial rolling mill is demonstrated.
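As an illustration of the spectral-analysis step, the sketch below estimates the dominant frequency of a synthetic thickness signal and compares it with an assumed roll-vibration frequency; all numbers are invented for demonstration and are not the mill data.

```python
import numpy as np
from scipy import signal

# Synthetic strip-thickness record: nominal 2 mm gauge plus a periodic
# component at an assumed roll-vibration frequency and measurement noise.
fs = 1000.0                       # sampling frequency of the gauge signal, Hz (assumed)
f_roll = 12.5                     # assumed roll-vibration frequency, Hz
t = np.arange(0, 10, 1 / fs)
thickness = 2.0 + 0.01 * np.sin(2 * np.pi * f_roll * t) + 0.002 * np.random.randn(t.size)

# Power spectral density of the thickness variation (Welch estimate).
freqs, psd = signal.welch(thickness - thickness.mean(), fs=fs, nperseg=2048)
f_peak = freqs[np.argmax(psd)]
print(f"dominant thickness-variation frequency: {f_peak:.2f} Hz")
```

If the dominant peak of the thickness spectrum coincides with the roll-vibration frequency, the vibration is a plausible source of the periodic thickness component, which is the kind of conclusion drawn in the paper.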
-
A model for justifying the focus of border security efforts at the state level
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 187-196
Views (last year): 26.
The most important principle of military science and of border security is the concentration of the main efforts on the main directions and tasks. At the tactical level there are many mathematical models for computing the optimal allocation of resources over directions and objects, whereas at the state level corresponding models are lacking. Using statistical data on the results of the protection of the US border, the parameter of an exponential-type border production function is calculated; it reflects the organizational and technological capabilities of the border guard. The production function determines the dependence of the probability of detaining offenders on the density of border guards per kilometer of the border. Financial indicators are not included in the production function, since the border maintenance budget and border equipment correlate with the number of border agents. The objective function of the border guard is defined as the total prevented damage from detained violators, taking into account their expected danger for the state and society, and it is to be maximized. Using Slater's condition, the solution of the problem was found: the optimal density of border guards was calculated for the regions of the state. With a resource-allocation model at hand, the inverse problem was also solved for the three border regions of the United States: threats in the regions were assessed based on the known allocation of resources. The expected danger from an individual offender on the US-Canadian border turns out to be 2-5 times higher than from an offender on the US-Mexican border. The results of the calculations are consistent with the views of US security experts: illegal migrants are mostly detained on the US-Mexican border, while potential terrorists prefer other channels of penetration into the US (including the US-Canadian border), where the risk of being detained is minimal. The results are also consistent with the established practice of border protection: by 2013 the number of border guards outside the checkpoints on the US-Mexican border had doubled compared with 2001, while on the US-Canadian border it had quadrupled. The practice of border protection and the views of specialists thus support the validity of the model.
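A hedged sketch of the allocation problem described above: an exponential-type production function for the detention probability and a budget constraint on the total number of guards, solved numerically. The parameter values, region data, and the specific functional form are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative allocation of border guards across regions, assuming an
# exponential "production function" p(rho) = 1 - exp(-k * rho) for the
# probability of detaining an offender at guard density rho (guards per km).
k = 0.5                                        # assumed technology parameter
length = np.array([3145.0, 3201.0, 2800.0])    # region border lengths, km (illustrative)
danger = np.array([5.0, 1.0, 2.0])             # expected damage per offender (illustrative)
flow = np.array([1.0, 10.0, 3.0])              # relative offender flow (illustrative)
total_guards = 20000.0

def neg_prevented_damage(rho):
    # Total prevented damage = sum over regions of danger * flow * detention probability.
    return -np.sum(danger * flow * (1.0 - np.exp(-k * rho)))

cons = ({'type': 'eq', 'fun': lambda rho: np.dot(rho, length) - total_guards},)
res = minimize(neg_prevented_damage, x0=np.full(3, total_guards / length.sum()),
               bounds=[(0, None)] * 3, constraints=cons)
print("optimal guard densities (per km):", np.round(res.x, 3))
```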
-
National security and geopotential of the State: mathematical modeling and forecasting
Computer Research and Modeling, 2015, v. 7, no. 4, pp. 951-969
Views (last year): 11.
Using mathematical modeling together with geopolitical, historical, and natural-science approaches, a model of national security is constructed. The security model reflects the dichotomy of the values of development and conservation and is the product of the corresponding functions. In this paper we estimate the basic parameters of the model and discuss some of its applications in the field of geopolitics and national security.
-
A modeling approach to estimate the gross and net primary production of forest ecosystems as a function of the fraction of absorbed photosynthetically active radiation
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 345-353
Views (last year): 1. Citations: 2 (RSCI).
A simple non-linear model is proposed for calculating daily and monthly GPP and NPP of forests using parameters characterizing the light-use efficiencies for GPP and NPP and integral values of absorbed photosynthetically active radiation obtained from field measurements and remote sensing data. Daily and monthly GPP and NPP of the forest ecosystems were derived from field measurements of the net ecosystem exchange of CO2 in spruce and tropical rain forests using the process-based Mixfor-SVAT model.
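For illustration, the sketch below applies the common light-use-efficiency formulation GPP = ε · fAPAR · PAR (and similarly for NPP). The coefficients and the simple multiplicative form are assumptions for demonstration and do not reproduce the specific non-linear model of the paper.

```python
import numpy as np

# Illustrative light-use-efficiency calculation of daily GPP and NPP from
# absorbed photosynthetically active radiation (APAR).
eps_gpp = 1.8   # light-use efficiency for GPP, gC / MJ APAR (assumed)
eps_npp = 0.9   # light-use efficiency for NPP, gC / MJ APAR (assumed)

par = np.array([8.4, 9.1, 7.6, 10.2])        # daily incident PAR, MJ m-2 day-1 (illustrative)
fapar = np.array([0.82, 0.85, 0.80, 0.84])   # fraction of absorbed PAR (illustrative)

apar = fapar * par                # absorbed PAR, MJ m-2 day-1
gpp_daily = eps_gpp * apar        # gC m-2 day-1
npp_daily = eps_npp * apar        # gC m-2 day-1
gpp_period = gpp_daily.sum()      # integrate daily values to obtain period totals
```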
-
Impact of a non-market advantage on equilibrium in a Hotelling model
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 573-581
The principle of minimal differentiation, based on the Hotelling model, is well known in economics. It applies to horizontally differentiated goods of almost any nature. The Hotelling approach to modeling oligopolistic competition corresponds to the modern description of monopolistic competition with increasing returns to scale and imperfect competition. We develop a modification of the Hotelling model that endows one firm with a non-market advantage, introduced in the same way as the valence advantage known in problems of political economy. The non-market (valence) advantage can be interpreted as advertising (brand awareness of the firm). Problem statement: consider two firms competing in prices and locations. Consumers are homogeneous and differ only in their location on a segment. They minimize their costs, which additively include the price of the product and the distance to the product. The utility function is linear with respect to the price and quadratic with respect to the distance. It is also assumed that one of the firms (say, firm 1) has a non-market advantage d, and consumers take into account the sum of the distance to the product and the non-market advantage of firm 1. Thus, the strategies of the firms and of the consumers depend on two parameters: the unit transport cost t and the non-market advantage d. We explore the characteristics of the equilibrium in the model as a function of the non-market advantage for different fixed t. The aim of the research is to assess the impact of the non-market advantage on the equilibrium. We prove that the Nash equilibrium exists and is unique under additive consumers' preferences depending on the square of the distance between consumers and firms. This equilibrium is richer than that in the original Hotelling model; in particular, the non-market advantage can be excessive and inefficient to use.
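The price game of such a modified Hotelling model can be illustrated numerically. The sketch below iterates best responses for two firms fixed at the endpoints of the unit segment, with quadratic transport cost t and a valence advantage d for firm 1; for this simplified specification the analytic equilibrium is p1 = t + d/3, p2 = t - d/3, which serves as a check. This is an illustrative simplification, not the full location-and-price model of the paper.

```python
import numpy as np

def demand_firm1(p1, p2, t, d):
    """Location of the consumer indifferent between firm 1 (at 0) and firm 2 (at 1),
    with quadratic transport cost and a valence advantage d for firm 1."""
    return np.clip(0.5 + (p2 - p1 + d) / (2 * t), 0.0, 1.0)

def best_response(p_rival, t, d, own_is_1, grid=np.linspace(0.0, 5.0, 5001)):
    # Maximize own profit (price * market share) over a price grid.
    share = demand_firm1(grid, p_rival, t, d) if own_is_1 \
        else 1.0 - demand_firm1(p_rival, grid, t, d)
    return grid[np.argmax(grid * share)]

t, d = 1.0, 0.3
p1, p2 = 1.0, 1.0
for _ in range(200):                 # iterate best responses to a fixed point
    p1 = best_response(p2, t, d, own_is_1=True)
    p2 = best_response(p1, t, d, own_is_1=False)

print(p1, p2)                        # analytic check: p1 = t + d/3, p2 = t - d/3
```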
-
Cytokines as indicators of the state of the organism in infectious diseases. Experimental data analysis
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1409-1426
When a person's disease is the result of a bacterial infection, various characteristics of the organism are used to monitor the course of the disease. Currently, one such indicator is the dynamics of the concentrations of cytokines, which are produced mainly by cells of the immune system. There are many types of these low-molecular-weight proteins in the human body and in many animal species. The study of cytokines is important for interpreting functional disorders of the body's immune system, assessing the severity of the disease, monitoring the effectiveness of therapy, and predicting the course and outcome of treatment. The cytokine response of the body reflects the characteristics of the course of the disease. To investigate the regularities of such indication, experiments were conducted on laboratory mice. Experimental data on the development of pneumonia and its treatment with several drugs in bacterially infected mice are analyzed. The immunomodulatory drugs Roncoleukin, Leukinferon, and Tinrostim were used. The data are represented by the concentrations of two types of cytokines in lung tissue and in the blood of the animals. Multifaceted statistical and non-statistical analysis of the data allowed us to find common patterns of change in the cytokine profile of the organism and to link them with the properties of the therapeutic preparations. The studied cytokines, interleukin-10 (IL-10) and interferon gamma (IFN$\gamma$), in infected mice deviate from the normal level of intact animals, indicating the development of the disease. Changes in cytokine concentrations in groups of treated mice are compared with those in a group of healthy (not infected) mice and a group of infected untreated mice. The comparison is made for groups of individuals, since cytokine concentrations are individual and differ significantly between animals; under these conditions, only groups of individuals can reveal the regularities of the course of the disease. These groups of mice were observed for two weeks. The dynamics of cytokine concentrations indicates the characteristics of the course of the disease and the efficiency of the therapeutic drugs used. The effect of a drug on the organisms is monitored by the location of these groups of individuals in the space of cytokine concentrations. The Hausdorff distance between the sets of cytokine-concentration vectors of individuals is used in this space; it is based on the Euclidean distance between the elements of these sets. It was found that the drugs Roncoleukin and Leukinferon have a generally similar effect on the course of the disease, which differs from that of Tinrostim.
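The group comparison described above relies on the Hausdorff distance between sets of cytokine-concentration vectors. The sketch below shows one way to compute it from pairwise Euclidean distances; the numeric data are synthetic and purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(A, B):
    """Hausdorff distance between two sets of concentration vectors (rows),
    based on pairwise Euclidean distances."""
    D = cdist(A, B)                       # Euclidean distances between all pairs
    return max(D.min(axis=1).max(),       # sup over A of the distance to B
               D.min(axis=0).max())       # sup over B of the distance to A

# Illustrative data: rows are individual mice, columns are (IL-10, IFN-gamma)
# concentrations; the numbers are synthetic, not the experimental values.
healthy  = np.array([[10.0, 15.0], [12.0, 14.0], [11.0, 16.0]])
infected = np.array([[40.0, 55.0], [38.0, 60.0], [42.0, 52.0]])
treated  = np.array([[18.0, 25.0], [20.0, 22.0], [17.0, 27.0]])

print("treated vs healthy :", hausdorff(treated, healthy))
print("treated vs infected:", hausdorff(treated, infected))
```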
Keywords: data processing, experiment, cytokine, immune system, pneumonia, statistics, approximation, Hausdorff distance.
-
Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412
The work is devoted to subgradient methods with different variants of the Polyak stepsize for minimizing functions from the class of weakly convex and relatively weakly convex functions that possess the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach makes it possible to justify convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the rate of convergence is proved for minimization problems with weakly convex functions having a sharp minimum. The feature of this estimate is that it additionally accounts for the decrease of the distance from the current point of the method to the set of solutions as the number of iterations grows. Results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the rate of convergence compared with the known one. Next, we propose a variant of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variant of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, conditions were obtained that guarantee convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained describing the influence of errors in the information about the (sub)gradient and the objective function available to the method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error $\delta > 0$, one can guarantee accuracy of the solution comparable to $\delta$.
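As an illustration of the Polyak-stepsize subgradient method on a weakly convex problem with a sharp minimum, the sketch below applies it to a phase retrieval (phase reconstruction) objective, for which the optimal value is zero on noiseless data. The problem sizes, initialization, and iteration count are assumptions for demonstration.

```python
import numpy as np

# Polyak-stepsize subgradient method for the weakly convex phase retrieval
# objective f(x) = mean_i |<a_i, x>^2 - b_i|; for noiseless data f* = 0,
# so the Polyak step needs no unknown constants.
rng = np.random.default_rng(0)
n, m = 50, 300
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true) ** 2                      # noiseless measurements, f* = 0

def f(x):
    return np.mean(np.abs((A @ x) ** 2 - b))

def subgrad(x):
    r = (A @ x) ** 2 - b
    return (2.0 / m) * A.T @ (np.sign(r) * (A @ x))

# Start near the signal (the theory requires a good enough initial point).
x = x_true + 0.3 * rng.standard_normal(n)
for _ in range(400):
    g = subgrad(x)
    x = x - (f(x) / (g @ g + 1e-15)) * g   # Polyak step with f* = 0

print("relative error:", min(np.linalg.norm(x - x_true),
                             np.linalg.norm(x + x_true)) / np.linalg.norm(x_true))
```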
-
An integrated model of an eco-economic system: the example of the Republic of Armenia
Computer Research and Modeling, 2014, v. 6, no. 4, pp. 621-631
Views (last year): 14. Citations: 7 (RSCI).
This article presents an integrated dynamic model of the eco-economic system of the Republic of Armenia (RA). The model is constructed using system dynamics methods, which make it possible to account for the major feedbacks related to the key characteristics of the eco-economic system. The model leads to a two-objective optimization problem whose target functions are the level of air pollution and the gross profit of the national economy: air pollution is minimized through modernization of stationary and mobile sources of pollution while the gross profit of the national economy is simultaneously maximized. At the same time, the eco-economic system under consideration is characterized by internal constraints that must be taken into account when making strategic decisions. As a result, we propose a systematic approach that allows forming sustainable solutions for the development of the production sector of RA while minimizing the impact on the environment. With the proposed approach, in particular, one can form a plan for optimal modernization of enterprises and predict the long-term dynamics of harmful emissions into the atmosphere.
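A heavily simplified sketch of the bi-objective trade-off (emissions versus gross profit) traced by weighted-sum scalarization is given below; all functional forms and coefficients are invented for illustration and do not reproduce the calibrated model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective trade-off: choose modernization levels u
# (stationary and mobile emission sources, each in [0, 1]) to reduce
# emissions while keeping gross profit high.
base_emissions = np.array([100.0, 60.0])   # emissions without modernization (illustrative)
modern_cost    = np.array([30.0, 20.0])    # full-modernization cost (illustrative)
base_profit    = 200.0

def emissions(u):
    return np.sum(base_emissions * (1.0 - 0.8 * u))    # up to 80 % abatement

def profit(u):
    return base_profit - np.sum(modern_cost * u ** 2)  # convex modernization cost

pareto = []
for w in np.linspace(0.0, 1.0, 11):        # weighted-sum scalarization of the two objectives
    res = minimize(lambda u: w * emissions(u) - (1 - w) * profit(u),
                   x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
    pareto.append((emissions(res.x), profit(res.x)))
```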
-
Hypergeometric functions in a model of general equilibrium of a multisector economy with monopolistic competition
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 825-836
Views (last year): 10.
We show that basic properties of some models of monopolistic competition are described using families of hypergeometric functions. The results are obtained by building a general equilibrium model of a multisector economy producing a differentiated good in $n$ high-tech sectors, in which single-product firms compete monopolistically using the same technology. The homogeneous (traditional) sector is characterized by perfect competition. Workers are motivated to find a job in the high-tech sectors because wages are higher there; however, they risk remaining unemployed. Unemployment persists in equilibrium because of labor market imperfections; wages are set by firms in the high-tech sectors as a result of negotiations with employees. Individuals are assumed to be homogeneous consumers with identical preferences given by a separable utility function of general form. In the paper, conditions are found under which the general equilibrium in the model exists and is unique. The conditions are formulated in terms of the elasticity of substitution $\mathfrak{S}$ between varieties of the differentiated good, averaged over all consumers. The equilibrium found is symmetric with respect to the varieties of the differentiated good. The equilibrium variables can be represented as implicit functions whose properties are associated with the elasticity $\mathfrak{S}$ introduced by the authors. A complete analytical description of the equilibrium variables is possible for known special cases of the consumers' utility function, for example, power functions, which, however, describe the response of the economy to changes in market size incorrectly. To simplify the implicit functions, we introduce a utility function defined by two one-parameter families of hypergeometric functions. One of the families describes a pro-competitive and the other an anti-competitive response of prices to an increase in the size of the economy. Varying the parameter of each family covers all possible values of the elasticity $\mathfrak{S}$; in this sense, the hypergeometric functions exhaust the natural utility functions. It is established that, as the elasticity of substitution between the varieties of the differentiated good increases, the difference between the high-tech and homogeneous sectors is erased. It is shown that, when the economy is large, in equilibrium individuals consume a small amount of each product, as in the case of power preferences. This fact makes it possible to approximate the hypergeometric functions by sums of power functions in a neighborhood of the equilibrium values of the argument. Thus, replacing power utility functions with hypergeometric ones approximated by the sum of two power functions, on the one hand, retains all the flexibility in configuring parameters and, on the other hand, allows describing the effects of changes in the size of the sectors of the economy.
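As a small illustration of the last point, the sketch below evaluates a utility built from a Gauss hypergeometric function and fits it near zero by a sum of two power functions. The chosen form of the utility, its parameters, and the exponents are assumptions for demonstration, not the families defined in the paper.

```python
import numpy as np
from scipy.special import hyp2f1

# Assumed illustrative utility: u(x) = x * 2F1(a, b; c; -x).
a, b, c = 0.5, 1.0, 2.0
u = lambda x: x * hyp2f1(a, b, c, -x)

# Small consumption levels, as in a large economy where each variety is
# consumed in small amounts.
x = np.linspace(1e-3, 0.2, 200)

# Fit u(x) ~ c1 * x**s1 + c2 * x**s2 with two assumed exponents.
s1, s2 = 1.0, 2.0
basis = np.column_stack([x ** s1, x ** s2])
coef, *_ = np.linalg.lstsq(basis, u(x), rcond=None)
approx = basis @ coef
print("max absolute fit error:", np.abs(approx - u(x)).max())
```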
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index