Search results for 'number of iterations':
Articles found: 18
  1. Kalmykov L.V., Kalmykov V.L.
    Investigation of individual-based mechanisms of single-species population dynamics by logical deterministic cellular automata
    Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1279-1293

    Investigation of logical deterministic cellular automata models of population dynamics makes it possible to reveal detailed individual-based mechanisms. The search for such mechanisms is important in connection with ecological problems caused by overexploitation of natural resources, environmental pollution and climate change. Classical models of population dynamics are phenomenological in nature, as they are “black boxes”. Phenomenological models fundamentally complicate the study of detailed mechanisms of ecosystem functioning. We have investigated the role of fecundity and of the duration of resource regeneration in mechanisms of population growth using four models of an ecosystem with one species. These models are logical deterministic cellular automata based on the physical axiomatics of an excitable medium with regeneration. We have modeled the catastrophic death of a population arising from an increase in the duration of resource regeneration. It has been shown that greater fecundity accelerates population extinction. The investigated mechanisms are important for understanding the sustainability of ecosystems and the conservation of biodiversity. Prospects of the presented modeling approach as a method of transparent multilevel modeling of complex systems are discussed.
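
    The logic of such an automaton can be illustrated with a toy sketch (an assumption for illustration only, not the authors' four models): a one-dimensional ring of micro-habitats in which an occupied cell reproduces into free neighboring cells, dies, and leaves its resource to regenerate for a fixed number of steps. All state encodings and names below are illustrative choices.

```python
# Toy logical deterministic cellular automaton for a single-species population
# on a 1D ring. Cell encoding (an assumption): -2 = occupied by an individual,
# -1 = free micro-habitat, k >= 0 = resource regenerating, k steps until free.

def step(state, regen_time):
    """One synchronous update of the automaton."""
    n = len(state)
    new = state.copy()
    for i in range(n):
        if state[i] == -2:            # the individual dies; resource must regenerate
            new[i] = regen_time
        elif state[i] >= 1:           # regeneration countdown
            new[i] = state[i] - 1
        elif state[i] == 0:           # resource restored, habitat becomes free
            new[i] = -1
    # reproduction: a free cell with an occupied neighbor becomes occupied
    for i in range(n):
        if new[i] == -1 and (state[(i - 1) % n] == -2 or state[(i + 1) % n] == -2):
            new[i] = -2
    return new

def population(state):
    return sum(1 for s in state if s == -2)
```

    With a long regeneration time the reproduction waves exhaust the free cells and the population dies out in a finite number of iterations, mirroring the catastrophic-extinction scenario described in the abstract.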

    Views (last year): 16. Citations: 3 (RSCI).
  2. Shibkov A.A., Kochegarov S.S.
    Computer and physical-chemical modeling of the evolution of a fractal corrosion front
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 105-124

    Corrosion damage to metals and alloys is one of the main problems for the strength and durability of metal structures and products operated in contact with chemically aggressive environments. Recently, there has been growing interest in computer modeling of the evolution of corrosion damage, especially pitting corrosion, for a deeper understanding of the corrosion process and its impact on the morphology, the physical and chemical properties of the surface, and the mechanical strength of the material. This is mainly due to the complexity of analytical studies and the high cost of experimental in situ studies of real corrosion processes. However, the computing power of modern computers makes it possible to calculate corrosion with high accuracy only on relatively small areas of the surface. Therefore, the development of new mathematical models that allow large areas to be calculated for predicting the evolution of corrosion damage to metals is currently an urgent problem.

    In this paper, the evolution of the corrosion front in the interaction of a polycrystalline metal surface with a liquid aggressive medium was studied using a computer model based on a cellular automaton. A distinctive feature of the model is the specification of the solid-body structure in the form of Voronoi polygons, used for modeling polycrystalline alloys. Corrosion destruction was simulated by specifying a probability function for transitions between cells of the cellular automaton. It was taken into account that the corrosion resistance of the grains varies due to crystallographic anisotropy. It is shown that this leads to the formation of a rough phase boundary during the corrosion process. A decrease in the concentration of active particles in the solution of the aggressive medium during the chemical reaction leads to corrosion attenuation in a finite number of calculation iterations. It is established that the final morphology of the phase boundary has a fractal structure with a dimension of 1.323 ± 0.002, close to the dimension of the gradient percolation front, which is in good agreement with the fractal dimension of the etching front of a polycrystalline aluminum-magnesium alloy AlMg6 in a concentrated solution of hydrochloric acid. It is shown that corrosion of a polycrystalline metal in a liquid aggressive medium is a new example of a topochemical process whose kinetics is described by the Kolmogorov–Johnson–Mehl–Avrami theory.
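
    The core idea can be sketched with a much cruder model than the paper's Voronoi-based automaton (column-independent dissolution and the exponential decay of the active-particle concentration are assumptions made here for brevity): an aggressive medium above a 2D solid dissolves surface cells with a probability that depends on the cell's grain resistance and decays as active particles are consumed, so corrosion stops in a finite number of iterations.

```python
import random

def corrode(width, depth, steps, c0=1.0, decay=0.9, seed=0):
    """Toy probabilistic cellular automaton of a corrosion front.
    front[j] = number of dissolved cells in column j (the front position)."""
    rng = random.Random(seed)
    # per-cell resistance in (0, 1) mimics crystallographic anisotropy of grains
    resist = [[rng.random() for _ in range(width)] for _ in range(depth)]
    front = [0] * width
    c = c0                               # concentration of active particles
    for _ in range(steps):
        for j in range(width):
            if front[j] < depth and rng.random() < c * (1.0 - resist[front[j]][j]):
                front[j] += 1            # surface cell of column j dissolves
        c *= decay                       # the reaction consumes active particles
    return front

def roughness(front):
    """RMS deviation of the front, a simple roughness measure."""
    mean = sum(front) / len(front)
    return (sum((h - mean) ** 2 for h in front) / len(front)) ** 0.5
```

    Because the dissolution probability is proportional to a decaying concentration, the front freezes after finitely many effective iterations, and its varying per-grain resistance leaves it rough rather than flat.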

  3. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

    In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, it is possible to guarantee convergence at the rate of a geometric progression to the set of exact solutions or its vicinity. It is important that implementing the proposed method does not require knowing the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, the authors propose to use a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in this case, in comparison with the class of problems without constraints, the problem arises of knowing the exact minimal value of the objective function. The paper describes conditions on the inexactness of this information which make it possible to preserve convergence at the rate of a geometric progression in the vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, the problem of approximating the exact solution arises only up to a pre-selected level of accuracy; here we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to the vicinity of the desired set of points at the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem. For this, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee the convergence of the method to the exact solution at the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, when the additional assumption on the choice of the initial point is no longer needed. For both approaches, it has been proven that the distance from the current point to the set of solutions decreases as the number of iterations grows. This, in particular, makes it possible to restrict the requirements on the properties of the functions used (Lipschitz continuity, sharp minimum) to a bounded set only. Some computational experiments are presented, including for the truss topology design problem.
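
    The switching scheme described above can be sketched as follows (a minimal sketch under the assumption that the exact minimal value f_star is known; the feasibility tolerance and safeguards are illustrative choices, not the authors' exact method): when the current point is nearly feasible, take a Polyak-type step along a subgradient of the objective; otherwise, take a non-productive step along a subgradient of the constraint to restore feasibility.

```python
# Subgradient method with switching between productive and non-productive steps
# for min f(x) subject to g(x) <= 0, with a Polyak-type step size.
# f, g: callables; df, dg: callables returning subgradients (lists of floats).

def switching_subgradient(f, df, g, dg, x0, f_star, eps, iters):
    x = list(x0)
    best = None
    for _ in range(iters):
        if g(x) <= eps:                  # productive step: decrease the objective
            s = df(x)
            h = (f(x) - f_star) / max(sum(v * v for v in s), 1e-12)
            if best is None or f(x) < f(best):
                best = list(x)           # track the best productive point
        else:                            # non-productive step: restore feasibility
            s = dg(x)
            h = g(x) / max(sum(v * v for v in s), 1e-12)
        x = [xi - h * si for xi, si in zip(x, s)]
    return best if best is not None else x
```

    On a toy problem such as f(x) = |x1| + |x2| (which has a sharp minimum at the origin) with the constraint x1 + x2 <= 1, the method alternates a feasibility step and Polyak steps and reaches the exact solution in a few iterations.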

  4. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of models parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure in which the regression parameters are estimated for a given sequence of regime switches, and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods of estimating regression parameters in models with Markov switching, which are based on calculating a posteriori probabilities of the discrete states of the switching sequence, in this paper estimates of the switching sequence are computed that are optimal by the criterion of maximum a posteriori probability. As a result, the proposed algorithm turns out to be simpler and requires fewer calculations. Computer modeling makes it possible to reveal the factors influencing the accuracy of estimation. Such factors include the number of observations, the number of unknown regression parameters, the degree of their difference in the different modes of operation, and the signal-to-noise ratio, which is associated with the coefficient of determination in regression models. The proposed algorithm is applied to the problem of estimating parameters in regression models for the daily rate of return of the RTS index, depending on the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The parameter estimates found using the proposed algorithm are compared with the estimates formed using the EViews econometric package and with ordinary least squares estimates that do not take regime switching into account. Taking regime switching into account provides a more accurate representation of the structure of the statistical dependence between the variables under study. In switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and those obtained with the EViews program.
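
    The alternating structure of such a scheme can be sketched in a deliberately simplified form (an assumption made here for brevity: regime labels are re-estimated pointwise by the smallest squared residual, ignoring the Markov transition prior, and each regime is a slope-only regression):

```python
def ols_1d(xs, ys):
    """Slope-only least squares fit of y ~ b*x."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) or 1e-12
    return num / den

def fit_switching(xs, ys, n_iter=20):
    """Alternate between (1) fitting regression parameters for fixed regime
    labels and (2) re-estimating the switching sequence for fixed parameters."""
    n = len(xs)
    labels = [0 if i < n // 2 else 1 for i in range(n)]   # crude initialization
    b = [0.0, 0.0]
    for _ in range(n_iter):
        for r in (0, 1):                                  # step 1: params per regime
            pts = [(x, y) for x, y, l in zip(xs, ys, labels) if l == r]
            if pts:
                b[r] = ols_1d([p[0] for p in pts], [p[1] for p in pts])
        labels = [0 if (y - b[0] * x) ** 2 <= (y - b[1] * x) ** 2 else 1
                  for x, y in zip(xs, ys)]                # step 2: regime sequence
    return b, labels
```

    On noiseless data with a single switch point the procedure recovers both slopes and the switching sequence exactly after a couple of alternations.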

    Views (last year): 36.
  5. Stonyakin F.S., Ablaev S.S., Baran I.V., Alkousa M.S.
    Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412

    The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for minimizing functions from the class of weakly convex and relatively weakly convex functions that have the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach makes it possible to justify the convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the rate of convergence is proved for minimization problems with weakly convex functions that have a sharp minimum. The feature of this estimate is that it additionally accounts for the decrease of the distance from the current point of the method to the set of solutions as the number of iterations grows. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the rate of convergence compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analog of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, we obtained conditions that guarantee the convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained that describes the influence of errors in the information about the (sub)gradient and the objective function available to the subgradient method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error $\delta > 0$, one can guarantee that the accuracy of the solution is comparable to $\delta$.
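
    The Polyak stepsize underlying these results can be illustrated on an assumed toy problem with a sharp minimum, f(x) = |x1| + |x2| with minimal value f* = 0 (a sketch for intuition only, not the paper's weakly convex setting):

```python
def sign(v):
    """An element of the subdifferential of |.| at v (0.0 at the kink)."""
    return 0.0 if v == 0 else (1.0 if v > 0 else -1.0)

def polyak_subgradient(x0, f_star=0.0, iters=20):
    """Subgradient method with the Polyak stepsize
    h_k = (f(x_k) - f*) / ||s_k||^2 for f(x) = |x1| + |x2|."""
    f = lambda x: abs(x[0]) + abs(x[1])
    x = list(x0)
    for _ in range(iters):
        s = [sign(x[0]), sign(x[1])]             # a subgradient of f at x
        norm2 = s[0] * s[0] + s[1] * s[1]
        h = (f(x) - f_star) / max(norm2, 1e-12)  # Polyak stepsize
        x = [x[0] - h * s[0], x[1] - h * s[1]]
    return x
```

    Because the function has a sharp minimum, the distance to the solution shrinks geometrically; on this example the iterates in fact hit the origin exactly after a few steps.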

  6. Aronov I.Z., Maksimova O.V.
    Modeling consensus building in conditions of dominance in a social group
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1067-1078

    In many social groups, for example, in technical committees for standardization at the international, regional and national levels, in European communities, among managers of ecovillages, in social movements (Occupy), and in international organizations, decision-making is based on the consensus of the group members. Instead of voting, where the majority wins over the minority, consensus allows for a solution that each member of the group supports, or at least considers acceptable. This approach ensures that the opinions, ideas and needs of all group members are taken into account. At the same time, it is noted that reaching consensus takes a long time, since it is necessary to ensure agreement within the group regardless of its size. It was shown that in some situations the number of iterations (agreements, negotiations) is very significant. Moreover, in the decision-making process there is always a risk of the decision being blocked by a minority in the group, which not only delays the decision-making time but can make a decision impossible. Typically, such a minority is one or two odious people in the group. Such a member of the group tries to dominate the discussion, never changing his opinion and ignoring the position of other colleagues. This leads to a delay in the decision-making process, on the one hand, and a deterioration in the quality of consensus, on the other, since only the opinion of the dominant member of the group has to be taken into account. To overcome the crisis in this situation, it was proposed to make decisions on the principle of «consensus minus one» or «consensus minus two», that is, to disregard the opinion of one or two odious members of the group.

    The article, based on modeling consensus using the model of regular Markov chains, examines the question of how much the decision-making time according to the «consensus minus one» rule is reduced, when the position of the dominant member of the group is not taken into account.

    The general conclusion that follows from the simulation results is that the rule of thumb for making decisions on the principle of «consensus minus one» has a corresponding mathematical justification. The simulation results showed that the application of the «consensus minus one» rule can reduce the time to reach consensus in the group by 76–95%, which is important for practice.

    The average number of agreements depends hyperbolically on the average authoritarianism of the group members (excluding the authoritarian one), which means that the agreement process can be significantly delayed at high values of the authoritarianism of the group members.
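
    The effect can be illustrated with a DeGroot-type opinion-averaging sketch (an assumption standing in for the paper's regular-Markov-chain formalism): each agent keeps a weight on its own opinion (its "authoritarianism") and spreads the rest over the others; «consensus minus one» simply drops the dominant agent before iterating.

```python
def rounds_to_consensus(weights, opinions, tol=1e-3, max_rounds=10_000):
    """Number of synchronous averaging rounds until the opinion spread
    (max - min) falls below tol. weights[i] is agent i's self-weight."""
    n = len(opinions)
    x = list(opinions)
    for k in range(1, max_rounds + 1):
        mean_others = [(sum(x) - x[i]) / (n - 1) for i in range(n)]
        x = [weights[i] * x[i] + (1 - weights[i]) * mean_others[i]
             for i in range(n)]
        if max(x) - min(x) < tol:
            return k
    return max_rounds
```

    With one near-authoritarian member (self-weight close to 1) the spread decays very slowly; removing that member makes the remaining group agree in a handful of rounds, which is the qualitative content of the «consensus minus one» rule.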

  7. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in many applied problems. The issues of developing efficient computational procedures for such problems in high-dimensional spaces are very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they lead to low speed guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that allow relatively optimistic results on the rate of convergence. For example, one option for an additional assumption is the condition of a sharp minimum, proposed in the late 1960s by B. T. Polyak. When information about the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it turned out to be possible to propose a subgradient method with a Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the condition of the availability of the minimal value of the function and the condition of a sharp minimum itself look rather restrictive. In this regard, in this paper we propose a generalized condition for a sharp minimum, somewhat similar to the inexact oracle recently proposed by Devolder, Glineur and Nesterov. The proposed approach makes it possible to extend the class of applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show the possibility of applying the proposed approach to strongly convex non-smooth problems, and we also make an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, some results were obtained concerning the applicability of the proposed technique to certain types of problems with convexity relaxations: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasiconvexity. Also in the paper, we study a generalization of the described technique to the situation in which the $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, it is possible to avoid projecting the iterative sequence onto the feasible set of the problem.

  8. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
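
    The one-pass computation can be sketched as follows (the normalization of p(t|w) via Bayes' rule over a topic prior is an assumption made for illustration; the paper derives the constraint within the ARTM framework): the document-topic vector theta is expressed directly through the word-topic matrix phi, so no iterative passes over the document's words are needed.

```python
def one_pass_embedding(doc_word_counts, phi, topic_prior):
    """Topical embedding of a document in a single pass over its words.
    doc_word_counts : dict word -> count n_dw
    phi             : dict word -> list of p(w|t) over topics t
    topic_prior     : list of p(t)
    returns theta   : list of p(t|d)
    """
    T = len(topic_prior)
    theta = [0.0] * T
    for w, n in doc_word_counts.items():
        # p(t|w) is proportional to p(w|t) * p(t)  (Bayes' rule)
        scores = [phi[w][t] * topic_prior[t] for t in range(T)]
        z = sum(scores) or 1e-12
        for t in range(T):
            theta[t] += n * scores[t] / z   # accumulate expected topic counts
    total = sum(theta) or 1e-12
    return [v / total for v in theta]       # normalize to a distribution
```

    Each word contributes its topic posterior once, weighted by its count, so the cost is linear in the document length, in contrast to running dozens of inference iterations per document.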


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

