All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Platelet transport and adhesion in shear blood flow: the role of erythrocytes
Computer Research and Modeling, 2012, v. 4, no. 1, pp. 185-200
The hemostatic system serves the organism by urgently repairing damaged blood vessel walls. Its main components, platelets, the smallest blood cells, are constantly present in the blood and quickly adhere to the site of injury. Platelet migration across the blood flow and their collisions with the vessel wall are governed by the flow conditions and, in particular, by the physical presence of other blood cells, the erythrocytes. In this review we consider the main regularities of this influence and the available mathematical models of platelet migration across the blood flow and of adhesion, based on convection-diffusion PDEs, and discuss recent advances in this field. Understanding the mechanisms of these processes is necessary for building adequate mathematical models of hemostatic system functioning in blood flow under normal and pathological conditions.
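The convection-diffusion framework mentioned above can be illustrated by a minimal one-dimensional sketch; the velocity, diffusivity, domain and initial profile below are illustrative assumptions, not values from the review.

```python
import numpy as np

# Minimal 1D convection-diffusion sketch for a platelet concentration c(x, t):
#   dc/dt + u * dc/dx = D * d^2 c / dx^2
# The velocity u, diffusivity D, domain and initial profile are illustrative
# assumptions, not values taken from the review.
L, N, n_steps = 1.0, 200, 250
dx = L / (N - 1)
u, D = 0.5, 1e-3                          # advection velocity, effective diffusivity
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # explicit-scheme stability bound

x = np.linspace(0.0, L, N)
c = np.exp(-((x - 0.2) / 0.05) ** 2)      # initial platelet "bolus" near the inlet

for _ in range(n_steps):
    dcdx = (c[1:-1] - c[:-2]) / dx                     # upwind convection term (u > 0)
    d2cdx2 = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central diffusion term
    c[1:-1] = c[1:-1] + dt * (-u * dcdx + D * d2cdx2)
    c[0], c[-1] = c[1], c[-2]                          # crude zero-flux boundaries

print("bolus centre advected to x =", round(float(x[np.argmax(c)]), 3))
print("total platelet mass:", round(float(c.sum() * dx), 4))
```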
-
Adaptive first-order methods for relatively strongly convex optimization problems
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 445-472
The article is devoted to first-order adaptive methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition with a distance in a more general sense (more precisely, with a Bregman divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows us to apply the proposed techniques to many applied problems, such as the intersection of ellipsoids problem (IEP), the Support Vector Machine (SVM) for a binary classification problem, etc. If the objective functional is convex, the condition of relative strong convexity can be satisfied via problem regularization. In this work, we propose, for the first time, adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to relatively smooth and to relatively Lipschitz-continuous functionals. Additionally, we demonstrate the optimality of the proposed universal gradient-type methods, up to a constant factor, for both classes of relatively strongly convex problems. We also show how to apply the technique of restarting the mirror descent algorithm to solve relatively Lipschitz-continuous optimization problems and prove an optimal estimate of the convergence rate of this technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
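As a rough illustration of the restart technique mentioned above, the sketch below specializes mirror descent to the Euclidean prox-structure (i.e., the projected subgradient method) on a toy strongly convex nonsmooth problem; the paper works with general Bregman divergences and adaptive parameters, and the constants here are illustrative only.

```python
import numpy as np

# A rough sketch of the restart technique for a mu-strongly convex, M-Lipschitz
# objective, specialized (as an assumption) to the Euclidean prox-structure;
# the paper works with general Bregman divergences and adaptive parameters.
def mirror_descent(subgrad, x0, R, M, n_steps):
    """Fixed-step subgradient (Euclidean mirror) descent; returns the averaged iterate."""
    x, x_avg = x0.copy(), np.zeros_like(x0)
    h = R / (M * np.sqrt(n_steps))                  # classical step size
    for _ in range(n_steps):
        x = x - h * subgrad(x)
        x_avg += x / n_steps
    return x_avg

def restarted_md(subgrad, x0, R0, M, mu, n_restarts):
    """After each restart the squared distance-to-optimum estimate R^2 is halved."""
    x, R = x0, R0
    for _ in range(n_restarts):
        n_steps = int(np.ceil(16 * M**2 / (mu * R) ** 2))  # enough steps to halve R^2
        x = mirror_descent(subgrad, x, R, M, n_steps)
        R /= np.sqrt(2)
    return x

# toy problem: f(x) = mu/2 * ||x||^2 + ||x||_1, minimized at x = 0
mu, dim = 1.0, 5
subgrad = lambda x: mu * x + np.sign(x)
x = restarted_md(subgrad, x0=3.0 * np.ones(dim), R0=10.0,
                 M=mu * 10.0 + np.sqrt(dim), mu=mu, n_restarts=8)
print("approximate minimizer:", np.round(x, 3))
```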
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.
We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
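A minimal single-node sketch of the smoothing idea described above (not the distributed min-max algorithm of the paper): the gradient of the smoothed function is estimated from two function values along a random direction. The toy objective, step size and smoothing parameter are illustrative assumptions.

```python
import numpy as np

# Minimal single-node sketch of randomized smoothing with a two-point zero-order
# gradient estimate; the paper's distributed min-max setting is not reproduced here.
def two_point_grad(f, x, tau, rng):
    """Estimate of the gradient of the smoothed function f_tau at x from two values of f."""
    e = rng.normal(size=x.shape)
    e /= np.linalg.norm(e)                           # direction uniform on the unit sphere
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

f = lambda x: np.abs(x).sum()                        # toy nonsmooth convex objective
rng = np.random.default_rng(0)
x, tau, step = np.ones(10), 1e-2, 5e-3
print("initial f(x):", f(x))
for _ in range(40000):                               # plain zero-order (sub)gradient descent
    x -= step * two_point_grad(f, x, tau, rng)
print("final f(x):  ", f(x))
```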
Keywords: convex optimization, distributed optimization.
-
Mathematical consensus model of loyal experts based on regular Markov chains
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1381-1393
The theoretical study of consensus makes it possible to analyze, abstracting from the specific characteristics of the groups, the various situations that social groups making decisions in this way face in real life. It is relevant in practice to study the dynamics of a social group consisting of loyal experts who, in the process of seeking consensus, yield to each other. In this case, psychological “traps” such as false consensus or groupthink are possible, which can sometimes lead to managerial decisions with dire consequences.
The article builds a mathematical consensus model for a group of loyal experts based on regular Markov chains. Analysis of the model showed that with an increase in the loyalty (decrease in the authoritarianism) of group members, the time to reach consensus (the number of agreements) increases exponentially, which is apparently due to the experts' reluctance to take on part of the responsibility for the decision being made. An increase in the size of such a group leads (ceteris paribus):
– to a reduction in the number of approvals needed to reach consensus when the members strive for absolute loyalty, i.e., each additional loyal member adds less and less “strength” to the group;
– to a logarithmic increase in the number of approvals as the average authoritarianism of the members increases. It is shown that in a small group (two people) the time to reach consensus can be more than 10 times longer than in a group of 5 or more members, since within the group responsibility for decision making is transferred.
It is proved that in the case of a group of two absolutely loyal members, consensus is unattainable.
A reasoned conclusion is drawn that consensus in a group of loyal experts is a special (particular) case of consensus, since the dependence of the time to reach consensus on the experts' authoritarianism and their number in the group is described by different curves than in the case of an ordinary group of experts.
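The qualitative behavior described above can be probed with a small DeGroot-style simulation over a regular Markov chain. The influence matrix below (each expert keeps weight equal to his authoritarianism on his own opinion and splits the rest equally among the others) is an assumed form, not the exact model of the article.

```python
import numpy as np

# Assumed DeGroot-style form of the regular-Markov-chain consensus model: expert i keeps
# weight a_i ("authoritarianism") on his own opinion and splits 1 - a_i equally among
# the others; loyalty corresponds to small a_i. We count update rounds until agreement.
def rounds_to_consensus(a, x0, tol=1e-3, max_rounds=10**6):
    n = len(a)
    W = np.empty((n, n))
    for i in range(n):
        W[i, :] = (1.0 - a[i]) / (n - 1)
        W[i, i] = a[i]                       # row-stochastic influence matrix
    x = np.array(x0, dtype=float)
    for t in range(1, max_rounds + 1):
        x = W @ x
        if x.max() - x.min() < tol:          # opinions have effectively converged
            return t
    return None

# a pair of loyal experts agrees more and more slowly as loyalty grows (a -> 0) ...
for a in (0.2, 0.1, 0.05, 0.01):
    print(f"2 experts, authoritarianism {a}: {rounds_to_consensus([a, a], [0.0, 1.0])} rounds")
# ... while larger groups of equally loyal experts converge much faster, and two
# absolutely loyal experts (a = 0) merely swap opinions and never reach consensus.
for n in (2, 5, 8):
    print(f"{n} loyal experts (a = 0.05):",
          rounds_to_consensus([0.05] * n, np.linspace(0, 1, n)), "rounds")
```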
Keywords: consensus, false consensus, groupthink, social groups, Markov chains, time to reach consensus.
-
The use of syntax trees in order to automate the correction of LaTeX documents
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 871-883
The problem is to automate the correction of LaTeX documents. Each document is represented as a parse tree. A modified Zhang-Shasha algorithm is used to construct a mapping of the tree vertices of the original document onto the tree vertices of the edited document that corresponds to the minimum editing distance. The vertex-to-vertex mappings form a training set, which is used to generate rules for automatic correction. For each rule, statistics on its applicability to the edited documents are collected and used to assess and improve the quality of the rules.
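A minimal sketch of the building block described above: two small parse trees of a LaTeX fragment before and after editing, compared with the Zhang-Shasha algorithm. It relies on the third-party zss package (assumed installed via pip install zss) and only computes the edit distance; the paper's modified algorithm additionally recovers the vertex-to-vertex mapping that realizes it.

```python
from zss import Node, simple_distance

# Parse tree of a fragment in the original document: \frac{a}{b}
original = Node("frac").addkid(Node("a")).addkid(Node("b"))
# Parse tree of the same fragment after manual correction: \dfrac{a}{b+1}
edited = (Node("dfrac")
          .addkid(Node("a"))
          .addkid(Node("+").addkid(Node("b")).addkid(Node("1"))))

# Zhang-Shasha minimum tree edit distance between the two parse trees; the mapping of
# vertices realizing it (used in the paper to mine correction rules) is not returned here.
print("tree edit distance:", simple_distance(original, edited))
```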
-
Additive regularization of topic models with fast text vectorization
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528
The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
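The one-pass computation of a topical document embedding from the word-topic matrix can be sketched as follows. The particular rule below (a Bayes inversion of the word-topic matrix with a uniform topic prior, accumulated over the document's word counts) is one plausible linear-time choice and is not necessarily the exact constraint used in the paper.

```python
import numpy as np

# Sketch of a one-pass topical embedding of a document given the word-topic matrix Phi.
# The rule (theta proportional to the sum over document words of p(t | w) obtained from
# Phi by Bayes' rule with a uniform topic prior) is an assumed linear-time choice.
def embed_document(word_counts, phi):
    """word_counts: dict {word_id: count}; phi: array (n_words, n_topics) of p(w | t)."""
    n_topics = phi.shape[1]
    p_t_given_w = phi / np.maximum(phi.sum(axis=1, keepdims=True), 1e-12)  # p(t | w)
    theta = np.zeros(n_topics)
    for w, n_dw in word_counts.items():      # single pass over the document's words
        theta += n_dw * p_t_given_w[w]
    return theta / max(theta.sum(), 1e-12)   # normalize to a probability vector

rng = np.random.default_rng(0)
phi = rng.dirichlet(np.ones(50), size=3).T   # toy Phi: 50 words, 3 topics, columns sum to 1
doc = {0: 2, 7: 1, 13: 4, 42: 1}             # toy bag of words
print("topical embedding:", np.round(embed_document(doc, phi), 3))
```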
-
Modeling the dynamics of a plankton community considering the trophic characteristics of zooplankton
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 525-554
We propose a four-component model of a plankton community with discrete time. The model takes into account the competitive relationships between phytoplankton groups and the trophic characteristics of zooplankton: zooplankton is divided into predatory and non-predatory components, and the consumption of non-predatory zooplankton by predatory zooplankton is represented explicitly. Non-predatory zooplankton feeds on phytoplankton, which consists of two competing components, a toxic and a non-toxic type, with only the latter being suitable as food for zooplankton. A model of two coupled Ricker equations, designed to describe the dynamics of a competitive community, describes the interaction of the two phytoplankton groups and implicitly accounts for the limitation of the biomass growth of each competing component by the availability of external resources. The consumption of prey by predators is described by a Holling type II trophic function, which takes predator saturation into account.
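A schematic discrete-time sketch assembling the ingredients listed above is given below; the positivity-preserving exponential form, the way the Holling type II terms enter the equations, and all parameter values are illustrative assumptions rather than the equations of the paper.

```python
import numpy as np

def intake(prey, a, h):
    """Holling type II per-predator intake of prey (attack rate a, handling time h)."""
    return a * prey / (1.0 + a * h * prey)

def mortality(prey, pred, a, h):
    """Per-prey mortality exponent induced by Holling type II predation."""
    return a * pred / (1.0 + a * h * prey)

r1, r2 = 2.2, 2.0                       # Ricker growth rates: non-toxic, toxic phytoplankton
b12, b21 = 0.6, 0.4                     # competition coefficients between the two groups
a1, h1, e1, m1 = 1.0, 1.0, 0.5, 0.3     # grazing on non-toxic phytoplankton
a2, h2, e2, m2 = 0.8, 1.0, 0.5, 0.2     # predation on non-predatory zooplankton

x, y, z, w = 0.5, 0.5, 0.2, 0.1         # non-toxic, toxic, non-predatory zoo, predatory zoo
for t in range(300):
    x, y, z, w = (
        x * np.exp(r1 - x - b12 * y - mortality(x, z, a1, h1)),   # coupled Ricker + grazing
        y * np.exp(r2 - y - b21 * x),                             # toxic group is not grazed
        z * np.exp(-m1 + e1 * intake(x, a1, h1) - mortality(z, w, a2, h2)),
        w * np.exp(-m2 + e2 * intake(z, a2, h2)),
    )
print("state after 300 steps (x, y, z, w):", tuple(round(v, 3) for v in (x, y, z, w)))
```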
The analysis of scenarios for the transition from stationary dynamics to fluctuations in the population sizes of the community members showed that the community loses the stability of the non-trivial equilibrium corresponding to the coexistence of the complete community both through a cascade of period-doubling bifurcations and through a Neimark – Sacker bifurcation leading to the emergence of quasi-periodic oscillations. Although quite simple, the model proposed in this work demonstrates community dynamics similar to those observed in natural systems and experiments: a lag of predator oscillations relative to the prey by about a quarter of the period, long-period antiphase cycles of predator and prey, as well as hidden cycles in which the prey density remains almost constant while the predator density fluctuates, demonstrating the influence of fast evolution that masks the trophic interaction. At the same time, variation of the intra-population parameters of phytoplankton or zooplankton can lead to pronounced changes in the dynamic mode of the community: sharp transitions from regular to quasi-periodic dynamics and further to exact cycles with a small period or even to stationary dynamics. Quasi-periodic dynamics can arise at sufficiently small phytoplankton growth rates corresponding to stable or regular community dynamics. The change of the dynamic mode in this region (the transition from stable dynamics to quasi-periodic and vice versa) can occur due to a variation of the initial conditions or an external influence that changes the current abundances of the components and shifts the system to the basin of attraction of another dynamic mode.
-
Theoretical modeling of consensus building in the work of standardization technical committees in the presence of coalitions based on regular Markov chains
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1247-1256
Decisions in social groups are often made by consensus. This applies, for example, to the expert examination in a technical committee for standardization (TC) before the approval of a national standard by Rosstandart: the standard is approved if and only if consensus is secured in the TC. The same approach to standards development has been adopted in almost all countries and at the regional and international levels. The authors' previously published works were devoted to constructing a mathematical model of the time to reach consensus in technical committees for standardization under variation of the number of TC members and their level of authoritarianism. The present study continues these works for the case of coalitions, which are often formed during the consideration of a draft standard in the TC. In the article, a mathematical model of reaching consensus in the work of technical committees for standardization in the presence of coalitions is constructed. Within the framework of the model it is shown that in the presence of coalitions consensus is not achievable. However, coalitions are, as a rule, overcome during the negotiation process; otherwise, the number of adopted standards would be extremely small. This paper analyzes the factors that influence the overcoming of coalitions: the size of the concession and an index of the coalition's influence. Their effects on the time to reach consensus in the technical committee are investigated on the basis of statistical modeling of regular Markov chains. It is proved that the time to reach consensus significantly depends on the size of the coalition's unilateral concession and weakly depends on the sizes of the coalitions. A regression model of the dependence of the average number of approvals on the size of the concession is built. It is revealed that even a small concession leads to the onset of consensus, and increasing the size of the concession leads (other factors being equal) to a sharp decline in the time to consensus. It is also shown that a concession made by a larger coalition to a smaller one takes, on average, more time to reach consensus. The result is of practical value for all organizational structures in which the emergence of coalitions makes decision making by consensus impossible and requires consideration of various methods for reaching a consensus decision.
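The effect of a unilateral concession can be illustrated with a small DeGroot-style simulation over an influence matrix of two coalitions; the matrix structure and the way the concession enters it are assumptions made for illustration, not the exact model of the article.

```python
import numpy as np

# Illustrative sketch (an assumed form of the Markov-chain model): two coalitions whose
# members initially weight only their own coalition, plus a unilateral concession delta
# with which the first coalition's members shift part of their weight to the second.
# delta = 0 gives a reducible (non-regular) chain and no consensus; increasing delta
# sharply reduces the time to consensus.
def rounds_to_consensus(n1, n2, delta, tol=1e-3, max_rounds=10**5):
    n = n1 + n2
    W = np.zeros((n, n))
    W[:n1, :n1] = (1.0 - delta) / n1     # coalition 1: mostly listens to itself...
    W[:n1, n1:] = delta / n2             # ...but concedes weight delta to coalition 2
    W[n1:, n1:] = 1.0 / n2               # coalition 2 listens only to itself
    x = np.concatenate([np.zeros(n1), np.ones(n2)])   # opposed initial positions
    for t in range(1, max_rounds + 1):
        x = W @ x
        if x.max() - x.min() < tol:
            return t
    return None                          # no consensus within max_rounds

for delta in (0.0, 0.01, 0.05, 0.2):
    print(f"concession {delta}: rounds to consensus = {rounds_to_consensus(5, 3, delta)}")
```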