All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
Describing processes in photosynthetic reaction center ensembles using a Monte Carlo kinetic model
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1207-1221
The photosynthetic apparatus of a plant cell consists of multiple photosynthetic electron transport chains (ETCs). Each ETC is capable of capturing and utilizing light quanta, which drive electron transport along the chain. Light assimilation efficiency depends on the plant's current physiological state. The energy of the quanta that cannot be utilized dissipates into heat or is emitted as fluorescence. Under high light conditions, fluorescence gradually rises to a maximum level. The curve describing this rise is called the fluorescence rise (FR). It has a complex shape that changes depending on the state of the photosynthetic apparatus, which makes it possible to investigate that state using only non-invasive measurement of the FR.
When measuring fluorescence experimentally, we record the response of millions of photosynthetic units at once. To reproduce the probabilistic nature of the processes in a photosynthetic ETC, we created a Monte Carlo model of this chain. The model describes an ETC as a sequence of interconnected electron carriers in a thylakoid membrane. Depending on the current ETC state, these carriers have certain probabilities of capturing photons, transferring excited states, or reducing one another. The events that take place in each model ETC are registered, accumulated, and used to build fluorescence rise curves and kinetics of electron carrier redox state accumulation. This paper describes the model structure, the principles of its operation, and the relations between particular model parameters and the shape of the resulting kinetic curves. Model curves include the photosystem II reaction center fluorescence rise and the kinetics of photosystem I reaction center redox state changes under different conditions.
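The abstract does not include code; the following is a minimal illustrative sketch of the general Monte Carlo mechanism it describes. The two-state reaction-center scheme and all probability values are hypothetical, not the model's actual parameters.

    import random

    # Hypothetical per-step probabilities (illustrative, not the paper's values)
    P_PHOTON   = 0.05  # an open reaction center captures a photon
    P_TRANSFER = 0.30  # an excited center passes the electron down the chain
    P_FLUOR    = 0.02  # an excited center loses the quantum as fluorescence

    def step(states, counts):
        """One Monte Carlo step over an ensemble of ETCs.

        Each ETC is reduced to a single reaction-center state:
        'open' (ready to accept a photon) or 'excited' (holds an excitation).
        Fluorescence events are accumulated in counts['fluor'].
        """
        for i, s in enumerate(states):
            r = random.random()
            if s == "open" and r < P_PHOTON:
                states[i] = "excited"
            elif s == "excited":
                if r < P_TRANSFER:
                    states[i] = "open"       # electron transferred downstream
                elif r < P_TRANSFER + P_FLUOR:
                    states[i] = "open"       # quantum emitted as fluorescence
                    counts["fluor"] += 1

    # Accumulating events over many steps yields a model fluorescence rise curve.
    ensemble = ["open"] * 100_000
    counts = {"fluor": 0}
    curve = []
    for _ in range(500):
        step(ensemble, counts)
        curve.append(counts["fluor"])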
Using extended ODE systems to investigate the mathematical model of blood coagulation
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 931-951
Many properties of the solutions of systems of ordinary differential equations are determined by the properties of the equations in variations. An ODE system that includes both the original nonlinear system and the equations in variations will be called an extended system below. When studying the properties of the Cauchy problem for systems of ordinary differential equations, the transition to extended systems allows one to study many subtle properties of solutions. For example, it makes it possible to increase the order of approximation of numerical methods, provides approaches to constructing the sensitivity function without numerical differentiation procedures, and allows methods of higher convergence order to be used for solving the inverse problem. The authors used the Broyden method, which belongs to the class of quasi-Newton methods. The Rosenbrock method with complex coefficients was used to solve the stiff systems of ordinary differential equations; in our case, it is equivalent to a second-order approximation method for the extended system.
As an example of the proposed approach, several related mathematical models of the blood coagulation process were considered. Based on the analysis of the numerical results, it was concluded that a description of the factor XI positive feedback loop must be included in the system of model equations. Estimates of some reaction constants based on the numerical solution of the inverse problem were given.
The effect of factor V release on platelet activation was considered. The modification of the mathematical model made it possible to achieve quantitative agreement between the dynamics of thrombin production and experimental data for an artificial system. Based on the sensitivity analysis, the hypothesis was tested that the lipid membrane composition (the number of sites for the various clotting factors, except for thrombin sites) does not influence the dynamics of the process.
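For reference, the extended system mentioned above couples the original ODE with its equations in variations. In a generic notation (this textbook form is ours, not a formula quoted from the paper), for $\dot{x} = f(t, x, p)$ the extended system is

$$\dot{x} = f(t, x, p), \qquad \dot{V} = \frac{\partial f}{\partial x}(t, x, p)\,V + \frac{\partial f}{\partial p}(t, x, p), \qquad V(t_0) = 0,$$

where $V(t) = \partial x(t)/\partial p$ is the sensitivity function with respect to the parameters $p$. Integrating the extended system yields $x$ and $V$ together, so sensitivities are obtained without any numerical differentiation.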
Connection between discrete financial models and continuous models with Wiener and Poisson processes
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 781-795
The paper is devoted to the study of relationships between discrete and continuous models of financial processes and their probabilistic characteristics. First, a connection is established between the price processes of stocks, hedging portfolios, and options in models driven by binomial perturbations and in their limit models driven by perturbations of Brownian motion type. Second, analogies are established between the coefficients of stochastic equations with various random processes, continuous and jump-like, and the coefficients of the corresponding deterministic equations for their probabilistic characteristics. Stating the results on these connections and analogies required an adequate presentation of preliminary information and results from financial mathematics, as well as descriptions of the related objects of stochastic analysis. The paper presents partially new and known results in a form accessible to readers who are not specialists in financial mathematics and stochastic analysis, but for whom these results are important from the point of view of applications. Specifically, the following sections are presented.
• In one- and $n$-period binomial models, a unified approach is proposed for defining, on the probability space, a risk-neutral measure under which the discounted option price becomes a martingale. The resulting martingale formula for the option price is suitable for numerical simulation (see the formulas after this list). In the following sections, the risk-neutral measure approach is applied to study financial processes in continuous-time models.
• In continuous time, models of the stock price, hedging portfolios, and options are considered in the form of stochastic equations with the Itô integral over Brownian motion and over a compensated Poisson process. The study of the properties of these processes in this section is based on one of the central objects of stochastic analysis, the Itô formula; special attention is given to the methods of its application.
• The famous Black – Scholes formula is presented, which gives a solution to the partial differential equation for the function $v(t, x)$ that, upon the substitution $x = S(t)$, where $S(t)$ is the stock price at time $t$, gives the price of the option in the model with continuous perturbation by Brownian motion.
• An analogue of the Black – Scholes formula is suggested for the model with a jump-like perturbation by the Poisson process. The derivation of this formula is based on the technique of risk-neutral measures and the independence lemma.
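For reference, the one-period objects referred to above have the standard textbook form (well-known formulas, not expressions quoted from the paper): with up and down factors $u > d$ and riskless rate $r$, the risk-neutral probability and the martingale option price are

$$\tilde{p} = \frac{1 + r - d}{u - d}, \qquad V_0 = \frac{1}{1 + r}\left[\tilde{p}\,V_u + (1 - \tilde{p})\,V_d\right],$$

and the classical Black – Scholes price of a European call with strike $K$, volatility $\sigma$ and maturity $T$ is

$$v(t, x) = x\,\Phi(d_+) - K e^{-r(T - t)}\,\Phi(d_-), \qquad d_{\pm} = \frac{\ln(x/K) + \left(r \pm \sigma^2/2\right)(T - t)}{\sigma\sqrt{T - t}},$$

where $\Phi$ is the standard normal distribution function.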
Multicriteria metric data analysis in human capital modelling
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1223-1245
The article describes a model of a human in the information economy and demonstrates a multi-criteria optimization approach to the metric analysis of model-generated data. The traditional approach involves identifying the model from time series and then using it for prediction. However, this is not possible when some variables are not observed explicitly and only certain typical ranges or population features are known, which is often the case in the social sciences and leaves some models purely theoretical. To avoid this problem, we propose a method of metric data analysis (MMDA) for the identification and study of such models, based on the construction and analysis of Kolmogorov – Shannon metric nets of the general population in a multidimensional space of social characteristics. Using this method, the coefficients of the model are identified and the features of its phase trajectories are studied. We describe a human in terms of his role in information processing, considering his awareness and cognitive abilities. We construct two lifetime indices of individual human capital, creative (generalizing cognitive abilities) and productive (generalizing the amount of information mastered by a person), and formulate the problem of their multi-criteria (two-criteria) optimization taking life expectancy into account. This approach allows us to identify and economically justify new requirements for the education system and the information environment of human existence. It is shown that a Pareto frontier exists in the optimization problem and that its type depends on mortality rates: at high life expectancy there is one dominant solution, while at lower life expectancy different types of Pareto frontier arise. In particular, the Pareto principle applies to Russia: a significant increase in an individual's creative human capital (generalizing his cognitive abilities) is possible at the cost of a small decrease in productive human capital (generalizing awareness). It is shown that an increase in life expectancy makes the competence-based approach (focused on the development of cognitive abilities) optimal, while at low life expectancy the knowledge-based approach is preferable.
On the question of choosing the structure of a multivariate regression model, using the example of the analysis of burnout factors of artists
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 265-274
The article discusses how research goals influence the structure of a multivariate regression model (in particular, the procedure for reducing the model's dimension). It is shown how bringing the specification of the multiple regression model in line with the research objectives affects the choice of modeling methods. Two schemes for constructing the model are compared: the first does not take into account the typology of the primary predictors and the nature of their influence on the outcome characteristics; the second involves a stage of preliminary division of the initial predictors into groups, in accordance with the objectives of the study. Using the problem of analyzing the causes of burnout among creative workers as an example, the importance of the stage of qualitative analysis and systematization of a priori selected factors is shown; this stage is implemented not by computational means but by drawing on the knowledge and experience of specialists in the subject area. The presented approach to determining the specification of the regression model combines formalized mathematical and statistical procedures with a preceding stage of classification of the primary factors. This stage makes it possible to explain the scheme of managing (corrective) actions: softening the leadership style and increasing approval lead to a decrease in manifestations of anxiety and stress, which in turn reduces the severity of the emotional exhaustion of team members. Preliminary classification also avoids combining, within one principal component, controllable and uncontrollable, regulating and regulated feature factors, which could worsen the interpretability of the synthesized predictors. Using a specific problem as an example, it is shown that the selection of regressor factors is a process requiring an individual solution. In the case under consideration, the following were applied in sequence: systematization of features, correlation analysis, principal component analysis, and regression analysis (see the sketch below). The first three methods made it possible to significantly reduce the dimension of the problem without compromising the goal for which the task was posed: meaningful measures of managerial influence on the team were identified, allowing the degree of emotional burnout of its members to be reduced.
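The paper's survey data are not reproduced here; the following is a minimal sketch of the four-step pipeline named above on synthetic data (all item names, thresholds, and sizes are hypothetical).

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical survey: 120 respondents, 10 correlated burnout-factor items
    # (leadership style, approval, anxiety, stress, ...); sizes are ours.
    X = rng.normal(size=(120, 10))
    X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]     # make two items strongly correlated
    y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=120)  # "exhaustion"

    # Steps 1-2 (systematization + correlation analysis): drop near-duplicate items.
    corr = np.corrcoef(X, rowvar=False)
    keep = [j for j in range(X.shape[1])
            if not any(abs(corr[i, j]) > 0.9 for i in range(j))]

    # Step 3 (PCA): compress the remaining items into a few synthesized predictors.
    pca = PCA(n_components=3).fit(X[:, keep])
    Z = pca.transform(X[:, keep])

    # Step 4 (regression): regress the burnout score on the principal components.
    model = LinearRegression().fit(Z, y)
    print("explained variance:", pca.explained_variance_ratio_)
    print("R^2:", model.score(Z, y))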
On Tollmien – Schlichting instability in numerical solutions of the Navier – Stokes equations obtained with a 16th-order multioperator-based scheme
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 953-967
The paper presents the results of applying a scheme of very high accuracy and resolution to obtain numerical solutions of the Navier – Stokes equations for a compressible gas describing the onset and development of instability of a two-dimensional laminar boundary layer on a flat plate. A distinctive feature of the studies is the absence of the commonly used artificial exciters of instability in the direct numerical simulation. The multioperator scheme used made it possible to observe the subtle effects of the birth of unstable modes and the complex character of their development, presumably owing to its small approximation errors. A brief description of the scheme design and its main properties is given. The formulation of the problem and the method of obtaining initial data are described, which makes it possible to reach the established non-stationary regime fairly quickly. A technique is given for detecting flow fluctuations with amplitudes many orders of magnitude smaller than the mean values. A time-dependent picture of the appearance of packets of Tollmien – Schlichting waves of varying intensity in the vicinity of the leading edge of the plate, and of their downstream propagation, is presented. The presented amplitude spectra, with peak values broadening in the downstream regions, indicate the excitation of new unstable modes different from those arising near the leading edge. The analysis of the evolution of the instability waves in time and space shows agreement with the main conclusions of the linear theory. The numerical solutions obtained seem to describe for the first time the complete scenario of the possible development of the Tollmien – Schlichting instability, which often plays an essential role at the initial stage of the laminar-turbulent transition. They open up the possibility of full-scale numerical modeling of this process, which is extremely important in practice, with a similar study of the spatial boundary layer.
Retail forecasting on high-frequency depersonalized data
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1713-1734
Technological development leads to the emergence of data that are highly detailed in time and space, which expands the possibilities of analysis, allowing consumer decisions and the competitive behavior of enterprises to be considered in all their diversity, taking into account the context of the territory and the characteristics of the time periods. Despite the promise of such studies, they are currently scarce in the scientific literature. This is due to a range of problems whose solution is considered in this paper. The article draws attention to the complexity of analyzing depersonalized high-frequency data and to the possibility of modeling changes in consumption in time and space on their basis. The features of this new type of data are considered using real depersonalized data received from the fiscal data operator “First OFD” (JSC “Energy Systems and Communications”). It is shown that, along with the spectrum of problems inherent in high-frequency data, there are drawbacks associated with the process of data generation on the sellers' side, which requires wider use of data mining tools. A series of statistical tests was carried out on the data, including a unit root test, a test for unobserved individual effects, and tests for serial correlation and for cross-sectional dependence in panels. The presence of spatial autocorrelation was tested using modified Lagrange multiplier tests. The tests showed consistent correlation and spatial dependence in the data, which justifies applying panel and spatial analysis methods to the high-frequency data accumulated by fiscal operators. The constructed models made it possible to substantiate the spatial relationship of sales growth and its dependence on the day of the week (an illustrative spatial autocorrelation sketch is given below). A limitation on increasing the predictive ability of the constructed models, and on their subsequent extension with explanatory factors, was the lack of openly available statistics grouped with the required detail in time and space, which makes the formation of high-frequency, geographically structured databases relevant.
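As an illustration of the kind of spatial-dependence diagnostics mentioned above, here is a hand-coded Moran's I statistic (the paper itself uses modified Lagrange multiplier tests on panel data; the weight matrix and sales figures below are made up).

    import numpy as np

    def morans_i(x, W):
        """Moran's I statistic for spatial autocorrelation.

        x : 1-D array of observations (e.g., sales growth by district),
        W : spatial weight matrix (W[i, j] > 0 if districts i and j are neighbors).
        """
        x = np.asarray(x, dtype=float)
        z = x - x.mean()
        n = len(x)
        return (n / W.sum()) * (z @ W @ z) / (z @ z)

    # Hypothetical example: 4 districts on a line, each adjacent to the next.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    sales_growth = np.array([2.1, 1.9, -0.5, -0.7])  # made-up values
    print(morans_i(sales_growth, W))  # positive: neighboring districts move together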
Theoretical modeling of consensus building in the work of technical committees for standardization under coalitions, based on regular Markov chains
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1247-1256
Decisions in social groups are often made by consensus. This applies, for example, to the expert review in a technical committee for standardization (TC) before a national standard is approved by Rosstandart: the standard is approved if and only if consensus is secured in the TC. The same approach to standards development has been adopted in almost all countries and at the regional and international levels. Previously published works of the authors were devoted to constructing a mathematical model of the time to reach consensus in technical committees for standardization as a function of the number of TC members and their level of authoritarianism. The present study continues these works for the case of coalitions, which often form during the consideration of a draft standard in a TC. In the article, a mathematical model of consensus building in the work of technical committees for standardization under coalitions is constructed. Within the model it is shown that, in the presence of coalitions, consensus is unattainable. In practice, however, coalitions are, as a rule, overcome during the negotiation process; otherwise the number of adopted standards would be extremely small. The paper analyzes the factors that influence the overcoming of coalitions: the size of the concession and the index of the coalition's influence. On the basis of statistical modeling of regular Markov chains, their effect on the time to reach consensus in the technical committee is investigated (an illustrative simulation sketch is given below). It is shown that the time to reach consensus depends significantly on the size of a coalition's unilateral concession and weakly on the sizes of the coalitions. A regression model of the dependence of the average number of approvals on the size of the concession is built. It was revealed that even a small concession leads to the onset of consensus, and increasing the size of the concession leads (other factors being equal) to a sharp decline in the time to consensus. It is shown that a concession by a larger coalition to a smaller one takes, on average, more time to consensus. The result has practical value for all organizational structures where the emergence of coalitions makes decision-making by consensus impossible and requires the consideration of various methods for reaching a consensus decision.
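The authors' exact Markov chain model is not reproduced in the abstract; the sketch below only illustrates the qualitative mechanism with a DeGroot-type averaging scheme, where a concession weight controls how fast coalition members move toward the group mean (all numbers are hypothetical).

    import numpy as np

    rng = np.random.default_rng(1)

    def time_to_consensus(n=10, coalition=range(5), concession=0.1,
                          tol=0.01, max_steps=10_000):
        """Steps until all members' positions agree within `tol`.

        Members repeatedly average their position with the group mean
        (a regular-Markov-chain / DeGroot-type update). Coalition members
        move toward the mean only by the small `concession` weight;
        the rest use a fixed weight of 0.5.
        """
        x = rng.uniform(0, 1, n)               # initial positions on a draft standard
        weights = np.full(n, 0.5)
        weights[list(coalition)] = concession  # a stubborn coalition concedes slowly
        for t in range(max_steps):
            if x.max() - x.min() < tol:
                return t
            x = (1 - weights) * x + weights * x.mean()  # averaging step
        return max_steps

    # Even a small concession yields consensus; larger concessions reach it faster.
    for c in (0.02, 0.05, 0.1, 0.3):
        print(f"concession={c}: {time_to_consensus(concession=c)} steps")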
Reduced model of photosystem II and its use to evaluate photosynthetic apparatus characteristics from fluorescence induction curves
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 943-958
An approach to the analysis of some large-scale biological systems, based on quasi-equilibrium stages, is proposed. The approach allows us to reduce detailed large-scale models and obtain a simplified model with an analytical solution, which makes it possible to reproduce the experimental curves with good accuracy. This approach has been applied to a detailed model of the primary processes of photosynthesis in the reaction center of photosystem II. The resulting simplified model of photosystem II describes the experimental fluorescence induction curves for higher and lower plants obtained under different light intensities. The derived relationships between the variables and parameters of the detailed and simplified models allow us to use the parameters of the simplified model to describe the dynamics of the various states of the detailed photosystem II model.
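The quasi-equilibrium reduction referred to above follows the standard singular perturbation pattern (a generic textbook form, not a formula from the paper): for a fast/slow system

$$\dot{x} = f(x, y), \qquad \varepsilon\,\dot{y} = g(x, y), \qquad 0 < \varepsilon \ll 1,$$

the fast stages are assumed equilibrated, $g(x, y) = 0$, which is solved for $y = h(x)$ and substituted into the slow subsystem, $\dot{x} = f(x, h(x))$, reducing the dimension of the model.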
Decomposition of the task of modeling some objects of archaeological research for processing in a distributed computer system
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 533-537
Although each task of recreating artifacts is truly unique, the modeling process for façades, foundations, and building elements can be parametrized. This paper focuses on a complex of existing programming libraries and solutions that need to be united into a single computer system to solve such a task. An algorithm for generating the 3D filling of objects under reconstruction is presented. The solution architecture necessary for adapting the system to a cloud environment is studied.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index