-
Digital modeling of geometric and macroroughness parameters of a highway
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 837-844
An original representation of a statistical digital model of macroroughness measurement on a local section (up to 15) is offered, consisting of a deterministic component (bias), correlated components (a standard periodic component and periodic deviations from flatness), and a purely random component (the macroroughness values themselves).
-
Modeling consensus building in conditions of dominance in a social group
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1067-1078
In many social groups, such as technical committees for standardization at the international, regional and national levels, European communities, councils of ecovillages, social movements (occupy) and international organizations, decision-making is based on the consensus of the group members. Instead of voting, where the majority wins over the minority, consensus allows for a solution that each member of the group supports, or at least considers acceptable. This approach ensures that the opinions, ideas and needs of all group members are taken into account. At the same time, reaching consensus takes a long time, since agreement must be ensured within the group regardless of its size. It has been shown that in some situations the number of iterations (agreements, negotiations) is very significant. Moreover, in the decision-making process there is always a risk that the decision will be blocked by a minority in the group, which not only delays decision-making but can make it impossible altogether. Typically, such a minority consists of one or two odious people in the group. Such a member tries to dominate the discussion, never changing his opinion and ignoring the positions of other colleagues. This delays the decision-making process, on the one hand, and degrades the quality of the consensus, on the other, since only the opinion of the dominant member of the group ends up being taken into account. To overcome the crisis in this situation, it has been proposed to make decisions on the principle of «consensus minus one» or «consensus minus two», that is, to disregard the opinion of one or two odious members of the group.
Based on modeling consensus with the model of regular Markov chains, the article examines how much the decision-making time is reduced under the «consensus minus one» rule, when the position of the dominant member of the group is not taken into account.
The general conclusion from the simulation results is that the empirical rule of making decisions on the principle of «consensus minus one» has a corresponding mathematical justification. The simulations showed that applying the «consensus minus one» rule can reduce the time to reach consensus in the group by 76–95%, which is important in practice.
The average number of agreements depends hyperbolically on the average authoritarianism of the group members (excluding the dominant one), which means that the agreement process can stall when the authoritarianism of the group members is high.
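For illustration, a minimal sketch (not the authors' code) of how such an experiment can be set up: a DeGroot-style repeated-averaging process, which is the regular-Markov-chain view of consensus, with one near-immovable dominant member. The group size, trust weights and tolerance below are illustrative assumptions.

```python
import numpy as np

def iterations_to_consensus(w, x0, tol=1e-3, max_iter=100_000):
    """Count averaging rounds until all opinions agree to within tol."""
    x = x0.copy()
    for k in range(1, max_iter + 1):
        x = w @ x                         # each member averages the group's opinions
        if x.max() - x.min() < tol:
            return k
    return max_iter

rng = np.random.default_rng(0)
n = 10                                    # group size (assumed)
self_trust = rng.uniform(0.3, 0.6, n)     # "authoritarianism" of ordinary members
self_trust[0] = 0.999                     # one dominant, near-immovable member

# Row-stochastic influence matrix: self-weight on the diagonal,
# the remainder of each row spread evenly over the other members.
w = np.zeros((n, n))
for i in range(n):
    w[i, :] = (1.0 - self_trust[i]) / (n - 1)
    w[i, i] = self_trust[i]

x0 = rng.uniform(0, 1, n)                 # initial opinions
full = iterations_to_consensus(w, x0)

# «Consensus minus one»: drop the dominant member and renormalize the rows.
w1 = w[1:, 1:].copy()
w1 /= w1.sum(axis=1, keepdims=True)
minus_one = iterations_to_consensus(w1, x0[1:])

print(f"full group: {full} rounds; consensus minus one: {minus_one} rounds")
```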
-
Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444
The article studies the application of convex optimization methods to the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation is of elliptic type. The Cauchy problem is formulated as an inverse problem and reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using solutions of boundary value problems, which are, in turn, well-posed and can be approximately solved by standard numerical methods, such as finite-difference schemes and Fourier series expansions. The convergence of the fast gradient method and the quality of the solution obtained in this way are investigated experimentally. The experiments show that the accelerated gradient method, the Similar Triangles Method, converges faster than the non-accelerated one. Theorems on the computational complexity of the resulting algorithms are formulated and proved. Fourier series expansions are found to be faster than finite-difference schemes and to improve the quality of the obtained solution. An attempt was made to restart the Similar Triangles Method each time the residual of the functional is halved. The convergence does not improve in this case, which confirms the absence of strong convexity. The experiments also show that the inaccuracy of the calculations is more adequately described by the additive model of noise in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, accelerated gradient optimization methods can be an effective way to solve inverse problems.
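As a hedged illustration of the optimization loop only, here is a sketch that assumes the inverse problem has already been reduced to minimizing a smooth convex functional $J(q)$ with a computable gradient; a toy least-squares functional stands in for the paper's boundary-value-problem oracle, and a generic Nesterov-type accelerated scheme stands in for the Similar Triangles Method.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((200, 50))   # stand-in linear forward operator (assumed)
f = rng.standard_normal(200)         # stand-in observed boundary data (assumed)

def grad(q):
    """Gradient of the toy functional J(q) = 0.5 * ||A q - f||^2."""
    return a.T @ (a @ q - f)

lip = np.linalg.norm(a, 2) ** 2      # Lipschitz constant of grad J (sigma_max^2)

q = np.zeros(50)
y = q.copy()
t = 1.0
for _ in range(500):
    q_new = y - grad(y) / lip                     # gradient step from the extrapolated point
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = q_new + (t - 1) / t_new * (q_new - q)     # momentum extrapolation
    q, t = q_new, t_new

print("final residual:", 0.5 * np.linalg.norm(a @ q - f) ** 2)
```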
-
Valuation of machines under a random process of degradation and premature sale
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 797-815
A model of the process of using machinery and equipment is considered that takes into account the probabilistic nature of their operation and sale. It allows for random hidden failures, after which the condition of a machine deteriorates abruptly, as well as for the randomly arising need for a premature sale of the machine (before the end of its service life), which, generally speaking, takes a random amount of time. The model is aimed at assessing the market value and service life of machines in accordance with International Valuation Standards. Strictly speaking, the market value of a used machine depends on its technical condition, but in practice appraisers take into account only its age, since generally accepted measures of the technical condition of machines do not yet exist. As a result, the market value of a used machine is taken to be the average market value of similar machines of the corresponding age. For these purposes, appraisers use coefficients reflecting the influence of the age of machines on their market value. Such coefficients are not always justified and take into account neither the degradation of the machine nor the probabilistic nature of its use. The proposed model is based on the anticipation-of-benefits principle. In it, we characterize the state of the machine by the intensity of the benefits it brings. The machine is subject to a compound Poisson failure process, and after a failure its condition abruptly worsens and may even reach its limiting state. Situations also arise that preclude further use of the machine by its owner. In such situations, the owner puts the machine up for sale prematurely (before the end of its service life), and the sale takes a random amount of time. The model makes it possible to take into account the influence of such situations, to construct an analytical relationship linking the market value of a machine with its condition, and to calculate the average coefficients of the change in the market value of machines with age. It is also possible to take into account the influence of inflation and the scrap value of the machine. We found that the rate of premature sales has a significant impact on the service life and market value of new and used machines, while the dependence of the market value of machines on age is largely determined by the coefficient of variation of the machines' service life. The results obtained allow more reasonable estimates of the market value of machines, including for the purposes of the system of national accounts.
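A minimal Monte Carlo sketch of the valuation idea (the paper itself derives an analytical relationship): the benefit flow degrades by compound-Poisson jumps, a second Poisson stream triggers a premature sale, and the value is the expected discounted benefit flow plus the discounted scrap value. All rates, jump sizes and the discretization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_value(b0=1.0, b_limit=0.2, lam_fail=0.5, lam_sale=0.1,
                   r=0.08, scrap=0.05, dt=0.01, horizon=40.0):
    """One discounted-benefit trajectory of a degrading machine."""
    b, t, value = b0, 0.0, 0.0
    while t < horizon and b > b_limit:
        if rng.random() < lam_sale * dt:     # premature-sale event occurs
            break
        if rng.random() < lam_fail * dt:     # hidden failure: abrupt degradation
            b -= rng.exponential(0.15)       # random downward jump in benefit intensity
        value += b * np.exp(-r * t) * dt     # accumulate discounted benefits
        t += dt
    return value + scrap * np.exp(-r * t)    # plus discounted scrap value

estimate = np.mean([simulate_value() for _ in range(2000)])
print(f"estimated machine value: {estimate:.3f}")
```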
-
Statistically fair price for European call options according to the discrete mean/variance model
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 861-874
We consider a portfolio with a call option and the corresponding underlying asset under the standard assumption that the stock-market price is a random variable with a lognormal distribution. Minimizing the variance of the portfolio's hedging risk at the maturity date of the call option, we find the fraction of the asset per unit call option. As a direct consequence, we derive the statistically fair call option price in explicit form. In contrast to the famous Black–Scholes theory, no portfolio can be regarded as risk-free, because no additional transactions are supposed to be conducted over the life of the contract; however, a sequence of independent portfolios reduces the risk to zero asymptotically. This property is illustrated in the experimental section using a dataset of daily stock prices of 37 leading US-based companies from April 2006 to January 2013.
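A minimal Monte Carlo sketch of the static mean/variance hedge described above (the paper derives the result in closed form): under a lognormal terminal price $S_T$, the variance-minimizing fraction of the asset per unit call is $h^* = \mathrm{Cov}(S_T, C_T)/\mathrm{Var}(S_T)$, where $C_T$ is the option payoff. The parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
s0, k, mu, sigma, t = 100.0, 105.0, 0.05, 0.2, 1.0   # assumed market parameters

# Lognormal terminal price and European call payoff at maturity.
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t
                  + sigma * np.sqrt(t) * rng.standard_normal(200_000))
payoff = np.maximum(s_t - k, 0.0)

cov = np.cov(s_t, payoff)                 # 2x2 sample covariance matrix
h = cov[0, 1] / cov[0, 0]                 # variance-minimizing hedge fraction

hedged = payoff - h * s_t                 # residual exposure after the static hedge
print(f"h* = {h:.4f}; std hedged = {hedged.std():.3f} vs unhedged = {payoff.std():.3f}")
```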
-
Application of the correlation adaptometry technique to sports and biomedical research
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 345-354
The paper outlines approaches to the mathematical modeling of the correlation adaptometry technique, which is widely used in biology and medicine. The analysis is based on models employed in descriptions of structured biological systems; it is assumed that the distribution density of the biological population numbers satisfies the Kolmogorov–Fokker–Planck equation. Using this technique, the effectiveness of the treatment of patients with obesity was evaluated. All patients were divided into three groups depending on the degree of obesity and the nature of comorbidity. A decrease in the weight of the correlation graph, computed from the indicators measured in the patients, is shown for all studied groups, which characterizes the effectiveness of the treatment. The technique was also used to assess the intensity of training loads in academic rowing for three age groups; it was shown that the athletes of the youth group trained under the highest strain. In addition, the correlation adaptometry technique was used to evaluate the effectiveness of hormone replacement therapy in women. All patients were divided into four groups depending on the prescribed drug. A standard analysis of the dynamics of the mean values of the indicators showed that the averages normalized during treatment in all groups. However, correlation adaptometry revealed that the weight of the correlation graph decreased during the first six months of treatment and increased during the second six months in all studied groups. This indicates that the annual course of hormone replacement therapy is excessively long and that switching to a semiannual course is practical.
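The quantity at the heart of the technique is the weight of the correlation graph. A minimal sketch, assuming the common convention of summing the absolute pairwise correlations of the indicators that exceed a threshold (the 0.5 threshold and the random data below are assumptions):

```python
import numpy as np

def correlation_graph_weight(data, threshold=0.5):
    """Weight of the correlation graph for one group.

    data: (subjects x indicators) array of measured indicators.
    """
    r = np.corrcoef(data, rowvar=False)        # indicator-by-indicator correlations
    iu = np.triu_indices_from(r, k=1)          # count each pair of indicators once
    strong = np.abs(r[iu]) >= threshold        # keep only sufficiently strong links
    return np.abs(r[iu])[strong].sum()

rng = np.random.default_rng(4)
before = rng.standard_normal((30, 8))          # e.g., 30 patients, 8 indicators (toy data)
after = rng.standard_normal((30, 8))
print(correlation_graph_weight(before), correlation_graph_weight(after))
```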
-
Identification of the author of a text by the segmentation method
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1199-1210
The paper describes a method for recognizing the authors of literary texts by the proximity of the fragments into which a text is divided to the author's standard. The standard is the empirical frequency distribution of letter combinations built on a training sample that includes expertly selected, reliably attributed works of the author. A set of standards of different authors forms a library, within which the problem of identifying the author of an unknown text is solved. Proximity between texts is understood in the sense of the L1 norm applied to the vector of letter-combination frequencies, which is constructed for each fragment and for the text as a whole. An unknown text is attributed to the author whose standard is most often chosen as the closest one for the set of fragments into which the text is divided. The fragment length is optimized based on the principle of the maximum difference in distances from fragments to standards in the «friend-or-foe» recognition problem. The method was tested on a corpus of domestic and foreign (translated) authors: 1783 texts of 100 authors with a total volume of about 700 million characters were collected. To exclude bias in the selection of authors, authors whose surnames begin with the same letter were considered; in particular, for the letter L the identification error was 12%. Along with fairly high accuracy, the method has another important property: it allows one to estimate the probability that the standard of the author of the text in question is missing from the library. This probability can be estimated from the statistics of the nearest standards for small fragments of the text. The paper also examines statistical digital portraits of writers: joint empirical distributions of the probability that a certain share of a text is identified at a given level of trust. The practical importance of these statistics is that the supports of the corresponding distributions barely overlap for one's own and others' standards, which makes it possible to recognize the author's reference distribution of letter combinations at a high level of confidence.
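A minimal sketch of the fragment-voting scheme, assuming plain letter-trigram frequencies and the L1 norm; the fragment-length optimization, text cleanup and library construction in the paper are more elaborate.

```python
from collections import Counter

def trigram_freq(text):
    """Normalized letter-trigram frequencies of a text."""
    letters = "".join(ch for ch in text.lower() if ch.isalpha())
    counts = Counter(letters[i:i + 3] for i in range(len(letters) - 2))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def l1_distance(p, q):
    """L1 distance between two sparse frequency vectors."""
    return sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in set(p) | set(q))

def identify(text, standards, fragment_len=5000):
    """Each fragment votes for its nearest standard; the majority wins.

    standards: {author: trigram_freq(concatenated known texts)}.
    """
    votes = Counter()
    for i in range(0, len(text) - fragment_len + 1, fragment_len):
        freq = trigram_freq(text[i:i + fragment_len])
        nearest = min(standards, key=lambda a: l1_distance(freq, standards[a]))
        votes[nearest] += 1
    return votes.most_common(1)[0][0]
```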
-
Numerical simulation of fluid flow in a blood pump in the FlowVision software package
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025-1038
A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, involved fluid flow under several design modes, with a specific fluid flow rate and rotor speed set for each computed case. The necessary data, in the form of the exact geometry, flow conditions and fluid characteristics, were provided to all research participants, who used different software packages for the modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological fluid model. In the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with about 6 million cells was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster to speed up the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the inlet and outlet of the pump and the velocities between the rotor blades and in the diffuser area, and we visualized the velocity distribution in selected cross-sections. For all design modes, the numerically obtained pressure difference was compared with the experimental data, and for the fifth calculation mode the velocity distributions between the rotor blades and in the diffuser area were also compared with the experiment. The data analysis showed good agreement of the FlowVision results with the experimental results and with numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test suggest that the package can be used to solve a wide range of hemodynamic problems.
-
Comparative analysis of statistical methods for classifying scientific publications in medicine
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933
This paper compares various methods for the machine classification of scientific texts by thematic section, using publications in specialized medical journals published by Springer as an example. A corpus of texts in five sections was studied: pharmacology/toxicology, cardiology, immunology, neurology and oncology. Both classification methods based on the analysis of abstracts and keywords and methods based on processing the full texts were considered. Bayesian classification, support vector machines and reference letter combinations were applied. The most accurate classification method turned out to be the one based on a library of standards of letter trigrams corresponding to texts on a given subject. For this corpus, the Bayesian method gives an error of about 20%, the support vector machine an error of about 10%, and the proximity of the text's three-letter-combination distribution to the thematic standard an error of about 5%, which makes it possible to rank these methods for use in artificial intelligence applications to text classification by specialty. Importantly, the support vector method provides the same accuracy when analyzing abstracts as when analyzing full texts, which is valuable for reducing the number of operations on large text corpora.
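A minimal sketch of this kind of comparison using scikit-learn stand-ins: multinomial naive Bayes and a linear SVM over character trigrams. The two-class toy corpus below is a placeholder for the paper's five-section Springer corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus; in the paper, abstracts or full texts from five sections.
texts = ["the drug dose and toxicity of the compound were studied"] * 10 \
      + ["cardiac output and valve function after infarction were measured"] * 10
labels = ["pharmacology"] * 10 + ["cardiology"] * 10

for name, clf in [("naive Bayes", MultinomialNB()), ("linear SVM", LinearSVC())]:
    pipe = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(3, 3)),  # letter trigrams
        clf,
    )
    scores = cross_val_score(pipe, texts, labels, cv=5)        # 5-fold accuracy
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```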
-
Personalization of mathematical models in cardiology: obstacles and perspectives
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930Most biomechanical tasks of interest to clinicians can be solved only using personalized mathematical models. Such models allow to formalize and relate key pathophysiological processes, basing on clinically available data evaluate non-measurable parameters that are important for the diagnosis of diseases, predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require model validation on clinical cases, the speed and automation of the entire calculated technological chain, from processing input data to obtaining a result. Limitations on the simulation time, determined by the time of making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within the framework of reduced models or machine learning tools.
Personalization of models requires patient-oriented parameters, personalized geometry of a computational domain and generation of a computational mesh. Model parameters are estimated by direct measurements, or methods of solving inverse problems, or methods of machine learning. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take into account the patient’s characteristics. Methods for setting personalized boundary conditions significantly depend on the clinical setting of the problem and clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational grid, as a rule, takes a lot of time and effort due to manual or semi-automatic operations. Development of automated methods for setting personalized boundary conditions and segmentation of medical images with the subsequent construction of a computational grid is the key to the widespread use of mathematical modeling in clinical practice.
The aim of this work is to review our solutions for personalization of mathematical models within the framework of three tasks of clinical cardiology: virtual assessment of hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, calculating characteristics of coaptation of reconstructed aortic valve.
Keywords: computational biomechanics, personalized model.