All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
Personalization of mathematical models in cardiology: obstacles and perspectives
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930
Most biomechanical tasks of interest to clinicians can be solved only with personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, from clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require validation of a model on clinical cases, as well as speed and automation of the entire computational chain, from processing of input data to obtaining the result. Limits on the simulation time, determined by the time available for making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models, or of machine learning tools.
Personalization of models requires patient-oriented parameters, a personalized geometry of the computational domain and the generation of a computational mesh. Model parameters are estimated from direct measurements, by solving inverse problems, or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters, which must be obtainable under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient’s characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the available clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort because of manual or semi-automatic operations. The development of automated methods for setting personalized boundary conditions and for segmentation of medical images, with the subsequent construction of a computational mesh, is the key to the widespread use of mathematical modeling in clinical practice.
The aim of this work is to review our solutions for the personalization of mathematical models within three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.
Keywords: computational biomechanics, personalized model.
Schools on mathematical biology 1973–1992
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 411-422
This is a brief review of the subjects, and an impression of some of the talks, given at the Schools on modelling complex biological systems. Those Schools reflected the logical progress of this way of thinking in our country and provided a place for collective “brain-storming” inspired by prominent scientists of the last century, such as A. A. Lyapunov, N. V. Timofeeff-Ressovsky, and A. M. Molchanov. At the Schools, general issues of the methodology of mathematical modeling in biology and ecology were raised in the form of heated debates, and the fundamental principles of how the structure of matter is organized and how complex biological systems function and evolve were discussed. The Schools served as an important example of interdisciplinary interaction among scientists of distinct perceptions of the World, of distinct approaches and ways of reaching the boundaries of the Unknown, rather than merely of different specializations. What brought together the mathematicians and biologists attending the Schools was the common understanding that the alliance should be fruitful. Reported in the issues of School proceedings, the presentations, discussions, and reflections have not lost their relevance and may serve as guidance for a new generation of scientists.
Describing processes in photosynthetic reaction center ensembles using a Monte Carlo kinetic model
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1207-1221
The photosynthetic apparatus of a plant cell consists of multiple photosynthetic electron transport chains (ETC). Each ETC is capable of capturing and utilizing light quanta, which drive electron transport along the chain. Light assimilation efficiency depends on the plant’s current physiological state. The energy of those quanta that cannot be utilized dissipates into heat or is emitted as fluorescence. Under high light conditions, the fluorescence level gradually rises to its maximum. The curve describing that rise is called the fluorescence rise (FR). It has a complex shape, and that shape changes depending on the state of the photosynthetic apparatus. This makes it possible to investigate that state using only non-invasive measurement of the FR.
When measuring fluorescence experimentally, we get a response from millions of photosynthetic units at a time. To reproduce the probabilistic nature of the processes in a photosynthetic ETC, we created a Monte Carlo model of this chain. The model describes an ETC as a sequence of interconnected electron carriers in a thylakoid membrane. Those carriers have certain probabilities of capturing photons, transferring excited states, or reducing each other, depending on the current ETC state. The events that take place in each of the model photosynthetic ETCs are registered, accumulated, and used to build the fluorescence rise and the accumulation kinetics of electron carrier redox states. This paper describes the model structure, the principles of its operation, and the relations between certain model parameters and the shape of the resulting kinetic curves. Model curves include the photosystem II reaction center fluorescence rise and the kinetics of photosystem I reaction center redox state changes under different conditions.
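To make the Monte Carlo approach concrete, here is a minimal sketch (not the authors' code) in which each photosynthetic unit is reduced to a single reaction center that is either open or closed; all probabilities and ensemble sizes are illustrative assumptions.

```python
import random

# A minimal sketch (not the authors' code): each photosynthetic unit is
# reduced to a single reaction center that is either open (can use a photon)
# or closed. P_HIT, P_REOPEN, N_UNITS, N_STEPS are illustrative assumptions.

N_UNITS, N_STEPS = 10_000, 200
P_HIT = 0.2      # probability that a unit absorbs a photon in a time step
P_REOPEN = 0.01  # probability that a closed center passes its electron on

closed = [False] * N_UNITS
fluorescence = []

for step in range(N_STEPS):
    emitted = 0
    for i in range(N_UNITS):
        if random.random() < P_HIT:
            if closed[i]:
                emitted += 1       # closed center: excitation lost as fluorescence
            else:
                closed[i] = True   # open center: photon drives charge separation
        if closed[i] and random.random() < P_REOPEN:
            closed[i] = False      # electron transferred downstream, center reopens
    fluorescence.append(emitted / N_UNITS)

# `fluorescence` now traces a simple fluorescence-rise-like curve:
# it grows as more centers close and saturates near P_HIT.
```

Accumulating per-step event counts over the whole ensemble, as in the last line, is what turns individual stochastic trajectories into smooth kinetic curves.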
Modeling the number of employed, unemployed and economically inactive population in the Russian Far East
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 251-264
Studies of the crisis socio-demographic situation in the Russian Far East require not only the use of traditional statistical methods, but also a conceptual analysis of possible development scenarios based on the principles of synergetics. The article is devoted to the analysis and modeling of the numbers of employed, unemployed and economically inactive population using nonlinear autonomous differential equations. We studied a basic mathematical model that takes into account the principle of pair interactions and is a special case of D. S. Chernavsky’s model of the struggle between conditional information. Point estimates of the parameters were found using the least squares method adapted for this model; the average approximation error was no more than 5.17%. The calculated parameter values correspond to an unstable focus and to oscillations of population numbers with increasing amplitude in the asymptotic case, which indicates a gradual increase in disparities between the employed, unemployed and economically inactive population and a collapse of their dynamics. We found that in the parametric space, not far from the inertial scenario, there are domains of blow-up and chaotic regimes, which complicate effective management. The numerical study showed that a change in only one model parameter (e.g. migration), without complex structural socio-economic changes, can only delay the collapse of the dynamics in the long term, or leads to the emergence of unpredictable chaotic regimes. We found an additional set of model parameters corresponding to sustainable dynamics (a stable focus) which approximates well the time series of the considered population groups. In the mathematical model, the bifurcation parameters are the outflow rate of the able-bodied population, the fertility (“rejuvenation of the population”), and the migration inflow rate of the unemployed. We found that the transition to stable regimes is possible with a simultaneous impact on several parameters, which requires a comprehensive set of measures to consolidate the population in the Russian Far East and to increase the level of income as compensation for infrastructure sparseness. Further economic and sociological research is required to develop specific state policy measures.
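For illustration only, the following sketch integrates a generic three-group system with pairwise (bilinear) interaction terms in the spirit of Chernavsky-type models; the right-hand side and all coefficients are invented for the demo and are not the fitted equations or parameter values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A generic three-group model with pairwise (bilinear) interaction terms.
# The right-hand side and all coefficients below are illustrative
# assumptions, not the equations or fitted parameters from the paper.

A = np.array([0.02, -0.01, 0.005])          # linear growth/decline rates
B = np.array([[0.0,  -0.004,  0.002],       # B[i, j]: effect of group j
              [0.003, 0.0,   -0.001],       #          on group i via x_i * x_j
              [-0.002, 0.001, 0.0]])

def rhs(t, x):
    # dx_i/dt = A_i * x_i + sum_j B_ij * x_i * x_j  (pair-interaction principle)
    return x * (A + B @ x)

x0 = [3.5, 0.4, 1.2]   # employed, unemployed, economically inactive (millions)
sol = solve_ivp(rhs, (0.0, 50.0), x0)

print(sol.y[:, -1])    # group sizes at the end of the horizon
```

Fitting A and B to observed time series, as the authors do with an adapted least squares method, then reduces scenario analysis to studying how the equilibria of this system change with the parameters.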
Using extended ODE systems to investigate the mathematical model of the blood coagulation
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 931-951
Many properties of the solutions of systems of ordinary differential equations are determined by the properties of the equations in variations. An ODE system that includes both the original nonlinear system and the equations in variations will be referred to below as an extended system. When studying the properties of the Cauchy problem for systems of ordinary differential equations, the transition to extended systems allows one to study many subtle properties of solutions. For example, it makes it possible to increase the order of approximation of numerical methods, provides approaches to constructing sensitivity functions without numerical differentiation procedures, and allows methods of increased convergence order to be used for solving the inverse problem. The authors used the Broyden method, which belongs to the class of quasi-Newton methods. The Rosenbrock method with complex coefficients was used to solve the stiff systems of ordinary differential equations; in our case, it is equivalent to a second-order approximation method for the extended system.
As an example of the proposed approach, several related mathematical models of the blood coagulation process were considered. Based on the analysis of the numerical results, it was concluded that a description of the factor XI positive feedback loop must be included in the system of model equations. Estimates of some reaction constants, based on the numerical solution of the inverse problem, are given.
The effect of factor V release on platelet activation was considered. The modification of the mathematical model made it possible to achieve quantitative agreement between the dynamics of thrombin production and experimental data for an artificial system. Based on the sensitivity analysis, we tested the hypothesis that the lipid membrane composition (the number of sites for the various clotting factors, except for thrombin sites) does not influence the dynamics of the process.
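To illustrate the idea of an extended system on a toy problem (not the coagulation model itself): for dx/dt = f(x, p), the sensitivity s = ∂x/∂p satisfies the equation in variations ds/dt = (∂f/∂x)s + ∂f/∂p, and both equations can be integrated together.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic illustration of an "extended system": the scalar ODE
#   dx/dt = -p * x            (a stand-in model, not the coagulation equations)
# is augmented with its equation in variations for s = dx/dp:
#   ds/dt = (df/dx) * s + df/dp = -p * s - x
# Integrating (x, s) together yields the sensitivity without numerically
# differentiating the solution with respect to p.

p = 0.7

def extended_rhs(t, y):
    x, s = y
    return [-p * x, -p * s - x]

sol = solve_ivp(extended_rhs, (0.0, 5.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)

x_T, s_T = sol.y[:, -1]
T = sol.t[-1]
# Analytic check: x(T) = exp(-p*T), so dx/dp = -T * exp(-p*T).
print(s_T, -T * np.exp(-p * T))   # the two values should agree closely
```

The same sensitivities feed directly into quasi-Newton iterations for the inverse problem, since they provide the derivatives of the model output with respect to the unknown constants.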
Fast and accurate x86 disassembly using a graph convolutional network model
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1779-1792
Disassembly of stripped x86 binaries is an important yet non-trivial task. Disassembly is difficult to perform correctly without debug information, especially on the x86 architecture, which has variable-sized instructions interleaved with data. Moreover, the presence of indirect jumps in binary code adds another layer of complexity: indirect jumps impede the ability of recursive traversal, a common disassembly technique, to identify all instructions in the code. Many tools, including commercial ones such as IDA Pro, struggle with accurate x86 disassembly. As such, there has been some interest in developing a better solution using machine learning (ML) techniques. ML can potentially capture underlying compiler-independent patterns inherent in compiler-generated assembly. Researchers in this area have shown that ML approaches can outperform the classical tools. They can also be less time-consuming to develop than manual heuristics, shifting most of the burden onto collecting a big representative dataset of executables with debug information. Following this line of work, we propose an improvement of an existing RGCN-based architecture, which builds a control flow graph on a superset disassembly. The enhancement comes from augmenting the graph with data flow information: in particular, we add Jump Control Flow and Register Dependency edges to the embedding, inspired by Probabilistic Disassembly. We also create an open-source x86 instruction identification dataset based on a combination of the ByteWeight dataset and a selection of open-source Debian packages. Compared to IDA Pro, a state-of-the-art commercial tool, our approach yields better accuracy while maintaining great performance on our benchmarks. It also fares well against existing machine learning approaches such as DeepDi.
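As a rough illustration of the superset-disassembly stage on which such graphs are built (not the paper's RGCN pipeline), the sketch below decodes a candidate instruction at every byte offset with Capstone and records fall-through and direct-jump edges; the helper name and toy byte string are invented for the demo.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Sketch of superset disassembly (graph-building stage only, not the paper's
# full pipeline): decode a candidate instruction at every byte offset, then
# record fall-through and direct-jump edges between offsets.

md = Cs(CS_ARCH_X86, CS_MODE_64)

def superset_disassemble(code: bytes, base: int = 0):
    nodes, edges = {}, []
    for off in range(len(code)):
        insn = next(md.disasm(code[off:off + 15], base + off), None)
        if insn is None:
            continue  # no valid decoding starts at this offset
        nodes[insn.address] = f"{insn.mnemonic} {insn.op_str}".strip()
        # Fall-through edge to the next sequential instruction candidate.
        edges.append((insn.address, insn.address + insn.size, "fallthrough"))
        # Direct-jump edge (a crude stand-in for Jump Control Flow edges).
        if insn.mnemonic.startswith("j") and insn.op_str.startswith("0x"):
            edges.append((insn.address, int(insn.op_str, 16), "jump"))
    return nodes, edges

# Toy example: xor eax, eax; je +2; nop; ret
nodes, edges = superset_disassemble(bytes.fromhex("31c0740290c3"))
for src, dst, kind in edges:
    print(hex(src), "->", hex(dst), kind)
```

A GNN classifier over such a graph then has to decide, per offset, which candidate decodings correspond to real instructions; the added data-flow edges give it more context for that decision.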
On Tollmien – Schlichting instability in numerical solutions of the Navier – Stokes equations obtained with 16th-order multioperators-based scheme
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 953-967
The paper presents the results of applying a scheme of very high accuracy and resolution to obtain numerical solutions of the compressible Navier – Stokes equations describing the onset and development of instability of a two-dimensional laminar boundary layer on a flat plate. A peculiarity of these studies is the absence of the commonly used artificial exciters of instability in the direct numerical simulation. The multioperator scheme used made it possible to observe the subtle effects of the birth of unstable modes and the complex nature of their development, presumably thanks to its small approximation errors. A brief description of the scheme design and its main properties is given. The formulation of the problem and the method of obtaining initial data are described, which makes it possible to observe the established non-stationary regime fairly quickly. A technique is presented that allows detecting flow fluctuations with amplitudes many orders of magnitude smaller than the mean flow values. A time-dependent picture of the appearance of packets of Tollmien – Schlichting waves of varying intensity in the vicinity of the leading edge of the plate and of their downstream propagation is presented. The presented amplitude spectra, with peak values broadening in the downstream regions, indicate the excitation of new unstable modes different from those arising near the leading edge. The analysis of the evolution of the instability waves in time and space showed agreement with the main conclusions of the linear theory. The numerical solutions obtained seem to describe, for the first time, the complete scenario of the possible development of Tollmien – Schlichting instability, which often plays an essential role at the initial stage of the laminar-turbulent transition. They open up the possibility of full-scale numerical modeling of this process, which is extremely important for practice, with a similar study of the spatial boundary layer.
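As a generic illustration of how fluctuations far below the mean flow can be detected in a probe signal (not the paper's actual post-processing), the following sketch demeans a signal and computes a one-sided amplitude spectrum; all signal parameters are made up.

```python
import numpy as np

# Generic illustration (not the paper's code): a probe signal u(t) carrying a
# tiny oscillation on an O(1) mean is demeaned and Fourier-analyzed. All
# signal parameters are invented for the demo.

dt = 1e-3
t = np.arange(0, 2.0, dt)
u = 1.0 + 1e-8 * np.sin(2 * np.pi * 120.0 * t)   # mean flow + weak TS-like wave

fluct = u - u.mean()                                  # keep only fluctuations
spec = np.abs(np.fft.rfft(fluct)) * 2 / len(fluct)    # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(fluct), dt)

peak = freqs[np.argmax(spec)]
print(f"dominant frequency: {peak:.1f} Hz, amplitude ~ {spec.max():.2e}")
# Recovers a frequency near 120 Hz and an amplitude near 1e-8, i.e.
# fluctuations many orders of magnitude below the mean value.
```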
Retail forecasting on high-frequency depersonalized data
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1713-1734
Technological development leads to the emergence of data that is highly detailed in time and space, which expands the possibilities of analysis, allowing us to consider consumer decisions and the competitive behavior of enterprises in all their diversity, taking into account the context of the territory and the characteristics of time periods. Despite the promise of such studies, they are currently rare in the scientific literature, due to the range of problems whose solution is considered in this paper. The article draws attention to the complexity of analyzing depersonalized high-frequency data and to the possibility of modeling consumption changes in time and space on their basis. The features of this new type of data are considered using real depersonalized data received from the fiscal data operator “First OFD” (JSC “Energy Systems and Communications”). It is shown that, along with the spectrum of problems inherent in high-frequency data, there are drawbacks associated with the process of data generation on the sellers’ side, which requires wider use of data mining tools. A series of statistical tests were carried out on the data, including a unit root test, a test for unobserved individual effects, and tests for serial correlation and for cross-sectional dependence in panels. The presence of spatial autocorrelation in the data was tested using modified Lagrange multiplier tests. The tests showed a consistent correlation and spatial dependence of the data, which make it expedient to apply panel and spatial analysis methods to the high-frequency data accumulated by fiscal operators. The constructed models made it possible to substantiate the spatial relationship of sales growth and its dependence on the day of the week. The limitation on increasing the predictive ability of the constructed models, and on extending them with explanatory factors, was the lack of openly accessible statistics grouped with the required detail in time and space, which makes the formation of high-frequency geographically structured databases a relevant task.
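As a simpler stand-in for the spatial diagnostics mentioned above (the paper uses modified Lagrange multiplier tests, not this statistic), here is a sketch of Moran's I, a common measure of spatial autocorrelation, on toy data.

```python
import numpy as np

# Moran's I as an illustration of testing for spatial autocorrelation; the
# paper itself uses modified Lagrange multiplier tests, for which this is
# only a related, simpler stand-in. W and the sales data below are toy values.

def morans_i(y: np.ndarray, W: np.ndarray) -> float:
    # I = (n / S0) * (z' W z) / (z' z), where z = y - mean(y), S0 = sum(W)
    z = y - y.mean()
    return len(y) / W.sum() * (z @ W @ z) / (z @ z)

# Toy example: 4 territories on a line, neighbors share an edge.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
sales_growth = np.array([2.0, 2.2, 0.9, 1.0])   # similar values cluster

print(f"Moran's I = {morans_i(sales_growth, W):.3f}")  # > 0: positive spatial autocorrelation
```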
Decomposition of the modeling task of some objects of archeological research for processing in a distributed computer system
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 533-537
Although each task of recreating artifacts is truly unique, the modeling process for façades, foundations and building elements can be parametrized. This paper focuses on a complex of existing programming libraries and solutions that need to be united into a single computer system to solve such a task. An algorithm for generating a 3D filling of the objects under reconstruction is presented. The solution architecture necessary for adapting the system to a cloud environment is studied.
Regularization and acceleration of Gauss – Newton method
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1829-1840
We propose a family of Gauss – Newton methods for solving optimization problems and systems of nonlinear equations, based on the ideas of using an upper estimate of the norm of the residual of the system of nonlinear equations and quadratic regularization. The paper develops the «Three Squares Method» scheme by adding a momentum term to the update rule of the sought parameters. The resulting scheme has several remarkable properties. First, the paper algorithmically describes a whole parametric family of methods that minimize functionals of a special kind: compositions of the residual of a nonlinear equation and a unimodal functional. Such a functional, entirely consistent with the «gray box» paradigm of problem description, covers a large number of solvable problems related to machine learning applications and regression problems. Second, the obtained family of methods is described as a generalization of several forms of the Levenberg – Marquardt algorithm and allows implementation in non-Euclidean spaces as well. The algorithm describing the parametric family of Gauss – Newton methods uses an iterative procedure that performs an inexact parametrized proximal mapping and a shift using a momentum term. The paper contains a detailed analysis of the efficiency of the proposed family of Gauss – Newton methods; the derived estimates take into account the number of external iterations of the algorithm for solving the main problem, as well as the accuracy and computational complexity of the local model representation and oracle computation. Sublinear and linear convergence conditions based on the Polyak – Łojasiewicz inequality are derived for the family of methods; in both convergence regimes, the Lipschitz property of the residual of the nonlinear system of equations is assumed locally. In addition to the theoretical analysis of the scheme, the paper studies the issues of its practical implementation. In particular, for the suboptimal step, schemes for efficiently computing an approximation of the best step are given, which improves the convergence of the method in practice compared to the original «Three Squares Method». The proposed scheme combines several existing modifications of the Gauss – Newton method that are frequently used in practice; in addition, the paper proposes a monotone momentum modification of the family of developed methods, which does not slow down the search for a solution in the worst case and in practice demonstrates improved convergence.
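As a rough sketch of the kind of scheme discussed, and not the paper's «Three Squares Method» itself, the following combines a regularized Gauss – Newton (Levenberg – Marquardt-style) step with a monotone momentum term on a toy system; the residual, Jacobian, and all constants are illustrative choices.

```python
import numpy as np

# A generic Levenberg - Marquardt-style iteration with a monotone momentum
# term, as a rough illustration only. The toy system, its Jacobian, and all
# constants are invented for the demo.

def residual(x):
    # Toy nonlinear system F(x) = 0 with a root at (1, 1).
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

def lm_momentum(x0, lam=1e-2, beta=0.5, iters=30):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        F, J = residual(x), jacobian(x)
        # Regularized Gauss - Newton step: (J^T J + lam*I) d = -J^T F
        d = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ F)
        trial = beta * v + d
        # Monotone momentum: keep the momentum-augmented step only if it is
        # no worse than the plain regularized step.
        if np.linalg.norm(residual(x + trial)) <= np.linalg.norm(residual(x + d)):
            v = trial
        else:
            v = d
        x = x + v
    return x

print(lm_momentum([2.0, 0.5]))   # approaches the root (1, 1)
```

The acceptance test mirrors the "monotone momentum" idea: in the worst case the iteration falls back to the plain regularized step, so momentum never slows the search.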
Indexed in Scopus
The full-text version of the journal is also available on the website of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"