Search results for 'computer models':
Articles found: 260
  1. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that communicates directly with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy. This policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
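
    As a rough illustration of the two-point trick mentioned above, the sketch below builds a randomized gradient estimate from zero-order oracle calls only (a minimal sketch; the function names, the sphere-direction sampling and the smoothing parameter tau are illustrative assumptions, not the authors' algorithm).

```python
import numpy as np

def two_point_gradient(f, x, tau=1e-3, rng=None):
    """Random two-point gradient approximation of f at x from zero-order calls.

    In expectation this matches a stochastic gradient of the smoothed function
    f_tau(x) = E_e[f(x + tau * e)], with e uniform on the unit sphere.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)                      # random direction on the unit sphere
    return d * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

# toy usage on a tiny saddle-type function f(x, y) = x**2 - y**2
g = two_point_gradient(lambda z: z[0] ** 2 - z[1] ** 2, np.array([1.0, 0.5]))
```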

  2. Golubev V.I., Shevchenko A.V., Petrov I.B.
    Raising convergence order of grid-characteristic schemes for 2D linear elasticity problems using operator splitting
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 899-910

    The grid-characteristic method is successfully used for solving hyperbolic systems of partial differential equations (for example, transport, acoustic or elastic equations). It allows one to correctly construct algorithms at contact boundaries and at the boundaries of the integration domain, to take into account, to a certain extent, the physics of the problem (propagation of discontinuities along characteristic curves), and it has the property of monotonicity, which is important for the problems considered. In two-dimensional and three-dimensional problems the method makes use of a coordinate splitting technique, which enables the original equations to be solved by consecutively solving several one-dimensional ones. It is common to use one-dimensional schemes of up to third order with simple splitting techniques, which do not allow the convergence order in time to exceed two. Significant progress has been made in operator splitting theory, and the existence of higher-order splitting schemes has been proved. Their peculiarity is the need to perform a step backward in time, which gives rise to difficulties, for example, for parabolic problems.
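
    As a standard illustration of this point (not necessarily the exact composition used by the authors): for an evolution equation $u_t = (A + B)u$, the symmetric (Strang) splitting is only second-order accurate in time, and a classical fourth-order composition of Strang steps necessarily contains a negative substep:

    $$ S_2(\Delta t) = e^{A\Delta t/2}\, e^{B\Delta t}\, e^{A\Delta t/2}, \qquad S_4(\Delta t) = S_2(\gamma_1\Delta t)\, S_2(\gamma_2\Delta t)\, S_2(\gamma_1\Delta t), $$

    $$ \gamma_1 = \frac{1}{2 - 2^{1/3}}, \qquad \gamma_2 = -\frac{2^{1/3}}{2 - 2^{1/3}} < 0, \qquad 2\gamma_1 + \gamma_2 = 1. $$

    The negative coefficient $\gamma_2$ is the "step in the opposite direction in time" referred to above.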

    In this work coordinate splittings of the third and fourth order were used for the two-dimensional hyperbolic problem of linear elasticity. This made it possible to increase the final convergence order of the computational algorithm. The paper empirically estimates the convergence in the L1 and L∞ norms using analytical solutions of the system with a sufficient degree of smoothness. To obtain objective results, we considered the cases of longitudinal and transverse plane waves propagating both along the diagonal of the computational cell and not along it. Numerical experiments demonstrated the improved accuracy and convergence order of the constructed schemes. These improvements are achieved at the cost of a three- or fourfold increase in computational time (for the third- and fourth-order schemes, respectively) with no additional memory requirements. The proposed improvement of the computational algorithm preserves the simplicity of its parallel implementation based on spatial decomposition of the computational grid.

  3. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping the set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. The usage of embeddings makes programs easier to compare because it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass, depending on the load offset delta between the current instruction and the previous one), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; it is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, i.e., sets of programs with the same CFGs but with different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which supports the heuristic criterion. The process of generating such synthetic tests is also considered. Moreover, the spread of the performance metric over the programs of such a test is proposed as a quantity to be improved by exploring further test generators.
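
    A minimal sketch of the final correlation check described above, assuming precomputed IR2Vec-style embeddings and measured D1 miss ratios (the function name and the scikit-learn/SciPy implementation choices are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.stats import pearsonr

def check_criterion(ir_vectors: np.ndarray, miss_ratio: np.ndarray, seed: int = 0):
    # project the high-dimensional IR embeddings into 2D
    emb2d = TSNE(n_components=2, random_state=seed).fit_transform(ir_vectors)
    worst = emb2d[np.argmax(miss_ratio)]            # embedding of the worst program
    dist = np.linalg.norm(emb2d - worst, axis=1)    # distances to the worst embedding
    r, p = pearsonr(dist, miss_ratio)               # the criterion expects r < 0
    return r, p
```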

  4. Aksenov A.A., Zhluktov S.V., Kalugina M.D., Kashirin V.S., Lobanov A.I., Shaurman D.V.
    Reduced mathematical model of blood coagulation taking into account thrombin activity switching as a basis for estimation of hemodynamic effects and its implementation in FlowVision package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1039-1067

    The possibility of numerical 3D simulation of thrombus formation is considered.

    The detailed mathematical models developed so far for describing the formation of thrombi and clots include a great number of equations. Implemented in a CFD code, such detailed models require substantial computational resources for simulating thrombus growth in a blood flow. A reasonable alternative is to use reduced mathematical models. Two models based on a reduced mathematical model of thrombin generation are described in this paper.

    The first model describes the growth of a thrombus in a large vessel (artery). Arterial flows are essentially unsteady and are characterized by pulse waves. The blood velocity here is high compared to that in the venous tree. The reduced model of thrombin generation and thrombus growth in an artery is relatively simple: the processes accompanying thrombin generation in arteries are well described by the zero-order approximation.

    A venous flow is characterized by lower velocities, lower gradients, and lower shear stresses. In order to simulate thrombin generation in veins, a more complex system of equations has to be solved: the model must allow for all the nonlinear terms in the right-hand sides of the equations.
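
    Schematically (an illustrative transport-reaction form, not the actual FlowVision model), the thrombin concentration $T$ in both cases can be thought of as obeying

    $$ \frac{\partial T}{\partial t} + (\mathbf{u}\cdot\nabla)T = D\,\Delta T + R, $$

    where in the arterial (zero-order) reduction the production term $R$ is approximated by a constant rate within the activation zone, while the venous model retains the full nonlinear kinetics $R = R(T,\ldots)$ on the right-hand side.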

    The simulation is carried out in the industrial software FlowVision.

    The performed numerical investigations have shown the suitability of the reduced models for simulating thrombin generation and thrombus growth. The calculations demonstrate the formation of a recirculation zone behind a thrombus; the concentration of thrombin and the mass fraction of activated platelets are maximal there. The formation of such a zone causes slow growth of the thrombus downstream. At the upwind side of the thrombus, the concentration of activated platelets is low, and the upstream thrombus growth is negligible.

    When the variation of the blood flow during a heart cycle is taken into account, the thrombus grows substantially more slowly than under the assumption of constant (averaged over a heart cycle) conditions. Thrombin and activated platelets produced during diastole are quickly carried away by the blood flow during systole. Accounting for the non-Newtonian rheology of blood noticeably affects the results.

  5. Poddubny V.V., Romanovich O.V.
    Mathematical modeling of the optimal market of competing goods in conditions of deliveries lags
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 431-450

    A nonlinear constrained (with inequality-type constraints) dynamic mathematical model of a free, perfectly competitive market of many goods is proposed for conditions of time lags in the delivery of goods and a linear dependence of the demand vector on the price vector. The problem of finding the prices and deliveries of goods to the market that are optimal from the seller's profit standpoint is formulated. It is shown that the maximum of the seller's total profit is expressed by a continuous piecewise smooth function of the vector of delivery volumes, with jumps of the derivative at the borders of the zones of deficit, overstocking, and dynamic balance of demand and supply for each good. Using the predicate function technique, a computational algorithm for optimizing the deliveries of goods to the market is constructed.
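
    A single-good illustration of why the profit is continuous and piecewise smooth in the delivery volume (the notation $q$, $D(p)$, $c$ is introduced here for illustration and is not taken from the paper): with price $p$, unit cost $c$ and demand $D(p)$, the quantity sold is $\min\{q, D(p)\}$, so

    $$ \Pi(q) = p\,\min\{q, D(p)\} - c\,q = \begin{cases} (p - c)\,q, & q \le D(p) \ \text{(deficit or balance)},\\ p\,D(p) - c\,q, & q > D(p) \ \text{(overstocking)}, \end{cases} $$

    and the derivative jumps from $p - c$ to $-c$ at the balance point $q = D(p)$.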

    Views (last year): 1. Citations: 3 (RSCI).
  6. Zhmurov A.A., Alekseenko A.E., Barsegov V.A., Kononova O.G., Kholodov Y.A.
    Phase transition from α-helices to β-sheets in supercoils of fibrillar proteins
    Computer Research and Modeling, 2013, v. 5, no. 4, pp. 705-725

    The transition from α-helices to β-strands under external mechanical force in a fibrin molecule containing coiled-coils is studied, and the free energy landscape is resolved. Detailed theoretical modeling of each stage of the coiled-coil fragment pulling process was performed. The plots of force (F) as a function of molecule extension (X) for two symmetrical fibrin coiled-coils (each ∼17 nm in length) show three distinct modes of mechanical behaviour: (1) a linear (elastic) mode, when the coiled-coils behave like entropic springs (F<100−125 pN and X<7−8 nm); (2) a viscous (plastic) mode, when the molecular resistance force does not increase with elongation (F≈150 pN and X≈10−35 nm); and (3) a nonlinear mode (F>175−200 pN and X>40−50 nm). In the linear mode the coiled-coils unwind by an angle of 2π radians, but no structural transition occurs. The viscous mode is characterized by the phase transition from the triple α-helices to a three-stranded parallel β-sheet. The critical tension of α-helices is 0.25 nm per turn, and the characteristic energy change is equal to 4.9 kcal/mol. Changes in internal energy Δu, entropy Δs and force capacity cf per helical turn for the phase transition were also computed. The observed dynamic behavior of α-helices and the phase transition from α-helices to β-sheets under tension might represent a universal mechanism of regulation of fibrillar protein structures subject to mechanical stresses due to biological forces.
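
    For reference, one common entropic-spring force-extension model for the initial linear (elastic) regime is the worm-like chain interpolation formula (given here only as a standard illustration; the abstract does not specify which elasticity model was fitted):

    $$ F(x) = \frac{k_B T}{\ell_p}\left[\frac{1}{4\,(1 - x/L_c)^2} - \frac{1}{4} + \frac{x}{L_c}\right], $$

    where $\ell_p$ is the persistence length and $L_c$ the contour length of the chain.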

    Views (last year): 6. Citations: 1 (RSCI).
  7. Govorkov D.A., Novikov V.P., Solovyev I.G., Tsibulsky V.R.
    Interval analysis of vegetation cover dynamics
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1191-1205

    Developing a previously obtained result on modeling the dynamics of vegetation cover driven by variations in the temperature background, a new scheme for the interval analysis of the dynamics of floristic images of formations is presented for the case when the response-rate parameter of the dynamics model of each counted plant species is given by an interval of scatter of its possible values. A detailed specification of the functional parameters of biodiversity macromodels, desirable in fundamental research that accounts for the essential causes of the observed evolutionary processes, may turn out to be a problematic task. The use of more reliable interval estimates of the variability of the functional parameters “bypasses” the problem of uncertainty in the primary assessment of the evolution of the phyto-resource potential of the developed controlled territories. The solutions obtained preserve not only a qualitative picture of the dynamics of species diversity, but also give a rigorous (within the framework of the initial assumptions) quantitative assessment of the degree of presence of each plant species. The practical significance of two-sided estimation schemes based on constructing equations for the upper and lower boundaries of the scatter of solution trajectories depends on the conditions and on the measure of proportional correspondence between the intervals of scatter of the initial parameters and the intervals of scatter of the solutions. For dynamic systems, the desired proportionality is not always ensured. The given examples demonstrate an acceptable accuracy of interval estimation of the evolutionary processes. It is important to note that the constructed estimating equations generate vanishing intervals of scatter of the solutions for quasi-constant temperature perturbations of the system; in other words, the trajectories of stationary temperature states of the vegetation cover are not coarsened by the proposed interval estimation scheme. The rigor of the result of interval estimation of the species composition of the vegetation cover of formations can become a determining factor when choosing a method for analyzing the dynamics of species diversity and the plant potential of territorial systems of resource-ecological monitoring. The possibilities of the proposed approach are illustrated by geoinformation images from the computational analysis of the dynamics of the vegetation cover of the Yamal Peninsula and by graphs of the retrospective analysis of the floristic variability of the formations of the landscape-lithological group “Upper”, based on the summer temperature data of the Salehard weather station from 2010 back to 1935. The developed indicators of floristic variability and the given graphs characterize the dynamics of species diversity both on average and individually, in the form of intervals of possible states for each plant species.
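
    A minimal sketch of the two-sided (interval) estimation idea for a single species whose response-rate parameter is known only up to an interval (the scalar relaxation model, the variable names and the explicit Euler stepping are illustrative assumptions, not the authors' formation model):

```python
import numpy as np

def interval_bounds(phi, a_lo, a_hi, x0, dt, n_steps):
    """Bounding trajectories for x' = a * (phi(t) - x) with a in [a_lo, a_hi].

    The right-hand side is monotone in a for a fixed sign of (phi - x), so at
    each step the extreme rate constant yields the extreme trajectory.
    """
    lo = hi = x0
    lower, upper = [lo], [hi]
    for k in range(n_steps):
        f = phi(k * dt)
        # lower bound: slowest relaxation when below the target, fastest when above
        lo += dt * (a_lo if f >= lo else a_hi) * (f - lo)
        # upper bound: fastest relaxation when below the target, slowest when above
        hi += dt * (a_hi if f >= hi else a_lo) * (f - hi)
        lower.append(lo)
        upper.append(hi)
    return np.array(lower), np.array(upper)

# for a quasi-constant forcing phi the two bounds converge to the same value,
# i.e. the interval of scatter of the solutions vanishes, as noted above
low, up = interval_bounds(lambda t: 1.0, a_lo=0.2, a_hi=0.8, x0=0.0, dt=0.1, n_steps=200)
```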

  8. Vassilevski Y.V., Simakov S.S., Gamilov T.M., Salamatova V.Yu., Dobroserdova T.K., Kopytov G.V., Bogdanov O.N., Danilov A.A., Dergachev M.A., Dobrovolskii D.D., Kosukhin O.N., Larina E.V., Meleshkina A.V., Mychka E.Yu., Kharin V.Yu., Chesnokova K.V., Shipilov A.A.
    Personalization of mathematical models in cardiology: obstacles and perspectives
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930

    Most biomechanical tasks of interest to clinicians can be solved only using personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, based on clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require model validation on clinical cases, as well as speed and automation of the entire computational technological chain, from processing input data to obtaining a result. Limitations on the simulation time, determined by the time of making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models or machine learning tools.

    Personalization of models requires patient-oriented parameters, personalized geometry of a computational domain and generation of a computational mesh. Model parameters are estimated by direct measurements, or methods of solving inverse problems, or methods of machine learning. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take into account the patient’s characteristics. Methods for setting personalized boundary conditions significantly depend on the clinical setting of the problem and clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational grid, as a rule, takes a lot of time and effort due to manual or semi-automatic operations. Development of automated methods for setting personalized boundary conditions and segmentation of medical images with the subsequent construction of a computational grid is the key to the widespread use of mathematical modeling in clinical practice.

    The aim of this work is to review our solutions for personalization of mathematical models within the framework of three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of the global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.

  9. The article discusses the problem of the influence of the research goals on the structure of a multivariate regression model (in particular, on the implementation of the procedure for reducing the dimension of the model). It is shown how bringing the specification of the multiple regression model in line with the research objectives affects the choice of modeling methods. Two schemes for constructing a model are compared: the first does not take into account the typology of the primary predictors and the nature of their influence on the response characteristics, while the second involves a stage of preliminary division of the initial predictors into groups in accordance with the objectives of the study. Using the example of analyzing the causes of burnout of creative workers, the importance of the stage of qualitative analysis and systematization of a priori selected factors is shown; this stage is implemented not by computational means but by drawing on the knowledge and experience of specialists in the subject area under study. The presented example of determining the specification of the regression model combines formalized mathematical and statistical procedures with a preceding stage of classification of the primary factors. The presence of this stage makes it possible to explain the scheme of managing (corrective) actions (softening the leadership style and increasing approval lead to a decrease in the manifestations of anxiety and stress, which, in turn, reduces the severity of the emotional exhaustion of the team members). Preclassification also avoids combining controlled and uncontrolled, regulatory and response factors in one principal component, which could worsen the interpretability of the synthesized predictors. Using a specific problem as an example, it is shown that the selection of regressor factors is a process that requires an individual solution. In the case under consideration, the following were used in sequence: systematization of features, correlation analysis, principal component analysis, and regression analysis. The first three methods made it possible to significantly reduce the dimension of the problem without compromising the goal for which the task was posed: significant measures of controlling influence on the team were identified that allow the degree of emotional burnout of its participants to be reduced. A sketch of the second scheme is given below.
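
    A minimal sketch of the second scheme described above, in which each pre-classified group of predictors is reduced separately before the regression step (the group names, component counts and scikit-learn implementation are illustrative assumptions, not the authors' computations):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def grouped_pca_regression(X_groups, y, n_components=1):
    """X_groups: dict {group name: feature matrix}, e.g. 'leadership style',
    'anxiety/stress'; y: burnout score. Each group is reduced on its own, so
    controlled and uncontrolled factors never mix in one principal component."""
    parts = [PCA(n_components=n_components).fit_transform(X)
             for X in X_groups.values()]
    Z = np.hstack(parts)                      # one synthesized predictor per group
    return LinearRegression().fit(Z, y)
```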

  10. Pogorelova E.A., Lobanov A.I.
    High Performance Computing for Blood Modeling
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 917-941

    Methods for modeling blood flow and its rheological properties are reviewed. Blood is considered as a particle suspension. The methods include the boundary integral equation method (BIEM), the lattice Boltzmann method (LBM), finite elements on a dynamic mesh, dissipative particle dynamics (DPD), and agent-based modeling. The analysis of these methods’ applications on high-performance systems with various architectures is presented.

    Views (last year): 2. Citations: 3 (RSCI).