Search results for 'model reduction':
Articles found: 33
  1. Podlipnova I.V., Dorn Y.V., Sklonin I.A.
    Cloud interpretation of the entropy model for calculating the trip matrix
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103

    As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute. Transport modeling packages are created for this purpose. Such packages usually contain a set of convex optimization problems whose iterative solution leads to the desired equilibrium distribution of flows along the paths. One direction for the development of transport modeling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, and the specifics of the personal and public modes of transport that agents can use. Another important direction is improving the efficiency of the calculations: because of the large dimension of modern transport networks, the numerical search for an equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process only makes this worse. One approach that reduces the number of calculations is the construction of consistent models, which make it possible to combine the blocks of a four-stage model into a single optimization problem. This eliminates the iterative running of blocks, moving from solving a separate optimization problem at each stage to one general problem. Earlier work has proven that such approaches provide equivalent solutions. However, the validity and interpretability of these methods deserve attention. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice, for the generalized case when there are different layers of demand, types of agents, and classes of vehicles in the transport network. The article provides possible interpretations for the gauge parameters used in the problem, as well as for the dual factors associated with the balance constraints. The authors also show that the considered problem can be combined with a network-loading block into a single optimization problem.
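
    A minimal sketch of the classical entropy trip-distribution model that underlies this line of work (the paper's generalized multi-layer formulation is richer): the trip matrix is recovered by balancing the rows and columns of a deterrence kernel, and the balancing multipliers are precisely the dual factors of the balance constraints. The cost matrix and trip totals below are toy data.

```python
import numpy as np

def entropy_trip_matrix(origins, destinations, cost, beta=1.0, tol=1e-10):
    """Classical entropy model of trip distribution (not the authors'
    generalized formulation): maximize trip-matrix entropy subject to
    row sums = origins and column sums = destinations, solved by
    Sinkhorn-style balancing. The multipliers a, b are the dual factors
    of the balance constraints."""
    K = np.exp(-beta * cost)                  # deterrence kernel
    a, b = np.ones(len(origins)), np.ones(len(destinations))
    for _ in range(10000):
        a = origins / (K @ b)                 # enforce row sums
        b_new = destinations / (K.T @ a)      # enforce column sums
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return a[:, None] * K * b[None, :]        # T_ij = a_i * K_ij * b_j

# toy example: 3 origins and 3 destinations with equal totals
cost = np.array([[1.0, 2.0, 3.0], [2.0, 1.0, 2.0], [3.0, 2.0, 1.0]])
T = entropy_trip_matrix(np.array([10.0, 20.0, 30.0]),
                        np.array([15.0, 25.0, 20.0]), cost)
print(T.round(2), T.sum())                    # row/column sums match the totals
```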

  2. Potapov I.I., Potapov D.I.
    Model of steady river flow in the cross section of a curved channel
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1163-1178

    Modeling of channel processes in the study of coastal channel deformations requires calculating hydrodynamic flow parameters that account for the secondary transverse currents formed where the channel curves. Three-dimensional modeling of such processes is currently possible only for small model channels; for real river flows, reduced-dimension models are needed. The reduction of the problem from a three-dimensional model of river flow to a two-dimensional flow model in the cross-section assumes that the flow under consideration is quasi-stationary and that the hypotheses about the asymptotic behavior of the flow along the streamwise coordinate of the cross-section hold for it. Taking these restrictions into account, a mathematical model of the stationary movement of a turbulent calm river flow in a channel cross-section is formulated. The problem is posed in the mixed "vortex – stream function" velocity formulation. As additional conditions for the problem reduction, boundary conditions on the free surface of the flow must be specified for the velocity field in the directions normal and tangential to the cross-section axis. It is assumed that the values of these velocities should be determined from the solution of auxiliary problems or obtained from field or experimental measurements.

    To solve the formulated problem, the finite element method in the Petrov – Galerkin formulation is used. A discrete analogue of the problem is obtained, and an algorithm for solving it is proposed. Numerical studies have shown that, in general, the results obtained are in good agreement with known experimental data. The authors attribute the observed errors to the need to determine the circulating velocity field in the cross-section of the flow more accurately, by selecting and calibrating a more appropriate model for the turbulent viscosity and the boundary conditions on the free boundary of the cross-section.
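
    For reference, one standard form of the mixed "vortex – stream function" formulation in the cross-section plane $(y, z)$ is sketched below; the exact equations, source terms, and closures of the paper may differ.

$$\omega = \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z}, \qquad v = \frac{\partial \psi}{\partial z}, \qquad w = -\frac{\partial \psi}{\partial y}, \qquad \Delta\psi = -\omega,$$

$$v\,\frac{\partial\omega}{\partial y} + w\,\frac{\partial\omega}{\partial z} = \frac{\partial}{\partial y}\!\left(\nu_t\,\frac{\partial\omega}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(\nu_t\,\frac{\partial\omega}{\partial z}\right) + S_\omega,$$

    where $(v, w)$ are the transverse velocities, $\nu_t$ is the turbulent viscosity, and $S_\omega$ collects the source terms driven by the longitudinal flow; the free-surface velocity conditions mentioned above close the problem.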

  3. Pavlov E.A., Osipov G.V.
    Synchronization and chaos in networks of coupled maps in application to modeling of cardiac dynamics
    Computer Research and Modeling, 2011, v. 3, no. 4, pp. 439-453

    The dynamics of ensembles of coupled elements are investigated in the context of describing spatio-temporal processes in the myocardium. The basic element is a map-based model constructed by simplification and reduction of the Luo – Rudy model. In particular, the model's ability to replicate different regimes of cardiac activity is shown, including excitable and oscillatory regimes. The dynamics of 1D and 2D lattices of coupled oscillatory elements with a random distribution of individual frequencies are considered. Effects of cluster synchronization and the transition to global synchronization with increasing coupling strength are discussed. Impulse propagation in a chain of excitable cells has been observed. An analysis of a 2D lattice of excitable elements with target and spiral waves has been carried out. The characteristics of the spiral wave have been analyzed as functions of the individual parameters of the map and the coupling strength between elements of the lattice. A study of mixed ensembles consisting of excitable and oscillatory elements with a gradient change of properties has been carried out, including the description of normal and pathological activity of the sinoatrial node.

    Citations: 3 (RSCI).
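
    The lattice setup described above can be illustrated with a generic diffusively coupled map lattice. The local map below is a stand-in (a discretized FitzHugh – Nagumo-like map with randomly distributed individual parameters), not the reduced Luo – Rudy map of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, eps = 100, 1000, 0.1            # chain length, iterations, coupling strength

def local_map(x, y, a):
    """Generic two-variable excitable/oscillatory map (a stand-in, not the
    reduced Luo - Rudy map); the parameter a selects the regime."""
    x_new = x + 0.5 * (x - x**3 / 3 - y)  # fast variable
    y_new = y + 0.02 * (x + a)            # slow recovery variable
    return x_new, y_new

a = rng.uniform(0.4, 0.6, N)              # random individual parameters
x, y = 0.1 * rng.standard_normal(N), np.zeros(N)

for _ in range(steps):
    x_loc, y = local_map(x, y, a)
    # diffusive nearest-neighbour coupling with no-flux boundaries
    x_pad = np.pad(x_loc, 1, mode="edge")
    x = x_loc + eps * (x_pad[:-2] - 2 * x_loc + x_pad[2:])

print(x[:5])                              # final state of the first cells
```
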
  4. Ameenuddin M., Anand M.
    CFD analysis of hemodynamics in idealized abdominal aorta-renal artery junction: preliminary study to locate atherosclerotic plaque
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 695-706

    Atherosclerotic diseases such as carotid artery disease (CAD) and chronic kidney disease (CKD) are major causes of death worldwide. The onset of these atherosclerotic diseases in the arteries is governed by complex blood flow dynamics and hemodynamic parameters. Atherosclerosis in the renal arteries reduces arterial efficiency, which ultimately leads to renovascular hypertension. This work attempts to identify the localization of atherosclerotic plaque in the human abdominal aorta – renal artery junction using computational fluid dynamics (CFD).

    The atherosclerosis-prone regions in an idealized human abdominal aorta – renal artery junction are identified by calculating relevant hemodynamic indicators from computational simulations using the rheologically accurate shear-thinning Yeleswarapu model for human blood. Blood flow is numerically simulated in a 3-D model of the artery junction using ANSYS FLUENT v18.2.

    The hemodynamic indicators calculated are the average wall shear stress (AWSS), the oscillatory shear index (OSI), and the relative residence time (RRT). Simulations of pulsatile flow (f = 1.25 Hz, Re = 1000) show that low AWSS and high OSI manifest in the region of the renal artery downstream of the junction and on the infrarenal section of the abdominal aorta lateral to the junction. High RRT, a relative index dependent on AWSS and OSI, is found to overlap with low AWSS and high OSI at the cranial surface of the renal artery proximal to the junction and on the surface of the abdominal aorta lateral to the bifurcation: this indicates that these regions of the junction are prone to atherosclerosis. The results match qualitatively with findings reported in the literature and serve as an initial step in illustrating the utility of CFD for locating atherosclerotic plaque.

    Views (last year): 3.
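
    The three indicators have standard textbook definitions, which the sketch below evaluates for a sampled wall-shear-stress (WSS) vector at a single wall point; the WSS signal here is synthetic, not output from the paper's simulations.

```python
import numpy as np

def hemodynamic_indicators(wss):
    """Textbook definitions of AWSS, OSI and RRT (not code from the paper).
    wss: (n_steps, 3) array - the WSS vector at one wall point sampled
    uniformly over one cardiac cycle."""
    mean_vec = wss.mean(axis=0)                          # time-averaged WSS vector
    awss = np.linalg.norm(wss, axis=1).mean()            # average WSS magnitude
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / awss)  # 0 = unidirectional, 0.5 = fully oscillatory
    rrt = 1.0 / ((1.0 - 2.0 * osi) * awss)               # relative residence time
    return awss, osi, rrt

# toy example: pulsatile WSS at f = 1.25 Hz over one 0.8 s cycle
t = np.linspace(0.0, 0.8, 801)
wss = np.stack([1.0 + np.sin(2 * np.pi * 1.25 * t),
                0.3 * np.cos(2 * np.pi * 1.25 * t),
                np.zeros_like(t)], axis=1)
print(hemodynamic_indicators(wss))
```
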
  5. Minkevich I.G.
    The stoichiometry of metabolic pathways in the dynamics of cellular populations
    Computer Research and Modeling, 2011, v. 3, no. 4, pp. 455-475

    The problem considered is to what extent kinetic models of cellular metabolism fit the matter they describe. Foundations of the stoichiometry of the whole metabolism and of its large regions are stated. A bioenergetic representation of stoichiometry is described, based on a universal unit of the reductivity of a chemical compound, viz. the redoxon. Equations of mass-energy balance (the bioenergetic variant of stoichiometry) are derived for metabolic flows, including those of protons possessing a high electrochemical potential $\mu_{H^+}$ and of high-energy compounds. Interrelations are obtained that determine the biomass yield, the rate of uptake of the energy source for cell growth, and other important physiological quantities as functions of the biochemical characteristics of cellular energetics. The maximum biomass energy yield values have been calculated for different energy sources utilized by cells; these values coincide with those measured experimentally.

    Views (last year): 5. Citations: 1 (RSCI).
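
    The degree-of-reduction bookkeeping behind such balances can be illustrated by the textbook energy-yield relation $\eta = Y\,\sigma_b\gamma_b/(\sigma_s\gamma_s)$ for growth on glucose; the biomass composition and mass yield below are assumed typical values, not figures from the paper.

```python
# Degree-of-reduction bookkeeping in the Minkevich - Eroshin spirit (a textbook
# sketch, not the paper's derivations): gamma counts available electrons per
# carbon atom, and eta is the fraction of substrate "redoxons" kept in biomass.

def degree_of_reduction(c, h, o, n):
    """Electrons per carbon atom for C_c H_h O_o N_n (NH3 as nitrogen source)."""
    return (4 * c + h - 2 * o - 3 * n) / c

gamma_s = degree_of_reduction(6, 12, 6, 0)          # glucose: 4.0
gamma_b = degree_of_reduction(1, 1.8, 0.5, 0.2)     # biomass CH1.8O0.5N0.2: 4.2
sigma_s = 72.0 / 180.0                              # carbon mass fraction of glucose
sigma_b = 12.0 / (12 + 1.8 + 0.5 * 16 + 0.2 * 14)   # carbon mass fraction of biomass

Y = 0.5                                             # assumed mass yield, g biomass / g glucose
eta = Y * (sigma_b * gamma_b) / (sigma_s * gamma_s) # biomass energy yield
print(f"eta = {eta:.2f}")                           # ~0.64 for this assumed Y
```
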
  6. Usanov M.S., Kulberg N.S., Yakovleva T.V., Morozov S.P.
    Determination of CT dose by means of noise analysis
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

    The article deals with the creation of an effective algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in radiometry and radiography relies on tabulated values of X-ray absorption coefficients, while individual dose factors are not taken into account at all, since many studies lack a Dose Report; instead, an average value is used to simplify the statistics. We therefore decided to develop a method for detecting the number of ionizing quanta by analyzing the noise in CT data. As the basis of the algorithm, we used a mathematical model of our own design, built on the Poisson and Gaussian distributions of the logarithmic value. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficient is known from tabulated values. The data were obtained from several CT devices from different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of X-ray quanta emitted per unit time. Taking into account the noise level and the radii of the cylinders, these data were converted to X-ray absorption values, which were then compared with the tabulated values. Applying the algorithm to CT data of various configurations yielded experimental results consistent with the theory and the mathematical model. The results showed good accuracy of the algorithm and of the mathematical apparatus, which demonstrates the reliability of the obtained data. This mathematical model is already used in our own CT noise-reduction software, where it serves as a method for setting a dynamic noise-reduction threshold. At the moment, the algorithm is being adapted to work with real patient CT data.

    Views (last year): 23. Citations: 1 (RSCI).
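
    The core statistical idea can be sketched as follows (an assumed simplification, not the authors' full model): for Poisson photon counts $N$ with mean $N_0$, the delta method gives $\mathrm{Var}(\ln N) \approx 1/N_0$, so the variance of the log-transformed signal in a uniform phantom region estimates the detected quantum count.

```python
import numpy as np

# Simulated check of the noise-to-count relation Var(ln N) ~= 1 / N0;
# the count value is invented for illustration.
rng = np.random.default_rng(1)
N0 = 5000                                  # true mean photon count per ray
counts = rng.poisson(N0, size=100000)      # simulated detector readings
log_signal = np.log(counts)                # CT reconstruction applies a log transform

N0_hat = 1.0 / np.var(log_signal)          # estimated photon count from noise alone
print(f"true N0 = {N0}, estimated N0 = {N0_hat:.0f}")
```
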
  7. Kalitin K.Y., Nevzorov A.A., Spasov A.A., Mukha O.Y.
    Deep learning analysis of intracranial EEG for recognizing drug effects and mechanisms of action
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 755-772

    Predicting novel drug properties is fundamental to polypharmacology, repositioning, and the study of biologically active substances during the preclinical phase. The use of machine learning, including deep learning methods, for the identification of drug – target interactions has gained increasing popularity in recent years.

    The objective of this study was to develop a method for recognizing psychotropic effects and drug mechanisms of action (drug – target interactions) based on an analysis of the bioelectrical activity of the brain using artificial intelligence technologies.

    Intracranial electroencephalographic (EEG) signals from rats were recorded (4 channels at a sampling frequency of 500 Hz) after the administration of psychotropic drugs (gabapentin, diazepam, carbamazepine, pregabalin, eslicarbazepine, phenazepam, arecoline, pentylenetetrazole, picrotoxin, pilocarpine, chloral hydrate). The signals were divided into 2-second epochs, converted into $2000\times 4$ images, and fed into an autoencoder. The output of the bottleneck layer was subjected to classification and clustering using t-SNE, and the distances between the resulting clusters were calculated. As an alternative, an approach based on feature extraction, dimensionality reduction with principal component analysis, and kernel support vector machine (kSVM) classification was used. Models were validated using 5-fold cross-validation.

    The classification accuracy obtained for 11 drugs during cross-validation was $0.580 \pm 0.021$, which is significantly higher than the accuracy of the random classifier $(0.091 \pm 0.045, p < 0.0001)$ and the kSVM $(0.441 \pm 0.035, p < 0.05)$. t-SNE maps were generated from the bottleneck parameters of intracranial EEG signals. The relative proximity of the signal clusters in the parametric space was assessed.

    The present study introduces an original method for biopotential-mediated prediction of effects and mechanism of action (drug – target interaction). This method employs convolutional neural networks in conjunction with a modified selective parameter reduction algorithm. Post-treatment EEGs were compressed into a unified parameter space. Using a neural network classifier and clustering, we were able to recognize the patterns of neuronal response to the administration of various psychotropic drugs.
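
    A sketch of the kSVM baseline alone (the autoencoder branch is omitted) on synthetic stand-in data shaped as in the abstract: 2-second, 4-channel epochs at 500 Hz flattened into vectors, 11 drug classes, 5-fold cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((1100, 2 * 500 * 4))   # epochs x flattened EEG samples (synthetic)
y = rng.integers(0, 11, size=1100)             # drug labels (synthetic)

# PCA for dimensionality reduction, then an RBF-kernel SVM, as in the baseline
clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validation
print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```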

  8. Khruschev S.S., Fursova P.V., Plusnina T.Yu., Riznichenko G.Yu., Rubin A.B.
    Analysis of the rate of electron transport through photosynthetic cytochrome $b_6 f$ complex
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 997-1022

    We consider an approach based on linear algebra methods to analyze the rate of electron transport through the cytochrome $b_6 f$ complex. In the proposed approach, the dependence of the quasi-stationary electron flux through the complex on the degree of reduction of pools of mobile electron carriers is considered a response function characterizing this process. We have developed software in the Python programming language that allows us to construct the master equation for the complex according to the scheme of elementary reactions and calculate quasi-stationary electron transport rates through the complex and the dynamics of their changes during the transition process. The calculations are performed in multithreaded mode, which makes it possible to efficiently use the resources of modern computing systems and to obtain data on the functioning of the complex in a wide range of parameters in a relatively short time. The proposed approach can be easily adapted for the analysis of electron transport in other components of the photosynthetic and respiratory electron-transport chain, as well as other processes in multienzyme complexes containing several reaction centers. Cryo-electron microscopy and redox titration data were used to parameterize the model of cytochrome $b_6 f$ complex. We obtained dependences of the quasi-stationary rate of plastocyanin reduction and plastoquinone oxidation on the degree of reduction of pools of mobile electron carriers and analyzed the dynamics of rate changes in response to changes in the redox state of the plastoquinone pool. The modeling results are in good agreement with the available experimental data.
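
    The linear-algebra approach can be sketched on a toy 3-state cycle (not the actual $b_6 f$ reaction scheme; the rate constants are invented): the stationary distribution of the master equation is obtained from $Kp = 0$ with $\sum_i p_i = 1$, and the quasi-stationary flux is read off a chosen transition.

```python
import numpy as np

rates = {(0, 1): 50.0, (1, 0): 5.0, (1, 2): 30.0,
         (2, 1): 3.0, (2, 0): 20.0, (0, 2): 2.0}   # invented rate constants, 1/s

# assemble the generator of the master equation dp/dt = K p
K = np.zeros((3, 3))
for (i, j), r in rates.items():                    # transition i -> j
    K[j, i] += r
    K[i, i] -= r

# stationary state: K p = 0 together with the normalization sum(p) = 1
A = np.vstack([K, np.ones(3)])
b = np.concatenate([np.zeros(3), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

flux = rates[(1, 2)] * p[1] - rates[(2, 1)] * p[2]  # net flux through 1 -> 2
print(p, flux)
```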

  9. Ignatev N.A., Tuliev U.Y.
    Semantic structuring of text documents based on patterns of natural language entities
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

    The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space of document descriptions and to search for semantically related words by topic. The dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.

    It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, and that a common dictionary is needed to analyze them. Which words belong to this common dictionary is initially unknown. Objects of the two classes are treated as opposed to each other, and quantitative parameters of this opposition are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.

    To calculate the stability, the feature values are divided into non-intersecting intervals whose optimal boundaries are determined by a special criterion. Maximum stability is achieved when each interval contains values of only one of the two classes.

    The composition of features in the sets (word patterns) is formed from a sequence ordered by stability values. Patterns, and the latent features based on them, are formed according to the rules of hierarchical agglomerative grouping.

    The set of latent features is used for cluster analysis of the documents with metric grouping algorithms. The analysis applies a coefficient of content authenticity based on the class membership of the documents. The coefficient is a numerical characteristic of the dominance of class representatives in groups.

    To divide the documents into topics, it is proposed to use the union of groups with respect to their centers. As a pattern for each topic, a sequence of words from the common dictionary ordered by frequency of occurrence is considered.

    The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the common dictionary are formed for four topics.
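
    A generic illustration of the overall pipeline (bag of words, then latent features by agglomerative grouping of words, then metric clustering of documents); the stability-based interval criterion of the paper is not reproduced, and the documents are placeholder strings.

```python
from sklearn.cluster import FeatureAgglomeration, KMeans
from sklearn.feature_extraction.text import CountVectorizer

docs = ["entropy model of transport flows", "trip matrix and modal choice",
        "turbulent river flow in a channel", "vortex and stream function model"]

X = CountVectorizer().fit_transform(docs).toarray()           # bag-of-words matrix
latent = FeatureAgglomeration(n_clusters=3).fit_transform(X)  # word patterns -> latent features
labels = KMeans(n_clusters=2, n_init=10).fit_predict(latent)  # group documents by topic
print(labels)
```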

  10. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping the set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered, in order to create an empirical framework for static performance prediction on top of the LLVM compiler infrastructure. Using embeddings makes programs easier to compare, since it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass, depending on the load offset delta of the current instruction relative to the previous one), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for judging which programs have a higher or lower cache miss ratio is given, based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, i.e., sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which supports the heuristic criterion. The process of generating such synthetic tests is also described. Moreover, the spread of the performance metric over the programs of such a test is proposed as a quantity to be increased by exploring further test generators.
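
    The evaluation step can be sketched on random stand-in data: program embeddings are reduced to 2D with t-SNE, the distance from each program to the worst program's point is computed, and its correlation with the measured D1 miss ratio is tested. All dimensions and values below are invented.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
emb = rng.standard_normal((40, 300))         # IR2Vec-style program embeddings (synthetic)
miss_ratio = rng.uniform(0.0, 0.3, size=40)  # measured D1 cache miss ratios (synthetic)

pts = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(emb)
worst = pts[np.argmax(miss_ratio)]           # embedding of the worst program
dist = np.linalg.norm(pts - worst, axis=1)   # distance to the worst program's point

r, p = pearsonr(dist, miss_ratio)            # the paper finds r < 0 on its tests
print(f"r = {r:.3f}, p = {p:.3f}")
```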
