Search results for 'cross-validation':
Articles found: 8
  1. Simakov S.S.
    Modern methods of mathematical modeling of blood flow using reduced order methods
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 581-604

    The study of physiological and pathophysiological processes in the cardiovascular system is an important contemporary problem addressed in many works. In this work, several approaches to the mathematical modelling of blood flow are considered. They are based on spatial order reduction and/or a steady-state approach. Attention is paid to the assumptions that limit the scope of such models. Some typical mathematical formulations are considered together with a brief review of their numerical implementation. In the first part, we discuss models based on full spatial order reduction and/or a steady-state approach. One of the most popular approaches exploits the analogy between the flow of a viscous fluid in elastic tubes and the current in an electrical circuit. Such models can be used as standalone tools, and they are also used to formulate boundary conditions in models with one-dimensional (1D) and three-dimensional (3D) spatial coordinates. Dynamical compartment models allow haemodynamics to be described over an extended period (on the order of tens of cardiac cycles or more). Then, steady-state models are considered; they may use either total spatial reduction or two-dimensional (2D) spatial coordinates. This approach is used to simulate blood flow in the microcirculation region. In the second part, we discuss models based on spatial order reduction to a single 1D coordinate. Models of this type require much less computational power than 3D models, and within this approach it is also possible to include all large vessels of the organism. The 1D models allow simulation of the haemodynamic parameters in every vessel included in the model network. The structure and parameters of such a network can be set according to literature data; methods of medical data segmentation also exist for this purpose. The 1D models may be derived from the 3D Navier – Stokes equations either by asymptotic analysis or by integrating them over a volume. The major assumptions are symmetric flow and a constant shape of the velocity profile over a cross-section; these assumptions are somewhat restrictive and arguable. Some current works pay attention to the validation of 1D models, to comparison between different 1D models, and to comparison of 1D models with clinical data. The obtained results reveal acceptable accuracy, which allows the conclusion that the 1D approach can be used in medical applications. 1D models can describe several dynamical processes, such as pulse wave propagation and Korotkoff sounds. Some physiological conditions may be included in 1D models: gravity, muscle contraction forces, regulation and autoregulation.
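    The electrical-circuit analogy mentioned above can be illustrated with a minimal two-element "Windkessel" compartment, in which pressure plays the role of voltage, flow of current, peripheral resistance of a resistor and arterial compliance of a capacitor, giving $C\,dP/dt = Q(t) - P/R$. All parameter values below are illustrative stand-ins, not values from the article.

```python
import numpy as np

# Two-element Windkessel compartment: C*dP/dt = Q(t) - P/R.
# All numbers here are illustrative, not taken from the article.
R = 1.0      # peripheral resistance, mmHg*s/mL
C = 1.5      # arterial compliance, mL/mmHg
T = 0.8      # cardiac cycle length, s
T_SYS = 0.3  # systole duration, s
dt = 1e-4    # explicit-Euler time step, s

def inflow(t):
    """Half-sine inflow during systole, zero during diastole (mL/s)."""
    phase = t % T
    return 400.0 * np.sin(np.pi * phase / T_SYS) if phase < T_SYS else 0.0

def simulate(n_cycles=10):
    steps = int(round(n_cycles * T / dt))
    P = 80.0                      # initial pressure, mmHg
    history = np.empty(steps)
    for i in range(steps):
        P += dt * (inflow(i * dt) - P / R) / C   # explicit Euler step
        history[i] = P
    return history

P = simulate()
print(round(float(P.min()), 1), round(float(P.max()), 1))
```

    With such illustrative parameters the pressure settles into a periodic pulse after a few cycles; as the abstract notes, a compartment of this kind can serve either as a standalone model or as an outflow boundary condition for 1D/3D models.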

    Views (last year): 62. Citations: 2 (RSCI).
  2. Bakhvalov Y.N., Kopylov I.V.
    Training and assessment of the generalization ability of interpolation methods
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1023-1031

    We investigate machine learning methods with a certain kind of decision rule: in particular, inverse-distance interpolation, interpolation by radial basis functions, and multidimensional interpolation and approximation based on the theory of random functions, the last of which is kriging. This paper shows a method of rapidly retraining the “model” when new data are added to the existing ones. The term “model” means the interpolating or approximating function constructed from the training data. This approach reduces the computational complexity of constructing an updated “model” from $O(n^3)$ to $O(n^2)$. We also investigate the possibility of rapidly assessing the generalization ability of the “model” on the training set using leave-one-out cross-validation, eliminating the major drawback of this approach: the necessity to build a new “model” for each element removed from the training set.
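    The fast leave-one-out (LOO) assessment described above has a well-known closed form for the RBF/kriging family of "models": for a kernel interpolant $f(x)=\sum_i w_i k(x,x_i)$ with $w = K^{-1}y$, the LOO residual at point $i$ equals $(K^{-1}y)_i / (K^{-1})_{ii}$, so no new "model" has to be built per removed element. The Gaussian kernel and the synthetic data below are illustrative choices.

```python
import numpy as np

# Closed-form LOO residuals for a kernel interpolant w = K^{-1} y:
#     e_i = (K^{-1} y)_i / (K^{-1})_{ii}
# Kernel, bandwidth and data are illustrative, not the article's.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

def gaussian_gram(X, eps=4.0):
    """Gram matrix K_ij = exp(-eps * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

K = gaussian_gram(X) + 1e-8 * np.eye(len(X))   # tiny jitter for stability
Kinv = np.linalg.inv(K)

# Fast LOO residuals: O(n^2) once K^{-1} is known.
loo_fast = (Kinv @ y) / np.diag(Kinv)

# Brute-force check: refit without point i and predict it (one solve each).
loo_slow = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    wi = np.linalg.solve(K[np.ix_(mask, mask)], y[mask])
    loo_slow[i] = y[i] - K[i, mask] @ wi

print(bool(np.allclose(loo_fast, loo_slow, atol=1e-5)))
```

    The same linear-algebra structure underlies the rapid $O(n^2)$ retraining: when a point is added, $K^{-1}$ can be updated by a rank-one (Sherman–Morrison) step instead of a full refit.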

    Views (last year): 7. Citations: 5 (RSCI).
  3. This article explores a method of machine learning based on the theory of random functions. One of the main problems of this method is that the decision rule of a model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function and is represented as a polynomial with the number of terms equal to the number of training examples. In this article we show a fast way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, as well as noisy sample elements. For each sample element $(x_i, y_i)$, the concept of its value is introduced, expressed as the deviation of the decision function of the model built without the $i$-th element, evaluated at the point $x_i$, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model without increasing the number of terms in the decision function. In the experimental part of the article, we show how the changed amount of data affects the generalization ability of the method in a classification task.

    Views (last year): 5.
  4. Kozhevnikov V.S., Matyushkin I.V., Chernyaev N.V.
    Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735

    Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases and showed its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view, this equation is the well-known continuity equation, where the role of density is played by the distribution density function of goods in the phase space of their characteristics, and the role of fluid velocity is played by the intensity (rate) of degradation processes. The latter connects the general formalism with the specifics of degradation mechanisms. The cases of coordinate-constant, linear and quadratic degradation rates are analyzed using the method of characteristics. In the first two cases, the results correspond to physical intuition. At a constant rate of degradation, the shape of the initial distribution is preserved, and the distribution itself moves uniformly away from zero. At a linear rate of degradation, the distribution either narrows down to a narrow peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate. The form of the distribution is likewise preserved up to parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.

    In the quadratic case, the formal solution demonstrates counterintuitive behavior: the solution is uniquely defined only on part of an infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when the boundary is crossed. If it is continued into the other region in accordance with the analytical solution, it takes a two-humped form, conserves the amount of substance and, which is devoid of physical meaning, is periodic in time. If it is continued with zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by analogy with the motion of a material point whose acceleration is proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
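    The basic equation discussed above can be written, in one consistent reading (symbols chosen here for illustration), as the continuity equation

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}\bigl(v(x)\,\rho\bigr) = 0,$$

    where $\rho(x, t)$ is the distribution density of items over the degradation coordinate $x$ and $v(x)$ is the stationary degradation rate. The method of characteristics then yields, for a constant rate $v(x) = v_0$, the uniformly translated profile $\rho(x, t) = \rho_0(x - v_0 t)$, and for a linear rate $v(x) = kx$ the self-similar solution

$$\rho(x, t) = e^{-kt}\,\rho_0\!\left(x\,e^{-kt}\right),$$

    which preserves the form of the initial distribution up to parameters: for $k > 0$ the profile stretches and its maximum moves to the periphery at an exponentially increasing rate, while for $k < 0$ it narrows toward a singular peak, matching the behavior described above.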

  5. Sokolov A.V., Mamkin V.V., Avilov V.K., Tarasov D.L., Kurbatova Y.A., Olchev A.V.
    Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171

    The method of balanced identification was used to describe the response of Net Ecosystem Exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Due to rainy weather conditions and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at our experimental site exceeded 40% over the entire measurement period. The model developed for gap filling in long-term experimental data considers NEE as the difference between Ecosystem Respiration (RE) and Gross Primary Production (GPP), i.e. the key processes of ecosystem functioning, and their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose is based on the search for the optimal ratio between model simplicity and data fitting accuracy: the ratio providing the minimum of the modeling error estimated by cross-validation. The obtained numerical solutions are characterized by the minimum necessary nonlinearity (curvature), which provides sufficient interpolation and extrapolation properties of the developed models; this is particularly important for filling the missing values in NEE measurements. Examining the temporal variability of NEE and the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL, respectively. At the same time, the inaccuracy of the applied method in simulating the mean daily NEE was less than 10%, and the accuracy of NEE estimates obtained by the method was higher than that of the REddyProc model, which considers the influence of a smaller number of environmental parameters on NEE. Analysis of the gap-filled time series of NEE made it possible to derive the diurnal and inter-daily variability of NEE and to obtain cumulative CO2 fluxes in the peat bog for the selected summer-autumn period. It was shown that the rate of CO2 fixation by the peat bog vegetation in August was significantly higher than the rate of ecosystem respiration, while from September, owing to a strong decrease in GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.
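    The balanced-identification principle described above can be sketched as follows: among candidate model complexities, choose the one minimizing the cross-validation error, then use the fitted model to fill the gaps. Here a ridge-regularized polynomial in time stands in for the article's NEE = RE − GPP model of environmental drivers; all data below are synthetic.

```python
import numpy as np

# Pick the regularization weight (model "simplicity") that minimizes
# cross-validation error, then fill gaps with the fitted model.
# The polynomial model and all data are synthetic stand-ins.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 120)
signal = np.sin(2 * np.pi * t) + 0.3 * t          # "true" flux curve
observed = signal + 0.1 * rng.normal(size=t.size)
mask = rng.uniform(size=t.size) > 0.4             # ~40% of points become gaps
t_obs, y_obs = t[mask], observed[mask]

def design(t, degree=9):
    return np.vander(t, degree + 1, increasing=True)

def ridge_fit(A, y, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def cv_rmse(lam, k=5):
    """k-fold cross-validation RMSE for a given regularization weight."""
    idx = rng.permutation(len(t_obs))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        w = ridge_fit(design(t_obs[tr]), y_obs[tr], lam)
        errs.append(np.mean((design(t_obs[fold]) @ w - y_obs[fold]) ** 2))
    return float(np.sqrt(np.mean(errs)))

# Heavier lam = smoother, simpler model; CV balances simplicity vs. accuracy.
lams = 10.0 ** np.arange(-8.0, 2.0)
best = lams[int(np.argmin([cv_rmse(l) for l in lams]))]

w = ridge_fit(design(t_obs), y_obs, best)
filled = design(t) @ w                            # gap-filled series
rmse = float(np.sqrt(np.mean((filled - signal) ** 2)))
print(best, round(rmse, 3))
```

    The CV-selected model interpolates across the artificial gaps with an error well below the amplitude of the signal, which is the property the abstract relies on for gap filling.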

    Views (last year): 19.
  6. Tinkov O.V., Polishchuk P.G., Khachatryan D.S., Kolotaev A.V., Balaev A.N., Osipov V.N., Grigorev B.Y.
    Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930

    Inhibitors of histone deacetylases (HDACi) are considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. Vascular endothelial growth factor (VEGF), secreted by malignant tumors, is a key angiogenic agent that induces the proliferation and migration of vascular endothelial cells. Currently, one of the most promising strategies in the fight against cancer is the creation of hybrid drugs that simultaneously act on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties was studied as dual VEGFR-2/HDAC inhibitors using a simplex representation of the molecular structure and Support Vector Machine (SVM). The total sample of 42 compounds was divided into training and test sets. Five-fold cross-validation was used for internal validation. Satisfactory quantitative structure–activity relationship (QSAR) models were constructed ($R^2_{test}$ = 0.64–0.87) for inhibitors of HDAC, VEGFR-2 and the human breast cancer cell line MCF-7. The obtained QSAR models were interpreted, and the coordinated effect of different molecular fragments on the increase of antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine to all three types of activity can be distinguished. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For a comparative QSAR study we used physicochemical descriptors calculated by the HYBOT program, the Random Forest (RF) method, and the on-line version of the expert system OCHEM (https://ochem.eu). For modeling in OCHEM, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the help of the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
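    The five-fold internal validation protocol mentioned above can be sketched generically: split the training set into five folds, predict each fold with a model fitted on the other four, and pool the predictions into a cross-validated determination coefficient $Q^2$. A plain linear least-squares model stands in here for the article's SVM on simplex descriptors; the descriptor matrix and activities are synthetic.

```python
import numpy as np

# Generic 5-fold cross-validation producing a pooled Q^2 metric.
# Linear least squares stands in for the article's SVM; data are synthetic.
rng = np.random.default_rng(3)
n, p = 42, 8                     # 42 compounds, as in the abstract
X = rng.normal(size=(n, p))      # synthetic descriptor matrix
true_w = rng.normal(size=p)
y = X @ true_w + 0.3 * rng.normal(size=n)   # synthetic activity values

def five_fold_q2(X, y, k=5):
    idx = rng.permutation(len(y))
    pred = np.empty_like(y)
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        pred[fold] = X[fold] @ w             # predict the held-out fold
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot             # cross-validated Q^2

q2 = five_fold_q2(X, y)
print(round(q2, 3))
```

    Because every compound is predicted exactly once by a model that never saw it, $Q^2$ is a less optimistic estimate of generalization than the training-set $R^2$.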

  7. Sabirov A.I., Katasev A.S., Dagaeva M.V.
    A neural network model for traffic signs recognition in intelligent transport systems
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435

    This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. Currently, the most effective approach to image analysis and recognition is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as ReLU and softmax are used to solve the classification problem when recognizing traffic signs. This article proposes a technology for traffic sign recognition. The approach based on a convolutional neural network was chosen for its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep learning libraries TensorFlow and Keras was used as a platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function, followed by a Dropout layer, which is used to reduce the effect of network overfitting. The output fully connected layer includes four neurons, corresponding to the problem of recognizing four types of traffic signs. An intelligent traffic sign recognition system was developed and tested. The convolutional neural network used included four stages of convolution and subsampling. Evaluation of the efficiency of the traffic sign recognition system using three-fold cross-validation showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly. In addition, the model makes no errors of the first kind, and errors of the second kind occur rarely and only when the input image is very noisy.
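    The classifier head described above (a 512-neuron ReLU layer, dropout, and a 4-neuron softmax output, one neuron per traffic-sign class) can be sketched as a plain forward pass. The weights and the flattened-feature size are random stand-ins; the article built the full convolutional network in Keras/TensorFlow.

```python
import numpy as np

# Forward-pass sketch of the described head: Dense(512, ReLU) -> Dropout
# -> Dense(4, softmax).  Weights and feature size are random stand-ins.
rng = np.random.default_rng(4)
n_features, n_hidden, n_classes = 1600, 512, 4   # 1600 = assumed conv-output size

W1 = rng.normal(0, 0.02, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.02, size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shifted for stability
    return e / e.sum(axis=-1, keepdims=True)

def head(features, train=False, drop=0.5):
    h = relu(features @ W1 + b1)
    if train:                        # dropout is active only during training
        h *= rng.uniform(size=h.shape) > drop
        h /= (1.0 - drop)            # inverted-dropout rescaling
    return softmax(h @ W2 + b2)

probs = head(rng.normal(size=(3, n_features)))   # batch of 3 "images"
print(probs.shape, probs.sum(axis=1))
```

    Each row of `probs` sums to one, and the predicted class is simply the arg-max neuron; dropout is disabled at inference time, matching standard Keras behavior.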

  8. Aksenov A.A., Kalugina M.D., Lobanov A.I., Kashirin V.S.
    Numerical simulation of fluid flow in a blood pump in the FlowVision software package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025-1038

    A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, involved considering fluid flow under several design modes, with a specific value of liquid flow rate and rotor speed set for each calculation case. The necessary data for the calculations, in the form of the exact geometry, flow conditions and fluid characteristics, were provided to all research participants, who used different software packages for modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological fluid model. In the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with about 6 million cells was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster to speed up the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the inlet and outlet of the pump and the velocities between the rotor blades and in the diffuser area, and we also visualized the velocity distribution in certain cross-sections. For all design modes the numerically obtained pressure difference was compared with experimental data, and for the fifth calculation mode the velocity distributions between the rotor blades and in the diffuser area were also compared with experiment. Data analysis has shown good agreement of the FlowVision calculation results with the experimental results and with numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test problem suggest that the FlowVision software package can be used for solving a wide range of hemodynamic problems.

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

The journal is included in the RSCI