Search results for 'attention':
Articles found: 45
  1. Kozhanov D.A.
    Modeling of deformation processes in structure of flexible woven composites
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 547-557

    Flexible woven composites are classified as high-tech innovative materials. Owing to the combination of various filler components and reinforcement elements, such materials are used in construction, the defense industry, shipbuilding, aircraft construction, etc. In the domestic literature, insufficient attention is paid to woven composites whose reinforcing layer changes its geometric structure during deformation. This paper analyzes a previously proposed integrated approach to modeling the behavior of flexible woven composites under static uniaxial tension, with a view to generalizing the approach to biaxial tension. The work aims at a qualitative and quantitative description of the mechanical deformation processes occurring in the structure of the studied materials under tension, namely the straightening of the strands of the reinforcing layer and the growth of the mutual pressure of the cross-lying reinforcement strands. At the beginning of the deformation process, the straightening of the threads and the increase in their mutual pressure are most intense. As the load level increases, the change in these parameters slows down: the bending of the reinforcement strands gives way to central tension, and the load caused by mutual pressure stops growing (tends to a constant). To simulate the described processes, the basic geometric and mechanical parameters of the material affecting the deformation process are introduced, and the necessary terminology and characteristics are given. Because of the high geometric nonlinearity, all the described processes are formulated in increments, since significant deformation of the reinforcing layer occurs even at initial load values.
For the quantitative and qualitative description of the mechanical deformation processes occurring in the reinforcing layer, analytical dependences are derived that determine, at each step of the load increment, the increment of the straightening angle of the reinforcement filaments and the load caused by the mutual pressure of the cross-lying filaments. To test the obtained dependences, an example of their application to the flexible woven composites of grades VP4126, VP6131 and VP6545 is given. The simulation results confirmed the assumptions about the straightening of the threads and the slowing growth of their mutual pressure. The results and dependences presented in this paper bear directly on the further generalization of the previously proposed analytical models to biaxial tension, since stretching in two directions will significantly reduce the straightening of the threads and increase the mutual pressure under similar loads.

  2. Gladin E.L., Borodich E.D.
    Variance reduction for minimax problems with a small dimension of one of the variables
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

    The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term; such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less) and the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting-plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is computed via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions about the objective; in particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
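The structure described above (a low-dimensional outer minimization over an inexact oracle obtained from an inner maximization) can be sketched on a toy convex-concave problem. This is not the paper's algorithm: plain gradient descent stands in for Vaidya's cutting-plane method, and a few gradient-ascent steps stand in for Katyusha; the problem, dimensions, and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy saddle point: f(x, y) = x^T A y - ||x||^2/2 + ||y||^2/2,
# strongly concave in the large inner block x, convex in the small outer
# block y. The unique saddle point has y = 0.
rng = np.random.default_rng(0)
n_inner, n_outer = 50, 2
A = rng.standard_normal((n_inner, n_outer)) / np.sqrt(n_inner)

def inner_max(y, steps=100, lr=0.5):
    """Approximately solve max_x f(x, y); the exact maximizer is x* = A @ y."""
    x = np.zeros(n_inner)
    for _ in range(steps):
        x += lr * (A @ y - x)          # gradient ascent in x (Katyusha stand-in)
    return x

y = rng.standard_normal(n_outer)
for _ in range(200):
    x = inner_max(y)                   # inexact oracle for the outer problem
    grad_y = A.T @ x + y               # Danskin: gradient of g(y) = max_x f(x, y)
    y -= 0.2 * grad_y                  # outer descent step (Vaidya stand-in)

print(np.linalg.norm(y))               # should be near zero
```

The point of the sketch is the division of labor: all expensive work happens in the high-dimensional inner solver, while the outer method only ever touches the small variable block.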

  3. Ignashin I.N., Yarmoshik D.V.
    Modifications of the Frank–Wolfe algorithm in the problem of finding the equilibrium distribution of traffic flows
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 53-68

    The paper presents various modifications of the Frank–Wolfe algorithm for the equilibrium traffic assignment problem. The Beckmann model is used as the model for the experiments. The article focuses first of all on the choice of the direction of the basic step of the Frank–Wolfe algorithm. The following algorithms are presented: Conjugate Frank–Wolfe (CFW), Bi-conjugate Frank–Wolfe (BFW), Fukushima Frank–Wolfe (FFW). Each modification corresponds to a different approach to the choice of this direction. Some of these modifications were described in previous works of the authors. In this article, the following algorithms are proposed: N-conjugate Frank–Wolfe (NFW), Weighted Fukushima Frank–Wolfe (WFFW). These algorithms are a conceptual continuation of the BFW and FFW algorithms. Thus, where BFW used at each iteration the last two directions of the previous iterations to select the next direction conjugate to them, the proposed NFW algorithm uses the last $N$ previous directions. In Fukushima Frank–Wolfe, the average of several previous directions is taken as the next direction. Building on this algorithm, the modification WFFW is proposed, which applies exponential smoothing to the previous directions. For comparative analysis, experiments with the various modifications were carried out on several data sets representing urban structures and taken from publicly available sources. The relative gap value was taken as the quality metric. The experimental results showed the advantage of algorithms that use previous directions for step selection over the classic Frank–Wolfe algorithm. In addition, an improvement in efficiency was revealed when using more than two conjugate directions; for example, on various data sets the modification 3FW showed the best convergence. The proposed modification WFFW often outperformed FFW and CFW, although it performed worse than NFW.
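The basic step that all the listed modifications refine can be sketched on a toy problem: minimizing a smooth convex function over the probability simplex, where the linear subproblem picks a vertex (the analogue of an all-or-nothing shortest-path assignment in traffic models). The objective, sizes, and step rule below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Classic Frank–Wolfe for min f(v) = 0.5 * ||v - c||^2 over the simplex
# {v >= 0, sum(v) = 1}. CFW/BFW/NFW/FFW/WFFW differ only in how the descent
# direction is built from previous directions at the marked line.
c = np.array([0.7, 0.2, 0.1, 0.0])      # target point; it lies on the simplex,
                                        # so the optimum is v = c
v = np.full(4, 0.25)                    # feasible start: uniform distribution

for k in range(200):
    grad = v - c                        # gradient of f at v
    s = np.zeros_like(v)
    s[np.argmin(grad)] = 1.0            # linear subproblem: best simplex vertex
    direction = s - v                   # classic FW direction; the modifications
                                        # would mix in previous directions here
    gamma = 2.0 / (k + 2.0)             # standard open-loop step size
    v += gamma * direction              # stays feasible: convex combination

print(np.round(v, 3))
```

Note that each iterate remains a convex combination of simplex vertices, which is exactly why the method suits traffic assignment: flows stay feasible by construction.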

  4. The work is devoted to the problem of creating a model with stationary parameters from historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that the unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling. The applicability of various data sampling options as a tool for reducing the level of uncertainty is assessed. We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic readjustment to new conditions. It is based on the combined use of sampling and representation of the data from individual time periods as increments relative to the initial point in time of the period. This makes it possible to reduce the number of parameters that characterize the unknown disturbances, with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs associated with fitting the model are minimized. Both linear and, in some cases, nonlinear models can be configured. The method was used to develop a model of the closed cooling of steel on a continuous hot-dip galvanizing line for steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes of a closed cooling section under conditions of unknown disturbances, including low-frequency components.

  5. Succi G., Ivanov V.V.
    Comparison of mobile operating systems based on models of growth reliability of the software
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334

    Evaluation of software reliability is an important part of the process of developing modern software. Many studies are aimed at improving models for measuring and predicting the reliability of software products. However, little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its enormous practical importance (including for managing software development), a complete and proven comparison methodology does not exist. In this article, we propose a software reliability comparison methodology that makes extensive use of software reliability growth models (SRGMs). The proposed methodology has the following features: it provides a certain level of flexibility and abstraction while preserving objectivity, i.e., providing measurable comparison criteria. Also, given the comparison methodology with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on the example of three open-source mobile operating systems: Sailfish, Tizen, CyanogenMod.

    A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is stronger in terms of reliability. To this end we performed a GQM analysis and identified 3 questions and 8 metrics. Comparing the metrics, Sailfish is most often the best-performing OS; however, it is also most often the worst-performing one. On the contrary, Tizen scores the best in 3 cases out of 8, but the worst in only one case out of 8.
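One building block of any SRGM-based comparison is fitting a growth model to defect data. As a hedged sketch, the snippet below fits the classic Goel-Okumoto model m(t) = a(1 - e^(-bt)) by least squares; the synthetic data, parameter values, and grid-search fitting procedure are all illustrative assumptions, not the paper's actual pipeline or data.

```python
import numpy as np

# Fit m(t) = a * (1 - exp(-b t)) to cumulative defect counts.
# For a fixed b the optimal a has a closed form, so a coarse grid over b
# suffices for this toy example.
rng = np.random.default_rng(1)
a_true, b_true = 120.0, 0.3
t = np.arange(1, 25, dtype=float)                     # e.g. weeks of testing
y = a_true * (1 - np.exp(-b_true * t)) + rng.normal(0, 2, t.size)

def fit_goel_okumoto(t, y, b_grid=np.linspace(0.01, 1.0, 500)):
    best = None
    for b in b_grid:
        u = 1 - np.exp(-b * t)
        a = (y @ u) / (u @ u)                         # optimal a given b
        sse = np.sum((y - a * u) ** 2)
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best

a_hat, b_hat, sse = fit_goel_okumoto(t, y)
print(round(a_hat, 1), round(b_hat, 3))
```

A comparison methodology would repeat such a fit for several SRGMs and systems and then score them against the chosen criteria.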

  6. Burago N.G., Nikitin I.S.
    Algorithms of through calculation for damage processes
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 645-666

    The paper reviews existing approaches to calculating the destruction of solids. The main attention is paid to algorithms that use a unified approach to the calculation of deformation in both the intact and the damaged states of the material. A thermodynamic derivation of the unified rheological relations is presented, taking into account the elastic, viscous and plastic properties of materials and describing the loss of resistance to deformation with the accumulation of microdamage. It is shown that the considered mathematical model provides a continuous dependence of the solution on the input parameters (parameters of the material medium, initial and boundary conditions, discretization parameters) under material softening.

    Explicit and implicit matrix-free algorithms for calculating the evolution of deformation and fracture development are presented. The implicit schemes are implemented using iterations of the conjugate gradient method, with the computation of each iteration exactly coinciding with the computation of a time step of the two-layer explicit scheme, so the solution algorithms are very simple.

    The results of solving typical problems of the destruction of deformable solids for slow (quasistatic) and fast (dynamic) deformation processes are presented. Based on computational experience, recommendations are given for modeling destruction processes and ensuring the reliability of numerical solutions.

  7. Danilova M.Y., Malinovskiy G.S.
    Averaged heavy-ball method
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 277-308

    First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum ones receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods. The method was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, its weighted version WAHB, and AHB with restarts R-AHB. To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in the existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. Moreover, we also tested one more modification of AHB called the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
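The peak effect and its damping by averaging are easy to see on a quadratic. The sketch below runs heavy-ball with the classical parameter choice for quadratics and tracks a uniform running average of the iterates; the test function, condition number, and uniform averaging weights are illustrative assumptions, not the paper's exact WAHB/R-AHB/TAHB schemes.

```python
import numpy as np

# HB vs. averaged HB on f(x) = 0.5 * x^T diag(L, mu) x, minimized at 0.
L_, mu = 100.0, 1.0
H = np.array([L_, mu])                                   # diagonal Hessian
alpha = 4.0 / (np.sqrt(L_) + np.sqrt(mu)) ** 2           # classical HB step
beta = ((np.sqrt(L_) - np.sqrt(mu)) / (np.sqrt(L_) + np.sqrt(mu))) ** 2

x_prev = x = np.array([1.0, 1.0])
avg = x.copy()
max_dev_hb, max_dev_ahb = 0.0, 0.0
for k in range(1, 300):
    x_next = x - alpha * H * x + beta * (x - x_prev)     # heavy-ball step
    x_prev, x = x, x_next
    avg += (x - avg) / (k + 1)                           # running average (AHB)
    max_dev_hb = max(max_dev_hb, np.linalg.norm(x))
    max_dev_ahb = max(max_dev_ahb, np.linalg.norm(avg))

print(max_dev_hb > max_dev_ahb)
```

The raw HB trajectory overshoots the solution by a large factor early on (the peak effect), while the averaged sequence stays much closer to it, matching the qualitative claim in the abstract.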

  8. Minkevich I.G.
    Stoichiometric synthesis of metabolic pathways
    Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1241-1267

    A vector-matrix approach to the theoretical design of metabolic pathways converting chemical compounds, viz. preset substrates, into desirable products is described. It is a mathematical basis for the computer-aided generation of alternative biochemical reaction sets executing a given substrate-product conversion. The pathways are retrieved from the database of biochemical reactions used and rely on the reaction stoichiometry and on restrictions based on the irreversibility of some of the reactions. Particular attention is paid to the analysis of the interrelations of the restrictions. It is shown that the number of restrictions can be notably reduced due to the existence of families of parallel restricting planes in the space of reaction flows. Coinciding planes of contradirectional restrictions result in the existence of fixed reaction flow values. The problem of excluding so-called futile cycles is also considered. Making use of these factors substantially lowers the problem complexity and the necessary computational resources. An example of computing alternative biochemical pathways for the conversion of glucose and glycerol into succinic acid is given. It is found that for a preset "substrate-product" pair many pathways have the same high-energy bond balance.
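The vector-matrix setting can be illustrated on a deliberately tiny invented network: metabolites are rows and reactions are columns of a stoichiometric matrix S, and a pathway is a flux vector v with S v = b, where b is the net substrate-to-product conversion. The three-reaction network, the futile pair, and the repair step below are all illustrative assumptions, not the paper's database or algorithm.

```python
import numpy as np

# Toy network: r1: A -> B, r2: B -> C, r3: C -> B.
# The pair r2/r3 is a futile cycle: running both changes no net conversion.
S = np.array([[-1,  0,  0],    # A: consumed by r1
              [ 1, -1,  1],    # B: produced by r1 and r3, consumed by r2
              [ 0,  1, -1]])   # C: produced by r2, consumed by r3
b = np.array([-1, 0, 1])       # net conversion: one A in, one C out

v_min_norm, *_ = np.linalg.lstsq(S, b, rcond=None)   # minimum-norm flux
# The minimum-norm solution runs r3 backwards, violating irreversibility.
# The futile cycle spans the null space of S, so shifting fluxes along it
# restores v >= 0 without changing the net conversion.
cycle = np.array([0.0, 1.0, 1.0])
shift = max(0.0, -float(v_min_norm.min()))
v = v_min_norm + shift * cycle
print(np.round(v, 6), np.allclose(S @ v, b))
```

This mirrors, in miniature, the two ingredients the abstract highlights: irreversibility restrictions as sign constraints on fluxes, and futile cycles as null-space directions to be excluded.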

  9. Buglak A.A., Pomogaev V.A., Kononov A.I.
    Calculation of absorption spectra of silver-thiolate complexes
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 275-286

    Ligand-protected metal nanoclusters (NCs) have gained much attention due to their unique physicochemical properties and potential applications in materials science. Noble metal NCs protected with thiolate ligands have been of particular interest because of their long-term stability. The detailed structures of most ligand-stabilized metal NCs remain unknown due to the absence of crystal structure data. Theoretical calculations using quantum chemistry techniques appear to be one of the most promising tools for determining the structure and electronic properties of NCs, which is why finding a cost-effective calculation strategy is such an important and challenging task. In this work, we compare the performance of different theoretical methods for geometry optimization and absorption spectra calculation for silver-thiolate complexes. We show that second-order Møller–Plesset perturbation theory reproduces nicely the geometries obtained at a higher level of theory, in particular with the RI-CC2 method. We compare the absorption spectra of silver-thiolate complexes simulated with different methods: EOM-CCSD, RI-CC2, ADC(2) and TDDFT. We show that the absorption spectra calculated with the ADC(2) method are consistent with the spectra obtained with the EOM-CCSD and RI-CC2 methods. The CAM-B3LYP functional fails to reproduce the absorption spectra of the silver-thiolate complexes. However, the M062X global hybrid meta-GGA functional seems to be a nice compromise given its low computational cost. In our previous study, we have already demonstrated that the M062X functional shows good accuracy, as compared to the ADC(2) ab initio method, in predicting the excitation spectra of silver nanocluster complexes with nucleobases.

  10. Mezentsev Y.A., Razumnikova O.M., Estraykh I.V., Tarasova I.V., Trubnikova O.A.
    Tasks and algorithms for optimal clustering of multidimensional objects by a variety of heterogeneous indicators and their applications in medicine
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 673-693

    The work is devoted to describing the authors' formal statements of the clustering problem for a given number of clusters, algorithms for their solution, and the results of using this toolkit in medicine.

    Because the formulated problems belong to the NP class, solving even relatively low-dimensional instances by exact algorithms, with a proof of optimality, cannot be done in acceptable time.

    In this regard, we have proposed a hybrid algorithm that combines, at the initial stage, the advantages of exact methods based on clustering in pairwise distances with, at the final stage, the speed of methods for solving the simplified problem of partitioning by cluster centers. Developing this direction further, a sequential hybrid clustering algorithm using random search in the swarm intelligence paradigm has been devised. The article describes it and presents the results of calculations on applied clustering problems.

    To assess the effectiveness of the developed tools for the optimal clustering of multidimensional objects by a variety of heterogeneous indicators, a number of computational experiments were performed using data sets that include socio-demographic, clinical-anamnestic, electroencephalographic and psychometric data on the cognitive status of patients of a cardiology clinic. Experimental proof of the effectiveness of using local search algorithms in the swarm intelligence paradigm within the hybrid algorithm for solving optimal clustering problems was obtained.

    The results of the calculations indicate that the main obstacle to applying the discrete optimization apparatus, namely the limit on the feasible dimensions of problem instances, has essentially been removed. We have shown that this problem is eliminated while the clustering results remain acceptably close to the optimal ones. The applied significance of the obtained results also stems from the fact that the developed optimal clustering toolkit is supplemented by an assessment of the stability of the formed clusters. For known factors (the presence of stenosis or older age), this makes it possible to additionally identify patients whose cognitive resources are insufficient to overcome the influence of surgical anesthesia, which results in a unidirectional postoperative deterioration of complex visual-motor reaction, attention and memory. This effect indicates the possibility of differentiating the classification of patients using the proposed tools.
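As a bare-bones stand-in for the final stage of such a hybrid scheme, the sketch below partitions objects into a given number of clusters by cluster centers and improves the centers by random search. The real method uses swarm intelligence and an exact pairwise-distance stage; the data, cluster count, proposal scheme, and iteration budget here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated blobs of multidimensional objects.
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(3, 0.3, (20, 4))])
k = 2

def cost(centers):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()           # sum of distances to nearest center

centers = X[rng.choice(len(X), k, replace=False)]
best = cost(centers)
for it in range(500):
    if it % 2 == 0:                      # restart proposal: centers from data
        cand = X[rng.choice(len(X), k, replace=False)]
    else:                                # local random move of the centers
        cand = centers + rng.normal(0, 0.2, centers.shape)
    c = cost(cand)
    if c < best:                         # greedy acceptance
        centers, best = cand, c

labels = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                        axis=2).argmin(axis=1)
print(len(set(labels[:20])), len(set(labels[20:])))
```

Mixing restart proposals with local moves is a crude analogue of combining a global search stage with local refinement; on this toy data the two blobs end up in different clusters.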


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index
