Search results for 'risks':
Articles found: 30
  1. Malinetsky G.G.
    Image of the teacher. Ten years afterward
    Computer Research and Modeling, 2015, v. 7, no. 4, pp. 789-811

    The work outlines the key ideas of Kurdyumov S.P., an outstanding specialist in applied mathematics, self-organization theory, and transdisciplinary research. It considers the development of his scientific ideas over the last decade and formulates a set of open problems in synergetics that will probably stimulate the development of this approach. The article is an extended version of the report made at the Xth Kurdyumov Readings held at Tver State University in 2015.

    Views (last year): 4.
  2. Dushkin R.V.
    Review of Modern State of Quantum Technologies
    Computer Research and Modeling, 2018, v. 10, no. 2, pp. 165-179

    At present, quantum technologies are poised for a new stage of development, which will make it possible to solve numerous problems that could not be solved within "traditional" paradigms and computational models. Mankind stands at the threshold of the so-called "second quantum revolution", whose short-term and long-term consequences will affect virtually all spheres of life of the global society. Such fields of science and technology as materials science, nanotechnology, pharmacology and biochemistry in general, and the modeling of chaotic dynamic processes (nuclear explosions, turbulent flows, weather and long-term climatic phenomena) will benefit directly, as will the solution of any problems that reduce to the multiplication of matrices of large dimensions (in particular, the modeling of quantum systems). However, along with extraordinary opportunities, quantum technologies carry certain risks and threats, in particular the breakdown of all information systems based on modern achievements in cryptography, which would entail an almost complete loss of secrecy, a global financial crisis caused by destruction of the banking sector, and the compromise of all communication channels. Even though methods of so-called "post-quantum" cryptography are already being developed, some risks remain to be understood, since not all long-term consequences can be foreseen. At the same time, one should prepare for all of the above, including by training specialists who work in the field of quantum technologies and understand all their aspects, new opportunities, risks and threats. In this connection, the article briefly describes the current state of quantum technologies, namely quantum sensing, information transfer using quantum protocols, the universal quantum computer (hardware), and quantum computation based on quantum algorithms (software). For each of these, forecasts are given for their development and impact on various areas of human civilization.

    Views (last year): 56.
  3. Bozhko A.N., Livantsov V.E.
    Optimization of geometric analysis strategy in CAD-systems
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 825-840

    Computer-aided assembly planning for complex products is an important engineering and scientific problem. The assembly sequence and the content of assembly operations largely depend on the mechanical structure and geometric properties of a product. An overview of the geometric modeling methods used in modern computer-aided design systems is provided. Modeling geometric obstacles in assembly using collision detection, motion planning, and virtual reality is very computationally intensive. Combinatorial methods provide only weak necessary conditions for geometric reasoning. The important problem of minimizing the number of geometric tests during the synthesis of assembly operations and processes is considered. A formalization of this problem is based on a hypergraph model of the mechanical structure of the product. This model provides a correct mathematical description of coherent and sequential assembly operations. The key concept of the geometric situation is introduced: a configuration of product parts that requires analysis for freedom from obstacles and whose analysis gives interpretable results. A mathematical description of geometric heredity during the assembly of complex products is proposed. Two axioms of heredity allow us to extend the results of testing one geometric situation to many other situations. The problem of minimizing the number of geometric tests is posed as a non-antagonistic game between the decision maker and nature, in which it is required to color the vertices of an ordered set in two colors. The vertices represent geometric situations, and the color is a metaphor for the result of a collision-free test. The decision maker’s move is to select an uncolored vertex; nature’s answer is its color. The game requires the decision maker to color the ordered set in a minimum number of moves. The design situation in which the decision maker makes a decision under risk conditions is discussed. A method for calculating the probabilities of coloring the vertices of an ordered set is proposed. The basic pure strategies of rational behavior in this game are described. An original synthetic criterion for making rational decisions under risk conditions is developed. Two heuristics are proposed that can be used to color ordered sets of high cardinality and complex structure.
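One plausible reading of the heredity axioms above can be sketched in a few lines of Python (the subset ordering and the exact form of the axioms are assumptions for illustration, not the paper's formalization):

```python
from itertools import combinations

def propagate_color(situations, tested, color):
    """Extend one geometric test result over the ordered set of
    situations.  Hypothetical reading of the two heredity axioms:
    a collision-free situation implies all its subsets are free,
    and a blocked situation implies all its supersets are blocked."""
    colored = {}
    for s in situations:
        if color == "free" and s <= tested:
            colored[s] = "free"
        elif color == "blocked" and s >= tested:
            colored[s] = "blocked"
    return colored

# Situations over parts {a, b, c}: every non-empty subset of parts.
parts = ["a", "b", "c"]
situations = [frozenset(c) for r in range(1, 4)
              for c in combinations(parts, r)]

# One geometric test: the configuration {a, b} is collision-free, so
# the situations {a}, {b}, {a, b} are colored without further tests.
result = propagate_color(situations, frozenset({"a", "b"}), "free")
```

A single test thus colors three vertices of the ordered set, which is the economy the paper's game-theoretic strategies try to maximize.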

  4. Alkousa M.S., Gasnikov A.V., Dvurechensky P.E., Sadiev A.A., Razouk L.Ya.
    An approach for the nonconvex uniformly concave structured saddle point problem
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 225-237

    Recently, saddle point problems have received much attention due to their powerful modeling capability for many problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, distributed optimization, game theory, and many applications in machine learning such as empirical risk minimization and generative adversarial networks training. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the nonconvex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems separately for each group of variables: the outer minimization problem w.r.t. the primal variables and the inner maximization problem w.r.t. the dual variables. For solving the outer minimization problem, we use the Adaptive Gradient Method, which is applicable to nonconvex problems and also works with an inexact oracle that is generated by approximately solving the inner problem. For solving the inner maximization problem, we use the Restarted Unified Acceleration Framework, which unifies the high-order acceleration methods for minimizing a convex function that has Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracles for the outer minimization problem and to the higher-order oracles for the inner maximization problem. Moreover, the complexity of the whole proposed approach is then estimated.
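The outer-minimization/inner-maximization decomposition can be illustrated with a toy two-loop scheme (plain gradient methods stand in for the paper's Adaptive Gradient Method and Restarted Unified Acceleration Framework; the test problem and step sizes are invented for illustration):

```python
def saddle_solve(grad_x, grad_y, x0, y0, outer_steps=200,
                 inner_steps=50, lr_x=0.05, lr_y=0.2):
    """Illustrative two-loop scheme: the inner maximization in y is
    solved approximately by gradient ascent, and its result feeds an
    inexact gradient step on the outer (primal) variable x."""
    x, y = x0, y0
    for _ in range(outer_steps):
        for _ in range(inner_steps):     # inner max over the dual y
            y += lr_y * grad_y(x, y)
        x -= lr_x * grad_x(x, y)         # outer min step, inexact oracle
    return x, y

# Toy problem f(x, y) = x**4/4 - x**2 + x*y - y**2/2: nonconvex in x,
# strongly concave in y, with y*(x) = x and outer minimizers x = 1, x = -1.
gx = lambda x, y: x**3 - 2.0 * x + y
gy = lambda x, y: x - y
x, y = saddle_solve(gx, gy, x0=1.5, y0=0.0)
```

Starting from x = 1.5, the iterates settle near the saddle point (1, 1); the accuracy of the outer step is limited by how well the inner loop has maximized over y, which is exactly the inexact-oracle structure the paper analyzes.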

  5. Soukhovolsky V.G., Kovalev A.V., Palnikova E.N., Tarasova O.V.
    Modelling the risk of insect impacts on forest stands after possible climate changes
    Computer Research and Modeling, 2016, v. 8, no. 2, pp. 241-253

    A model of forest insect population dynamics is used to simulate “forest-insect” interactions and to estimate possible damage to forest stands by pests. The model represents a population as a control system whose input variables characterize the influence of modifier (climatic) factors, while the feedback loop describes the effect of regulatory factors (parasites, predators and intrapopulation interactions). A stress-testing technique based on the population dynamics model is proposed for assessing the risks of forest stand damage and destruction after insect impact. The dangerous forest pest pine looper Bupalus piniarius L. is considered as the object of analysis. Computer experiments were conducted to assess outbreak risks under possible climate change in the territory of Central Siberia. The model experiments have shown that the risk of insect impact on the forest does not increase significantly under sufficiently moderate warming (no more than 4 °C in the summer period). However, stronger warming in Central Siberia, combined with dry summer conditions, could cause a significant increase in the risk of pine looper outbreaks.

    Views (last year): 3. Citations: 1 (RSCI).
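A minimal sketch of such a control-system population model, assuming a Ricker-type map with a climatic modifier as input (an illustration only, not the authors' calibrated model):

```python
import math

def simulate(r, K, w, n0, steps):
    """Toy Ricker-type control-system population model: the climatic
    modifier w[t] is the input raising the growth rate, and the
    density dependence (parasites, predators) is the feedback loop."""
    n = [n0]
    for t in range(steps):
        growth = r + w[t]                                   # modifier input
        n.append(n[-1] * math.exp(growth * (1.0 - n[-1] / K)))  # feedback
    return n

# Moderate conditions: the population settles at the carrying capacity K.
cool = simulate(r=0.5, K=100.0, w=[0.0] * 100, n0=10.0, steps=100)
# A strong warming modifier pushes the effective growth rate into the
# unstable, outbreak-like regime with large overshoots above K.
warm = simulate(r=0.5, K=100.0, w=[1.8] * 100, n0=10.0, steps=100)
```

The qualitative effect matches the abstract: under a moderate modifier the dynamics stay regular, while a strong one produces outbreak-like oscillations.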
  6. Grachev V.A., Nayshtut Yu.S.
    Buckling problems of thin elastic shells
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 775-787

    The article covers several mathematical problems relating to the elastic stability of thin shells in view of inconsistencies that have recently been identified between the experimental data and the predictions based on the shallow-shell theory. It is highlighted that the contradictions were caused by new algorithms that enabled updating the values of the so-called “low critical stresses” calculated in the 20th century and adopted as a buckling criterion for thin shallow shells by technical standards. The new calculations often find the low critical stress close to zero. Therefore, the low critical stress cannot be used as a safety factor for the buckling analysis of thin-walled structures, and the equations of the shallow-shell theory need to be replaced with other differential equations. The new theory also requires a buckling criterion ensuring the match between calculations and experimental data.

    The article demonstrates that the contradiction with the new experiments can be resolved within the dynamic nonlinear three-dimensional theory of elasticity. The stress at which bifurcation of dynamic modes occurs shall be taken as the buckling criterion. The nonlinear form of the original equations gives rise to solitary (solitonic) waves that match non-smooth displacements (patterns, dents) of the shells. It is essential that the solitons make an impact at all stages of loading and increase significantly closer to bifurcation. The solitonic solutions are illustrated for a thin cylindrical momentless shell whose three-dimensional volume is simulated with a two-dimensional surface of the set thickness. It is noted that the pattern-generating waves can be detected (and their amplitudes can be identified) with acoustic or electromagnetic devices.

    Thus, it is technically possible to reduce the risk of failure of thin shells by monitoring the shape of the surface with acoustic devices. The article concludes by stating the mathematical problems that require solution for a reliable numerical assessment of the buckling criterion for thin elastic shells.

    Views (last year): 23.
  7. Dzhinchvelashvili G.A., Dzerzhinsky R.I., Denisenkova N.N.
    Quantitative assessment of seismic risk and energy concepts of earthquake engineering
    Computer Research and Modeling, 2018, v. 10, no. 1, pp. 61-76

    Currently, earthquake-resistant design of buildings is based on force calculation and representation of the earthquake effect by static equivalent forces, which are calculated using elastic response spectra (the linear-spectral method) that connect the law of ground motion with the absolute acceleration of a nonlinear oscillator model.

    This approach does not directly take into account either the influence of the duration of strong motion or the plastic behavior of the structure. The frequency content and duration of ground vibrations directly affect the energy received by the building and the damage caused to its elements. Unlike the force or kinematic approach, the seismic effect on the structure can be interpreted without considering forces and displacements separately and can be presented as the product of both variables, i.e., the work, or input energy (the maximum energy that the building can receive from the earthquake).

    With the energy approach to seismic design, it is necessary to evaluate the input seismic energy in the structure and its distribution among the various structural components.

    The article provides substantiation of the energy approach to the design of earthquake-resistant buildings and structures in place of the currently used method based on force calculation and representation of the earthquake effect by static equivalent forces calculated using response spectra.

    It is noted that interest in the use of energy concepts in earthquake-resistant design began with the works of Housner, who represented the seismic effect in the form of the input seismic energy using the velocity spectrum, and suggested that damage in an elastic-plastic system and in an elastic system is caused by one and the same input seismic energy.

    Indices for determining the input energy of an earthquake proposed by various authors are given in this paper. It is shown that modern approaches to ensuring the seismic stability of structures, based on representing the earthquake effect as a static equivalent force, do not adequately describe the behavior of the system during an earthquake.

    Based on quantitative estimates of seismic risk, the paper analyzes the Standard of Organization (STO) “Seismic resistance structures. The main design provisions” developed at the NRU MSUCE. The developed document is a step forward with respect to the optimal design of earthquake-resistant structures.

    The proposed concept uses the achievements of modern methods of calculating buildings and structures under seismic effects, which are harmonized with the Eurocodes and do not contradict the system of national regulations.

    Views (last year): 21.
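The input-energy quantity discussed above can be sketched for a single linear oscillator (a simplified illustration of the energy approach; the integration scheme and parameters are assumptions, and the indices proposed in the literature are more elaborate):

```python
import math

def input_energy(ag, dt, period, zeta=0.05, m=1.0):
    """Relative input energy of a linear single-degree-of-freedom
    oscillator under ground acceleration ag, computed as
    E_I = -sum(m * ag * du) with semi-implicit Euler stepping."""
    wn = 2.0 * math.pi / period
    c, k = 2.0 * zeta * wn * m, m * wn * wn
    u = v = E = 0.0
    for a in ag:
        v += (-a - (c * v + k * u) / m) * dt   # relative acceleration
        du = v * dt
        u += du
        E += -m * a * du                       # increment of input energy
    return E

# Resonant shaking (ground period equal to the oscillator's own period)
# feeds in far more energy than off-resonance motion of the same amplitude,
# which is why frequency content and duration matter for damage.
dt, T = 0.005, 1.0
t = [i * dt for i in range(2000)]
res = input_energy([math.sin(2 * math.pi * ti / T) for ti in t], dt, T)
off = input_energy([math.sin(2 * math.pi * ti / (3 * T)) for ti in t], dt, T)
```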
  8. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on p-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances were made in these areas for solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the p-norms. We also explore the question of how the parameter p affects the estimates of the required number of terms as a function of empirical risk.

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on the same sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to use the regularization of the original problem in an arbitrary norm. In contradistinction to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of γ-growth of the objective function. When γ=1, this condition is the condition of sharp minimum in convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
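The two approaches can be contrasted on a toy quadratic risk (a schematic illustration; the objective, step size, and sample sizes are invented, not taken from the paper):

```python
import random

def online_sgd(sample, steps, lr):
    """Stochastic Approximation (online): one fresh sample per step."""
    x = 0.0
    for _ in range(steps):
        xi = sample()
        x -= lr * (x - xi)          # gradient of 0.5 * (x - xi)**2
    return x

def offline_erm(draws):
    """Sample Average Approximation (offline): minimize the empirical
    risk (1/n) * sum 0.5*(x - xi)**2, whose minimizer is the mean."""
    return sum(draws) / len(draws)

# Toy risk E[0.5 * (x - xi)**2] with xi ~ N(3, 1); true minimizer x* = 3.
random.seed(0)
sample = lambda: random.gauss(3.0, 1.0)
x_online = online_sgd(sample, steps=5000, lr=0.01)
x_offline = offline_erm([sample() for _ in range(5000)])
```

Both estimates approach the true minimizer as the number of samples grows; the paper's question is precisely how large that number must be, and how it depends on the geometry (here Euclidean, there an arbitrary p-norm ball).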

  9. Lubashevsky I.A., Lubashevskiy V.I.
    Dynamical trap model for stimulus – response dynamics of human control
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 79-87

    We present a novel model for the dynamical trap of the stimulus – response type that mimics human control over dynamic systems when the bounded capacity of human cognition is a crucial factor. Our focus lies on scenarios where the subject modulates a control variable in response to a certain stimulus. In this context, the bounded capacity of human cognition manifests in the uncertainty of stimulus perception and the subsequent actions of the subject. The model suggests that when the stimulus intensity falls below the (blurred) threshold of stimulus perception, the subject suspends the control and maintains the control variable near zero with accuracy determined by the control uncertainty. As the stimulus intensity grows above the perception uncertainty and becomes accessible to human cognition, the subject activates control. Consequently, the system dynamics can be conceptualized as an alternating sequence of passive and active modes of control with probabilistic transitions between them. Moreover, these transitions are expected to display hysteresis due to decision-making inertia.

    Generally, the passive and active modes of human control are governed by different mechanisms, posing challenges in developing efficient algorithms for their description and numerical simulation. The proposed model overcomes this problem by introducing the dynamical trap of the stimulus-response type, which has a complex structure. The dynamical trap region includes two subregions: the stagnation region and the hysteresis region. The model is based on the formalism of stochastic differential equations, capturing both probabilistic transitions between control suspension and activation as well as the internal dynamics of these modes within a unified framework. It reproduces the expected properties in control suspension and activation, probabilistic transitions between them, and hysteresis near the perception threshold. Additionally, in a limiting case, the model demonstrates the capability of mimicking a similar subject’s behavior when (1) the active mode represents an open-loop implementation of locally planned actions and (2) the control activation occurs only when the stimulus intensity grows substantially and the risk of the subject losing the control over the system dynamics becomes essential.
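A minimal sketch of the passive/active switching with hysteresis, assuming sharp thresholds instead of the paper's blurred ones and a simple Euler discretization of the noise (all parameter names are invented for illustration):

```python
import math, random

def simulate_control(stim, theta_on, theta_off, noise, dt, seed=0):
    """Toy stimulus-response control with a hysteresis trap: control
    activates when |stimulus| exceeds theta_on and is suspended only
    when it falls below theta_off < theta_on."""
    rng = random.Random(seed)
    active, u, trace = False, 0.0, []
    for s in stim:
        if not active and abs(s) > theta_on:
            active = True              # perception threshold crossed
        elif active and abs(s) < theta_off:
            active = False             # decision inertia: lower threshold
        drift = -s if active else -u   # active: counteract the stimulus;
                                       # passive: hold u near zero
        u += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        trace.append((active, u))
    return trace

# Ramp the stimulus up and back down: activation and suspension occur
# at different stimulus levels, which is the hysteresis loop.
stim = [i / 100 for i in range(100)] + [(100 - i) / 100 for i in range(100)]
trace = simulate_control(stim, theta_on=0.5, theta_off=0.2,
                         noise=0.0, dt=0.01)
```

With nonzero `noise`, the sharp switches become probabilistic transitions, which is the regime the paper's stochastic differential equations describe within a single framework.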

  10. Khan S.A., Shulepina S., Shulepin D., Lukmanov R.A.
    Review of algorithmic solutions for deployment of neural networks on lite devices
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1601-1619

    In today’s technology-driven world, lite devices like Internet of Things (IoT) devices and microcontrollers (MCUs) are becoming increasingly common. These devices are more energy-efficient and affordable, often with reduced capabilities compared to the standard versions, such as very limited memory and processing power for typical machine learning models. However, modern machine learning models can have millions of parameters, resulting in a large memory footprint. This complexity not only makes it difficult to deploy these large models on resource-constrained devices but also increases the risk of latency and inefficiency in processing, which is crucial in cases where real-time responses are required, such as autonomous driving and medical diagnostics. In recent years, neural networks have seen significant advancements in model optimization techniques that help deployment and inference on these small devices. This narrative review offers a thorough examination of the progression and latest developments in neural network optimization, focusing on key areas such as quantization, pruning, knowledge distillation, and neural architecture search. It examines how these algorithmic solutions have progressed and how new approaches have improved upon existing techniques, making neural networks more efficient. This review is designed for machine learning researchers, practitioners, and engineers who may be unfamiliar with these methods but wish to explore the available techniques. It highlights ongoing research in optimizing networks for achieving better performance, lowering energy consumption, and enabling faster training times, all of which play an important role in the continued scalability of neural networks. Additionally, it identifies gaps in current research and provides a foundation for future studies, aiming to enhance the applicability and effectiveness of existing optimization strategies.
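As a concrete example of one surveyed technique, a minimal post-training int8 quantization sketch (uniform symmetric scaling with a single scale; real frameworks use per-channel scales and calibration data):

```python
def quantize_int8(weights):
    """Map float weights to int8 with one shared scale factor."""
    scale = max(abs(x) for x in weights) / 127 or 1.0  # guard all-zero case
    q = [max(-128, min(127, round(x / scale))) for x in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

w = [0.8, -0.31, 0.05, -1.27]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage cuts weight memory 4x vs float32, at the cost of a
# rounding error of at most half the scale per weight.
```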


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

