Search results for 'dependability':
Articles found: 308
  1. Ovcharenko E.A., Klyshnikov K.U., Savrasov G.V., Nyshtaev D.V., Glushkova T.V.
    The choosing of optimal cell parameters of transcatheter aortic valve prosthesis
    Computer Research and Modeling, 2014, v. 6, no. 6, pp. 943-954

    This paper presents an analysis of the dependences between the basic cell geometry parameters of the frame and its function via finite element analysis. Simplified models of the frame cell with varied strut width, thickness, and number per circumference were studied to evaluate radial forces, maximum stress and strain, permanent residual strain, and pinching load forces. The outcomes of this study may help in the development of new artificial heart valves and in the analysis of existing in-clinical TAVI prostheses.
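    A parametric study of this kind can be sketched as follows. This is only an illustration: the abstract's results come from full finite element analysis, whereas here each strut is treated as a simple beam whose bending stiffness scales with its second moment of area, I = w·t³/12, and all dimension values are hypothetical.

```python
# Hypothetical parameter sweep over strut width and thickness.
# The beam-bending scaling law below is an assumption of this sketch,
# not the finite-element model used in the paper.

def strut_relative_stiffness(width_mm: float, thickness_mm: float) -> float:
    """Relative bending stiffness of a rectangular strut cross-section."""
    return width_mm * thickness_mm ** 3 / 12.0

# Sweep strut width and thickness, as in the parametric study.
sweep = [
    (w, t, strut_relative_stiffness(w, t))
    for w in (0.30, 0.40, 0.50)      # strut width, mm (hypothetical values)
    for t in (0.20, 0.25, 0.30)      # strut thickness, mm (hypothetical values)
]

stiffest = max(sweep, key=lambda row: row[2])
print(stiffest[:2])   # the widest, thickest strut dominates under this scaling
```

Under this scaling, thickness matters cubically while width enters linearly, which is why such sweeps typically vary both.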

    Views (last year): 1. Citations: 1 (RSCI).
  2. Burlakov E.A.
    Relation between performance of organization and its structure during sudden and smoldering crises
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 685-706

    The article describes a mathematical model that simulates the performance of a hierarchical organization during the early stage of a crisis. A distinguishing feature of this stage is the presence of so-called early warning signals containing information about the approaching event. Employees are capable of catching the early warnings and of preparing the organization for the crisis based on the signals' meaning. The efficiency of the preparation depends on both the parameters of the organization and the parameters of the crisis. The proposed agent-based simulation model is implemented in the Java programming language and is used for conducting experiments via the Monte Carlo method. The goal of the experiments is to compare how centralized and decentralized organizational structures perform during sudden and smoldering crises. By centralized organizations we mean structures with a high number of hierarchy levels and a low number of direct reports per manager, while decentralized organizations are structures with a low number of hierarchy levels and a high number of direct reports per manager. Sudden crises are distinguished by a short early stage and a low number of warning signals, while smoldering crises have a long-lasting early stage and a high number of warning signals that do not necessarily contain important information. The efficiency of organizational performance during the early stage of a crisis is measured by two parameters: the percentage of early warnings acted upon in order to prepare the organization for the crisis, and the time spent by the top manager on working with early warnings. As a result, we show that during the early stage of smoldering crises centralized organizations process signals more efficiently than decentralized ones, while decentralized organizations handle early warning signals more efficiently during the early stage of sudden crises. However, the workload of top managers during sudden crises is higher in decentralized organizations, and during smoldering crises it is higher in centralized organizations. Thus, neither of the two classes of organizational structures is more efficient by both parameters simultaneously. Finally, we conduct a sensitivity analysis to verify the obtained results.
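    The kind of Monte Carlo comparison described here can be sketched minimally: each early warning must be escalated through every hierarchy level, and at each level it is forwarded with a fixed probability. The forwarding probability, structure sizes, and signal counts below are illustrative assumptions, not parameters from the article, which models the organizations in far more detail.

```python
# Minimal Monte Carlo sketch: a signal is "processed" only if it is
# forwarded at every hierarchy level.  All numbers are hypothetical.
import random

def fraction_processed(levels: int, n_signals: int, p_forward: float,
                       trials: int = 2000, seed: int = 42) -> float:
    """Share of early warnings that reach the top manager."""
    rng = random.Random(seed)
    processed = 0
    for _ in range(trials):
        for _ in range(n_signals):
            # The signal survives only if forwarded at each of the levels.
            if all(rng.random() < p_forward for _ in range(levels)):
                processed += 1
    return processed / (trials * n_signals)

centralized = fraction_processed(levels=5, n_signals=3, p_forward=0.9)    # tall hierarchy
decentralized = fraction_processed(levels=2, n_signals=3, p_forward=0.9)  # flat hierarchy
print(centralized, decentralized)
```

Even this toy model shows one mechanism at work in such comparisons: with per-level signal loss, the taller hierarchy processes a smaller share of warnings (roughly 0.9⁵ versus 0.9²).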

    Views (last year): 2. Citations: 2 (RSCI).
  3. Kutovskiy N.A., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V.
    Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963

    A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and expedite the receipt of new physically meaningful results due to more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment, it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). As a test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen. The impact of the above factors of the computing environment on the computing speed of the test problem is evaluated by simulation with the program SyMSim developed in LIT.

    The simulation of the LDJ calculations in the cloud environment enables users to find, without a series of test runs, the optimal number of CPUs for a given type of network before running the calculations in a real computing environment. This can save significant time and computing resources. The main parameters of the model were obtained from a computational experiment conducted on a special cloud-based testbed. The experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on network bandwidth. Comparison of the empirical results with the results of simulation showed that the model correctly reproduces parallel calculations performed using MPI technology. It also confirms our recommendation: for fast calculations of this type, both the number of CPUs and the network throughput should be increased at the same time. The simulation results also make it possible to derive an empirical analytical formula expressing the dependence of calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its variables.
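    The empirical dependence described above, computation time falling as 1/p with a fixed overhead for a given network, can be sketched as t(p) = a/p + b and fitted by least squares. The coefficients and timings below are synthetic illustrations, not measurements from the JINR testbed, and the two-parameter form is an assumption of this sketch.

```python
# Least-squares fit of the model t(p) = a/p + b over (p, t) pairs.
# The fit is ordinary linear regression in the variable x = 1/p.

def fit_time_model(measurements):
    """Fit t(p) = a/p + b; returns (a, b)."""
    xs = [1.0 / p for p, _ in measurements]
    ts = [t for _, t in measurements]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_t = sum(ts) / n
    a = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, ts)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_t - a * mean_x
    return a, b

# Synthetic timings generated from a = 120 s, b = 4 s (hypothetical values).
data = [(p, 120.0 / p + 4.0) for p in (1, 2, 4, 8, 16, 32)]
a, b = fit_time_model(data)
print(round(a, 1), round(b, 1))
```

Since the synthetic data follow the model exactly, the fit recovers a = 120 and b = 4; on real timings the residuals would indicate how well the two-parameter form holds.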

    Views (last year): 10. Citations: 1 (RSCI).
  4. Madera A.G.
    Modeling thermal feedback effect on thermal processes in electronic systems
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 483-494

    The article is devoted to the effect of thermal feedback, which occurs during the operation of integrated circuits and the electronic systems built with them. Thermal feedback arises because the power consumed by a functioning microchip heats it, and, due to the significant dependence of its electrical parameters on temperature, an interactive coupling develops between its electrical and thermal processes. The effect of thermal feedback changes both the electrical parameters and the temperature levels in microcircuits. Positive thermal feedback is an undesirable phenomenon, because it drives the electrical parameters of the microcircuits beyond permissible values, reduces reliability, and in some cases causes burnout. Negative thermal feedback manifests itself in stabilizing the electrical and thermal regimes at lower temperature levels. Therefore, when designing microcircuits and electronic systems based on them, negative feedback should be achieved. In this paper, we propose a method for modeling thermal modes in electronic systems that takes the effect of thermal feedback into account. The method is based on introducing into the thermal model of the electronic system new circuit elements that depend nonlinearly on temperature, their number being equal to the number of microcircuits in the system. This approach makes it possible to apply matrix-topological equations of thermal processes to the thermal model with the new circuit elements and to incorporate them into existing thermal design software packages. Modeling of a thermal process in a real electronic system is demonstrated, with thermal feedback taken into account, on the example of a microcircuit installed on a printed circuit board. It is shown that, to model the electrical and thermal processes of microcircuits and electronic systems adequately, the effects of thermal feedback must be taken into account in order to avoid design errors and create competitive electronic systems.
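    The feedback loop described in this abstract can be sketched as a fixed-point problem: temperature depends on dissipated power through a thermal resistance, T = T_amb + R_th·P(T), while power depends on temperature. The linear power-temperature law and all numeric values below are illustrative assumptions, not the article's matrix-topological model.

```python
# Fixed-point iteration for the self-consistent chip temperature.
# P(T) = P0 * (1 + k * (T - T_amb)) is an assumed linear law:
# k > 0 gives positive thermal feedback, k < 0 negative.

def steady_temperature(p0_w: float, k_per_c: float, t_amb: float = 25.0,
                       r_th: float = 10.0, iters: int = 200) -> float:
    """Iterate T = T_amb + R_th * P(T) to convergence (if stable)."""
    t = t_amb
    for _ in range(iters):
        power = p0_w * (1.0 + k_per_c * (t - t_amb))  # temperature-dependent power
        t = t_amb + r_th * power
    return t

no_feedback = steady_temperature(p0_w=2.0, k_per_c=0.0)     # plain heating
positive_fb = steady_temperature(p0_w=2.0, k_per_c=0.02)    # power grows with T
negative_fb = steady_temperature(p0_w=2.0, k_per_c=-0.02)   # power drops with T
print(no_feedback, positive_fb, negative_fb)
```

The sketch reproduces the qualitative behavior the abstract describes: positive feedback raises the steady temperature above the no-feedback value, negative feedback stabilizes it below. (The iteration converges here because |R_th·P0·k| < 1; with stronger positive feedback it would diverge, the analogue of thermal runaway.)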

    Views (last year): 22. Citations: 3 (RSCI).
  5. Frisman Y.Y., Kulakov M.P., Revutskaya O.L., Zhdanova O.L., Neverova G.P.
    The key approaches and review of current researches on dynamics of structured and interacting populations
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 119-151

    The review and systematization of current papers on the mathematical modeling of population dynamics allow us to conclude that the key interests of authors lie in two or three main research lines related to the description and analysis of the dynamics of both local structured populations and systems of interacting homogeneous populations as an ecological community in physical space. The paper reviews and systematizes the scientific studies and results obtained to date within the framework of the dynamics of structured and interacting populations. It traces the progress of scientific ideas in modeling local population size dynamics toward more complex models: from the classical Malthus model to modern models with various factors affecting population dynamics. In particular, the dynamic effects are considered that arise from taking into account the environmental capacity, density-dependent regulation, the Allee effect, and the complexity of age and stage structures. Particular attention is paid to the multistability of population dynamics. In addition, studies analyzing the effect of harvesting on structured population dynamics and the appearance of the hydra effect are presented. Studies dealing with the appearance and development of spatial dissipative structures in both spatially separated populations and communities with migration are discussed. Here, special attention is also paid to the frequency and phase multistability of population dynamics, as well as to the appearance of spatial clusters. In the systematization and review of articles on modeling the dynamics of interacting populations, the focus is on the “prey–predator” community. The key ideas and approaches used in current mathematical biology to model a “prey–predator” system with community structure and harvesting are presented. The problems of the appearance and stability of mosaic structures in communities distributed spatially and coupled by migration are also briefly discussed.
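    The progression from the Malthus model to density-dependent maps mentioned in this review can be illustrated with the classical Ricker model, x(n+1) = x(n)·exp(r·(1 - x(n)/K)): below the first period-doubling point the population settles at the carrying capacity, above it a 2-cycle appears. The parameter values are illustrative only.

```python
# Iterating the Ricker map in two dynamical regimes.
# The fixed point x = K is stable for 0 < r < 2; a stable 2-cycle
# exists just beyond r = 2.
import math

def ricker_orbit(x0: float, r: float, k: float, steps: int) -> list:
    """Return the orbit of the Ricker map x -> x * exp(r * (1 - x / k))."""
    orbit = [x0]
    for _ in range(steps):
        x = orbit[-1]
        orbit.append(x * math.exp(r * (1.0 - x / k)))
    return orbit

stable = ricker_orbit(x0=0.1, r=1.5, k=1.0, steps=500)    # settles to K
cycling = ricker_orbit(x0=0.1, r=2.3, k=1.0, steps=500)   # 2-cycle regime
print(round(stable[-1], 3))
```

Richer effects named in the review (Allee effect, harvesting, the hydra effect, multistability) are obtained by adding further terms to maps of this kind.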

    Views (last year): 40. Citations: 2 (RSCI).
  6. In this paper, a fluid flow between two closely located rough surfaces, depending on their position and on discontinuities in contact areas, is investigated. The region between the surfaces is considered as a porous layer with variable permeability, depending on the roughness and closure of the surfaces. To obtain the closure-permeability function, the flow over a small region of the surfaces (100 $\mu$m) is modeled, with the surface roughness profile created by the fractal Weierstrass – Mandelbrot function. The 3D domain for this calculation fills the space between the valleys and peaks of two surfaces located at some distance from each other. As the surfaces get closer, contacts between roughness peaks appear, leading to local discontinuities in the domain. For the assumed surface closure and boundary conditions, the mass flow and pressure drop are calculated, and on this basis the permeability of the equivalent porous layer is evaluated. The permeability values obtained for a set of surface closures were approximated by a polynomial. This allows us to calculate the actual flow parameters in a thin layer of variable thickness whose length is much larger than the scale of the surface roughness. As an example of applying this technique, the flow in the gap between a billet and a conical die is modeled in a 3D formulation; in this problem, the permeability of the equivalent porous layer is calculated for a linearly decreasing gap.
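    The closure-permeability approximation described above can be sketched as follows. Here a quadratic Lagrange interpolant stands in for the article's fitted polynomial, and the (closure, permeability) sample points are hypothetical, not the values computed from the rough-surface simulations.

```python
# Quadratic Lagrange interpolant through three tabulated points,
# used as a stand-in for a polynomial closure-permeability fit.

def quadratic_interpolant(points):
    """Return p(x) passing through three (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

# Hypothetical samples: closure fraction -> equivalent-layer permeability.
samples = [(0.0, 12.0), (0.5, 5.0), (0.9, 0.8)]
permeability = quadratic_interpolant(samples)
print(round(permeability(0.7), 3))
```

Once such a polynomial is available, the macro-scale flow solver can evaluate it pointwise to assign a local permeability in the thin layer of variable thickness.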

  7. Stepin Y.P., Leonov D.G., Papilina T.M., Stepankina O.A.
    System modeling, risks evaluation and optimization of a distributed computer system
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359

    The article deals with the operational reliability of a distributed system. The system core is an open integration platform that provides interaction of varied software for modeling gas transportation. Some of the components provide access through thin clients using the cloud technology “software as a service”. Mathematical models of operation, transmission, and computing ensure the operation of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e. a message broker). Together they form interacting Markov random processes. The interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements in other groups.
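    The stationary regime described above reduces, for a continuous-time Markov chain with generator matrix Q, to solving pi·Q = 0 with sum(pi) = 1. The 3-state generator below (say, "working / waiting / failed" for one element type) is a hypothetical example, not the article's model, which couples several such chains.

```python
# Stationary distribution of a small continuous-time Markov chain:
# solve pi Q = 0, sum(pi) = 1, via Gaussian elimination with pivoting.

def stationary_distribution(q):
    """Return pi for generator matrix q (rows sum to zero)."""
    n = len(q)
    # Linear system: transpose of Q, with one equation replaced by sum(pi) = 1.
    a = [[q[j][i] for j in range(n)] for i in range(n)]
    a[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):                      # forward elimination with pivoting
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = sum(a[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / a[r][r]
    return pi

q = [[-0.5, 0.4, 0.1],     # rates out of "working"
     [0.7, -0.8, 0.1],     # rates out of "waiting"
     [0.9, 0.0, -0.9]]     # rates out of "failed" (repair)
pi = stationary_distribution(q)
print([round(x, 3) for x in pi])
```

The resulting stationary probabilities (here 0.6, 0.3, 0.1) are what feed the average numbers of objects per state in the Chapman–Kolmogorov formulation, and in turn the dispersion-based risk measures.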

    The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of an estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the functioning risks of the whole system. In particular, for a thin client, the following are calculated: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.

    Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on a selected set of compromise criteria.

  8. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586

    We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems, presented as a series of tasks whose complexity is determined by the volume of initial data. The tasks require calculations in an associative or non-associative system of operations. From the way the solution time changes with the number of required operations, we can conclude whether the subject solves the problems purely sequentially or engages additional brain resources to work in parallel mode. In a previously published experimental work, a person solving an associative problem recognized color images of meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type contains associative calculations and admits a parallel solution algorithm. The other is a control type containing problems whose calculations are not associative, so parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative: a parallel strategy significantly speeds up the solution with relatively small additional resources. As a control series (to separate parallel work from acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient. Therefore, a person is almost unable to switch to a parallel algorithm for this problem, and processing of the input information can be accelerated only by increasing speed. Comparing the dependence of solution time on the volume of source data for the two types of problems allows us to identify four strategies of adaptation to increasing problem complexity: uniform sequential, accelerated sequential, parallel computing (where possible), or an undefined (for this method) strategy. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke subject associations: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon, based on the appearance of a second set of initial data that arises in a person as a result of recognizing the depicted objects.
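    Why the "rock, paper, scissors" comparison resists parallel regrouping can be shown directly: the winner-of operation is not associative, so a left-to-right reduction cannot be split into independent halves. The mod-3 encoding (0 = rock, 1 = paper, 2 = scissors) is an assumption of this sketch, chosen to match the "cyclic arithmetic" description.

```python
# The winner-of-two operation in the rock-paper-scissors cycle:
# x beats (x - 1) mod 3, i.e. paper > rock, scissors > paper, rock > scissors.

def winner(a: int, b: int) -> int:
    """Return the winning symbol of the pair (ties return b = a)."""
    return a if (a - b) % 3 == 1 else b

ROCK, PAPER, SCISSORS = 0, 1, 2

left = winner(winner(ROCK, PAPER), SCISSORS)   # (rock vs paper) vs scissors
right = winner(ROCK, winner(PAPER, SCISSORS))  # rock vs (paper vs scissors)
print(left, right)   # different winners: the operation is non-associative
```

Because regrouping changes the answer, a tree-shaped (parallel) reduction is invalid here, whereas for an associative task such as object search any grouping of subresults is legitimate, which is exactly the contrast the experiment exploits.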

  9. Sitnikov S.S., Tcheremissine F.G., Sazykina T.A.
    Simulation of the initial stage of a two-component rarefied gas mixture outflow through a thin slit into vacuum
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 747-759

    The paper considers the process of flow formation in the outflow of a binary gas mixture through a thin slit into vacuum. An approach to modeling rarefied gas mixture flows in the transient regime is proposed, based on the direct solution of the Boltzmann kinetic equation, with the conservative projection method used to calculate the collision integrals. Calculation formulas are provided, and the calculation procedure is described in detail for the flow of a binary gas mixture. The Lennard–Jones potential is used as the interaction potential of molecules. A software modeling environment has been developed that makes it possible to study gas mixture flows in the transient regime on systems of cluster architecture. Due to code parallelization, an acceleration of calculations by a factor of 50–100 was obtained. Numerical simulation of the two-dimensional outflow of a binary argon-neon mixture from a vessel into vacuum through a thin slit is carried out for various values of the Knudsen number. Graphs of the time dependence of the output flow of the mixture components during flow establishment are obtained. Non-stationary regions of strong separation of the mixture components, in which the ratio of molecular densities reaches 10 or more, were discovered. This effect may find application in the problem of gas mixture separation.
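    The interaction model named in this abstract is the standard Lennard–Jones pair potential, U(r) = 4ε((σ/r)¹² − (σ/r)⁶). The reduced units below (ε = σ = 1) are placeholders, not the argon-neon parameters used in the article.

```python
# Lennard-Jones pair potential in reduced units.
# The potential has its minimum of depth -eps at r = 2**(1/6) * sigma.

def lennard_jones(r: float, eps: float = 1.0, sigma: float = 1.0) -> float:
    """Pair interaction energy at separation r."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

r_min = 2.0 ** (1.0 / 6.0)              # analytic minimum of the potential
print(round(lennard_jones(r_min), 6))   # depth -eps at the minimum
```

In a kinetic-equation solver this potential enters through the collision cross-sections; the repulsive core (U > 0 for r < σ) and shallow attractive well are what distinguish it from the hard-sphere model.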

  10. Potapov I.I., Reshetnikova O.V.
    The two geometric parameters influence study on the hydrostatic problem solution accuracy by the SPH method
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 979-992

    Two significant geometric parameters are proposed that affect the interpolation of physical quantities in the smoothed particle hydrodynamics (SPH) method: the smoothing coefficient, which relates the particle size and the smoothing radius, and the volume coefficient, which correctly determines the particle mass for a given particle distribution in the medium.

    The paper proposes a technique for assessing the influence of these parameters on the accuracy of SPH interpolations when solving the hydrostatic problem. Analytical functions of the relative error for the density and the pressure gradient in the medium are introduced for the accuracy estimate. The relative error functions depend on the smoothing coefficient and the volume coefficient. Specifying a particular interpolation form in the SPH method makes it possible to convert the differential form of the relative error functions into an algebraic polynomial form. The root of this polynomial gives the smoothing coefficient values that provide the minimum interpolation error for a given volume coefficient.

    In this work, the relative error functions for the density and the pressure gradient were derived and analyzed on a sample of popular kernels with different smoothing radii. No common smoothing coefficient value exists for all the considered kernels that provides the minimum error for both SPH interpolations. Kernels with different smoothing radii are identified that provide the smallest SPH interpolation errors when solving the hydrostatic problem. Likewise, certain kernels with different smoothing radii are identified that do not allow correct interpolation when solving the hydrostatic problem by the SPH method.
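    The dependence of SPH interpolation error on the smoothing coefficient can be seen in a minimal 1D sketch: a uniform particle lattice, the cubic-spline kernel, and the relative density error at the lattice center as a function of h/dx. The lattice, target density, and kernel choice are illustrative; the article analyzes this error analytically for several kernels.

```python
# 1D SPH density summation rho_i = sum_j m_j W(|x_i - x_j|, h)
# on a uniform lattice, using the cubic-spline kernel (support 2h).

def cubic_spline_1d(r: float, h: float) -> float:
    """Cubic-spline kernel W(r, h) in 1D, normalized so its integral is 1."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density_error(smoothing_coeff: float, dx: float = 0.1, rho0: float = 1.0) -> float:
    """Relative SPH density error at the center of a uniform 1D lattice."""
    h = smoothing_coeff * dx
    mass = rho0 * dx                       # particle mass from the particle volume
    xs = [i * dx for i in range(-50, 51)]  # 1D lattice of particle positions
    rho = sum(mass * cubic_spline_1d(x, h) for x in xs)  # estimate at x = 0
    return abs(rho - rho0) / rho0

print(density_error(1.3) < density_error(0.7))
```

Even this toy setup shows the effect the paper quantifies: for the cubic spline, a smoothing coefficient near 1.3 interpolates the uniform density far more accurately than one near 0.7, and the optimal value is kernel-dependent.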


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

