All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Numerical simulation of adhesive technology application in tooth root canal on restoration properties
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1069-1079. The aim of the present study is to show how engineering approaches and ideas work in clinical restorative dentistry, in particular, how they affect the restoration design and the durability of restored endodontically treated teeth. For these purposes, a 3D computational model of a first incisor, including the elements of hard tooth tissues, the periodontal ligament, the surrounding bone structures and the restoration itself, has been constructed and numerically simulated for a variety of restoration designs under normal chewing loads. The effect of different adhesive technologies in the root canal on the functional characteristics of a restored tooth has been investigated. The designed 3D model could be applied in preclinical diagnostics to determine the areas of possible fracture of a restored tooth and to predict its longevity.
- Mathematical modelling of the non-Newtonian blood flow in the aortic arc
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 259-269. The purpose of the research was to develop a mathematical model of pulsating blood flow in a part of the aorta with its branches. Since the deformation of this most rigid part of the aorta is small during the passage of the pulse wave, the blood vessels were treated as non-deformable curved cylinders. The article describes the internal structure of blood and some internal structural effects. This analysis shows that blood, which is essentially a suspension, can only be regarded as a non-Newtonian fluid. In addition, blood can be considered a liquid only in vessels whose diameter is much larger than the characteristic size of blood cells and their aggregates. As the non-Newtonian fluid, a viscous liquid with a power-law relationship between stress and shear rate was chosen. This law can describe the behaviour not only of liquids but also of dispersions. When setting the boundary condition at the aortic inlet, reflecting the pulsating nature of the blood flow, it was decided not to restrict the specification to the total blood flow rate, which makes no assumption about the spatial velocity distribution over a cross section. Instead, it was proposed to model the envelope surface of this spatial distribution by a part of a paraboloid of revolution with a fixed base radius and a height varying in time from zero to the maximum velocity value. Special attention was paid to the interaction of blood with the vessel walls. Given the nature of this interaction, the so-called semi-slip condition was formulated as the boundary condition. At the outer ends of the aorta and its branches, pressure values were prescribed. To perform the calculations, a tetrahedral computational mesh of the geometric model of the aorta with branches was built; it contains 9810 cells. The calculations were performed using the ABAQUS software package, which also has powerful tools for creating the model geometry and visualizing the results. The result is a distribution of velocities and pressure at each time step. In the areas of vessel branching, the temporary presence of eddies and reverse flows was discovered: they appeared 0.47 s after the beginning of the pulse cycle and disappeared after 0.14 s.
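For reference, the constitutive law mentioned above is the power-law (Ostwald-de Waele) model; the sketch below uses standard notation (the symbols K, n, R and V(t) are generic, not taken from the paper):

```latex
% Power-law (Ostwald-de Waele) fluid: stress vs. shear rate
\tau = K\,\dot{\gamma}^{\,n}, \qquad
\mu_{\mathrm{eff}}(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1}
\quad (n < 1 \text{ for shear-thinning blood}).
% Inlet velocity envelope: paraboloid of revolution with base radius R
v(r,t) = V(t)\left(1 - \frac{r^{2}}{R^{2}}\right), \qquad 0 \le V(t) \le V_{\max}.
```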
- Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963. A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and expedite the receipt of new physically meaningful results due to a more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, communication network throughput, etc.). As a test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen. The problem of evaluating the impact of the abovementioned hardware factors on the computation speed of the test problem is solved by simulation with the SyMSim program developed at LIT.
The simulation of the LDJ calculations in the cloud environment enables users, without running a series of tests, to find the optimal number of CPUs for a given network type before running the calculations in a real computing environment. This can save significant time and computing resources. The main parameters of the model were obtained from the results of a computational experiment conducted on a special cloud-based testbed. The computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors, but depends significantly on network bandwidth. Comparison of the empirical results with the simulation results showed that the simulation model correctly reproduces parallel calculations performed using the MPI technology. It also confirms our recommendation: for fast calculations of this type, both the number of CPUs and the network throughput should be increased at the same time. The simulation results also make it possible to derive an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies, but requires additional tests to determine the values of its coefficients.
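The abstract does not reproduce the empirical formula itself, so the following is only an illustrative sketch consistent with the behaviour described (compute time inversely proportional to the number of processors, plus a network-dependent overhead); the coefficients a and b are hypothetical and would have to be fitted to the testbed measurements:

```latex
T(p) \approx \frac{a}{p} + b(\beta),
```

where p is the number of CPUs, a aggregates the serial computational cost, and b(beta) is a communication overhead that decreases with network throughput beta.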
- Simulation equatorial plasma bubbles started from plasma clouds
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 463-476. Experimental, theoretical and numerical investigations of equatorial spread F, equatorial plasma bubbles (EPBs), plasma depletion shells and plasma clouds are continued in a variety of new articles. Nonlinear growth, bifurcation, pinching, and atomic and molecular ion dynamics are considered in these articles. However, the authors of this article believe that not all reported parameters of EPB development are correct; for example, EPB bifurcation is highly questionable.
The maximum speed inside EPBs and the EPB development time are defined and studied. EPBs start from one, two or three zones of increased density (initial plasma clouds). The development mechanism of an EPB is the Rayleigh-Taylor instability (RTI). The initial stage of EPB development falls within the time interval favorable for EPBs (when the linear growth increment is positive) and lasts 3000–7000 s for the Earth's equatorial ionosphere.
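For orientation, the linear growth increment referred to here can be written, in its simplest collisional form for the equatorial F region (a textbook expression, not quoted from the article), as

```latex
\gamma \simeq \frac{g}{\nu_{in} L_{n}}, \qquad
L_{n} = \left(\frac{1}{n}\frac{\partial n}{\partial h}\right)^{-1},
```

where g is the gravitational acceleration, nu_in the ion-neutral collision frequency and L_n the plasma density gradient scale length; EPB growth is possible only where gamma > 0.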
Numerous computing experiments were conducted using the original two-dimensional mathematical and numerical model MI2, similar to the US standard model SAMI2. The MI2 model is described in detail. The results obtained can be used both in other theoretical works and for planning and carrying out field experiments on the generation of F-spread in the Earth's ionosphere.
Numerical simulations were carried out for geophysical conditions favorable for EPB development. The numerical studies confirmed that the development time of EPBs from initial irregularities with increased density is significantly longer than the development time from zones of lowered density. It is shown that the developed irregularities interact with each other strongly and nonlinearly even when the initial plasma clouds are far removed from each other. Moreover, this interaction is stronger than the interaction of EPBs starting from initial irregularities with decreased density. The results of the numerical experiments showed good agreement of the developed EPB parameters with experimental data and with the theoretical studies of other authors.
- Application of computational simulation techniques for designing swim-out release systems
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 597-606. The article describes the basic approaches to calculating the swim-out of a payload (objects of various purposes with their own propulsor) from an underwater carrier by the self-exit method using modern CFD technologies. It describes swim-out by the self-exit method and its advantages and disadvantages. It also contains the results of a mesh-convergence study of the finite-volume model against an accuracy-time criterion, and the results of comparing calculations with experiment (model validation). The models were validated using available data from the experimental determination of the thrust characteristics of a full-scale water-jet propulsor in a development pool. The thrust characteristics of the water-jet propulsor were calculated with the FlowVision 3.10 software package. Based on a comparison of the calculation results with the experimental conditions, the error of the water-jet propulsor computational model was determined to be no more than 5% over the range of advance coefficients realized during swim-out by the self-exit method. The obtained error of the thrust calculation is used to determine the limiting calculated values of the speed of separation of the object from the carrier (the minimum and maximum values). The considered problem is significant from the scientific point of view due to the features of the approach to modeling the water-jet propulsion system together with the motion of the separated object, and from the practical point of view due to the possibility of obtaining, with a high degree of reliability and already at the design stage, the swim-out parameters of objects released from seabed vehicles by the self-exit method, whose operating conditions assume movement in closed volumes.
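For readers outside propulsor hydrodynamics: the advance coefficient mentioned above has the conventional definition used for propellers and water-jet impellers (a standard definition in the field, not specific to this paper):

```latex
J = \frac{V_{a}}{n\,D},
```

where V_a is the speed of advance, n the impeller rotation rate and D the impeller diameter.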
- System modeling, risks evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359. The article deals with the operational reliability of a distributed system. The system core is an open integration platform that provides interaction between varied software packages for modeling gas transportation. Some of them provide access through thin clients based on the cloud technology "software as a service". Mathematical models of operation, data transmission and computing ensure the functioning of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e. a message broker). Together, they form interacting Markov random processes. The interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements in other groups.
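A minimal sketch of the stationary equations described above, with generic transition intensities lambda_jk (the notation is ours, not the paper's): for the average numbers m_k of elements in state k,

```latex
\frac{dm_{k}}{dt} = \sum_{j \neq k} \lambda_{jk}\,m_{j} - m_{k} \sum_{j \neq k} \lambda_{kj} = 0,
\qquad k = 1, \dots, K.
```

The interaction between groups enters through the dependence of the intensities lambda_jk on the average numbers of the other groups, which makes the stationary system nonlinear.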
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of whole-system functioning. In particular, the following are calculated for a thin client: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
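In this dispersion model, the risk attached to an estimated parameter X is formalized directly as its standard deviation:

```latex
R(X) = \sigma_{X} = \sqrt{\mathbb{E}\!\left[(X - \mathbb{E}X)^{2}\right]}.
```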
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
- Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586. We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems, presented as a series of tasks of growing complexity determined by the volume of initial data. The tasks require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of required operations, we can conclude whether the subject solves the problem in a purely sequential manner or engages additional brain resources to work in parallel mode. In a previously published experimental work, a person solving an associative problem recognized color pictures of meaningful images. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type involves associative calculations and allows a parallel solution algorithm. The other type is a control task whose calculations are not associative, so parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative, and a parallel strategy significantly speeds up the solution at a relatively small cost in additional resources. As the control series (to separate parallel work from mere acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game "rock, paper, scissors". In this problem, a parallel algorithm requires a large number of processors at a small efficiency coefficient, so a person is almost unable to switch to a parallel algorithm, and input information can be processed faster only by increasing the processing speed. Comparing the dependence of the solution time on the volume of source data for the two types of tasks allows us to identify four types of strategies for adapting to the increasing complexity of a task: uniform sequential, accelerated sequential, parallel computing (where possible), or a strategy undefined for this method. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke subject associations: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon, based on the appearance of a second set of initial data arising when a person recognizes the depicted objects.
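To make the control task concrete, here is a minimal Python sketch of the cyclic "rock, paper, scissors" comparison (the encoding and names are ours, not the authors'), showing why the comparison is non-associative and therefore defeats a parallel tree reduction:

```python
ROCK, PAPER, SCISSORS = 0, 1, 2

def beats(a: int, b: int) -> bool:
    # a beats b iff a - b == 1 (mod 3): paper > rock, scissors > paper, rock > scissors
    return (a - b) % 3 == 1

def duel(champion: int, challenger: int) -> int:
    # Binary "comparison": the challenger replaces the champion only on a win.
    return challenger if beats(challenger, champion) else champion

# Non-associativity: the grouping changes the result, so a parallel tree
# reduction (which assumes an associative operation) is invalid here and
# the winner must be found by a sequential left-to-right pass.
left = duel(duel(ROCK, PAPER), SCISSORS)   # (rock . paper) . scissors -> scissors
right = duel(ROCK, duel(PAPER, SCISSORS))  # rock . (paper . scissors) -> rock
assert left == SCISSORS and right == ROCK and left != right
```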
- The effect of nonlinear supratransmission in discrete structures: a review
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 599-617. This paper provides an overview of studies on nonlinear supratransmission and related phenomena. The effect consists in the transfer of energy at frequencies not supported by the systems under consideration. Supratransmission does not depend on the integrability of the system and is robust to damping and to various classes of boundary conditions. In addition, a nonlinear discrete medium, under certain general conditions imposed on its structure, can develop an instability under external periodic forcing. This instability is the generative process underlying nonlinear supratransmission. It is possible when the system supports nonlinear modes of various nature, in particular discrete breathers. Energy then penetrates into the system as soon as the amplitude of the external harmonic excitation exceeds the maximum amplitude of the static breather of the same frequency.
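A canonical setting in which this threshold appears is the boundary-driven discrete sine-Gordon chain; the rendering below is the standard setup, not an equation quoted from the review:

```latex
\ddot{u}_{n} = c^{2}\,(u_{n+1} - 2u_{n} + u_{n-1}) - \sin u_{n}, \qquad n \ge 1,
\qquad u_{0}(t) = A\cos(\Omega t), \quad \Omega < 1.
```

With the driving frequency Omega below the lower phonon band edge, energy floods the chain once the amplitude A exceeds a threshold A_s(Omega) set by the static breather of that frequency.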
The effect of nonlinear supratransmission is an important property of many discrete structures. A necessary condition for its existence is the discreteness and nonlinearity of the medium. Its manifestation in systems of various nature attests to its fundamental character and significance. This review considers the main works that address nonlinear supratransmission in various systems, mainly model ones.
Many research groups are studying this effect. First of all, these are models described by discrete equations, including the sine-Gordon and the discrete Schrödinger equations. At the same time, the effect is not exclusively a model phenomenon: it manifests itself in full-scale experiments in electrical circuits, in nonlinear chains of oscillators, and in metastable modular metastructures. Models are gradually becoming more complex, which leads to a deeper understanding of the phenomenon of supratransmission, and the transition to disordered structures and structures with elements of chaos reveals a more subtle manifestation of this effect. Numerical asymptotic approaches make it possible to study nonlinear supratransmission in complex nonintegrable systems. The growing complexity of oscillators of all kinds, both physical and electrical, is relevant for various real devices based on such systems, in particular in the field of nano-objects and energy transport through them via the considered effect. Such systems include molecular and crystalline clusters and nanodevices. The paper concludes with the main trends in research on nonlinear supratransmission.
- Superscale simulation of the magnetic states and reconstruction of the ordering types for nanodots arrays
Computer Research and Modeling, 2011, v. 3, no. 3, pp. 309-318. We consider two possible computational methods for interpreting experimental data obtained by magnetic force microscopy. These methods of macrospin distribution simulation and reconstruction can be used to study the magnetization reversal processes of nanodots in ordered 2D arrays. New approaches are proposed to the development of high-performance superscale algorithms, executed in parallel on supercomputer clusters, for solving the direct and inverse problems of modeling the magnetic states, ordering types and reversal processes of nanosystems with collective behavior. The simulation results are consistent with the experimental ones.
- JINR TIER-1-level computing system for the CMS experiment at LHC: status and perspectives
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 455-462. The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. A distributed data analysis system for processing and further analysis of CMS experimental data has been developed, and this model foresees the obligatory use of modern grid technologies. The CMS Computing Model makes use of a hierarchy of computing centers (Tiers). The Joint Institute for Nuclear Research (JINR) takes an active part in the CMS experiment. In order to provide a proper computing infrastructure for the CMS experiment at JINR and for the Russian institutes collaborating in CMS, a Tier-1 center for the CMS experiment is being constructed at JINR. The main tasks and services of the CMS Tier-1 at JINR are described, and the status and perspectives of the center are presented.