All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Parallel implementation of the grid-characteristic method in the case of explicit contact boundaries
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 667-678. We consider an application of the Message Passing Interface (MPI) technology for parallelization of the program code which solves the equations of linear elasticity theory. The solution of these equations describes the propagation of elastic waves in deformable rigid bodies. The solution of such a direct problem of seismic wave propagation is of interest in seismic exploration and geophysics. Our solver uses the grid-characteristic method to perform the simulations. We consider a technique to reduce the communication time between MPI processes during the simulation. This is important when modeling has to be carried out in complex problem formulations while maintaining high parallel efficiency even when thousands of processes are used. Efficient communication is extremely important when several computational grids with arbitrary geometry of contacts between them are used in the calculation. The complexity of this task increases if an independent distribution of the grid nodes between processes is allowed. In this paper, a generalized approach is developed for processing contact conditions in terms of reinterpolation of nodes from a given section of one grid to a certain area of the second grid. An efficient way of parallelization and establishing interprocess communications is proposed. For the example problems we provide wave fields and seismograms for both 2D and 3D formulations. It is shown that the algorithm can be implemented both on Cartesian and on structured (curvilinear) computational grids. The considered problem formulations demonstrate the possibility of carrying out calculations that take into account surface topography and the curvilinear geometry of contacts between geological layers. Application of curvilinear grids allows one to obtain more accurate results than calculations using only Cartesian grids. The resulting parallelization efficiency is almost 100% up to 4096 processes (we used 128 processes as a baseline to compute efficiency). With more than 4096 processes, an expected gradual decrease in efficiency is observed. The rate of decline is moderate: at 16384 processes the parallelization efficiency remains at 80%.
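As a rough illustration of the interprocess exchange described above, the sketch below shows one possible way to ship contact-node values between independently distributed grids with mpi4py and to reinterpolate them on the receiving side. The function names, the send_map and interp_weights structures, and the symmetric buffer sizes are assumptions made for brevity; this is not the authors' code.

```python
# A minimal sketch (not the authors' implementation) of a contact-node exchange:
# each process packs the values a neighboring process needs, exchanges them with
# non-blocking MPI calls, and reinterpolates the received values onto its own
# contact nodes. Buffer sizes are assumed symmetric for brevity.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def pack_contact_values(grid, send_map):
    # send_map[nbr] lists the local node indices whose values neighbor nbr needs
    return {nbr: np.ascontiguousarray(grid[idx]) for nbr, idx in send_map.items()}

def exchange(send_buffers):
    reqs, recv = [], {}
    for nbr, buf in send_buffers.items():
        recv[nbr] = np.empty_like(buf)
        reqs.append(comm.Isend(buf, dest=nbr, tag=0))
        reqs.append(comm.Irecv(recv[nbr], source=nbr, tag=0))
    MPI.Request.Waitall(reqs)
    return recv

def apply_contact(grid, recv, interp_weights):
    # interp_weights[nbr][node] = (indices into the received array, weights)
    for nbr, values in recv.items():
        for node, (idx, w) in interp_weights[nbr].items():
            grid[node] = np.dot(w, values[idx])
```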
-
Dynamical trap model for stimulus – response dynamics of human control
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 79-87. We present a novel model for the dynamical trap of the stimulus – response type that mimics human control over dynamic systems when the bounded capacity of human cognition is a crucial factor. Our focus lies on scenarios where the subject modulates a control variable in response to a certain stimulus. In this context, the bounded capacity of human cognition manifests in the uncertainty of stimulus perception and the subsequent actions of the subject. The model suggests that when the stimulus intensity falls below the (blurred) threshold of stimulus perception, the subject suspends control and maintains the control variable near zero with accuracy determined by the control uncertainty. As the stimulus intensity grows above the perception uncertainty and becomes accessible to human cognition, the subject activates control. Consequently, the system dynamics can be conceptualized as an alternating sequence of passive and active modes of control with probabilistic transitions between them. Moreover, these transitions are expected to display hysteresis due to decision-making inertia.
Generally, the passive and active modes of human control are governed by different mechanisms, posing challenges in developing efficient algorithms for their description and numerical simulation. The proposed model overcomes this problem by introducing the dynamical trap of the stimulus-response type, which has a complex structure. The dynamical trap region includes two subregions: the stagnation region and the hysteresis region. The model is based on the formalism of stochastic differential equations, capturing both probabilistic transitions between control suspension and activation as well as the internal dynamics of these modes within a unified framework. It reproduces the expected properties in control suspension and activation, probabilistic transitions between them, and hysteresis near the perception threshold. Additionally, in a limiting case, the model demonstrates the capability of mimicking a similar subject’s behavior when (1) the active mode represents an open-loop implementation of locally planned actions and (2) the control activation occurs only when the stimulus intensity grows substantially and the risk of the subject losing the control over the system dynamics becomes essential.
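As an illustration of how such a model can be simulated, the sketch below integrates a stimulus-response pair with a dynamical-trap factor by the Euler-Maruyama method. The functional form of the trap factor, the drift terms and the parameter values are assumptions chosen only to demonstrate the alternation of passive and active control modes; they are not the equations of the paper.

```python
# An illustrative Euler-Maruyama simulation of a stimulus-response system with a
# dynamical-trap factor: control is suspended while the stimulus stays below a
# blurred perception threshold and activates once it becomes perceptible.
# Functional forms and parameters are assumptions, not the paper's equations.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 200_000
theta, d_theta, sigma = 1.0, 0.3, 0.5   # threshold, its blur, noise amplitude

def activation(x):
    # smoothed step: close to 0 below the blurred threshold, close to 1 above it
    return 1.0 / (1.0 + np.exp(-(abs(x) - theta) / d_theta))

x, u = 0.0, 0.0          # stimulus-like variable and control variable
trace = np.empty((n_steps, 2))
for i in range(n_steps):
    a = activation(x)
    # the stimulus drifts under the control action and is driven by noise
    x += -u * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # active mode: control tracks the stimulus; passive mode: control decays to zero
    u += (a * (x - u) - (1.0 - a) * u) * dt
    trace[i] = x, u
```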
-
Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963. A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and expedite the receipt of new physically meaningful results due to more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). As a test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen. The problem of evaluating the impact of the abovementioned factors of the computing environment on the computing speed of the test problem is solved by simulation with the simulation program SyMSim developed in LIT.
The simulation of the LDJ calculations in the cloud environment enables users to find, without a series of tests in a real computer environment, the optimal number of CPUs for a certain type of network to run the calculations. This can save significant time and computing resources. The main parameters of the model were obtained from the results of a computational experiment conducted on a special cloud-based testbed. Computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors, but depends significantly on network bandwidth. Comparison of the results obtained empirically with the results of simulation showed that the simulation model correctly reproduces the parallel calculations performed using the MPI technology. Besides, it confirms our recommendation: for fast calculations of this type it is necessary to increase both the number of CPUs and the network throughput at the same time. The simulation results also allow one to derive an empirical analytical formula expressing the dependence of calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies, but requires additional tests to determine the values of the variables.
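To illustrate the kind of empirical run-time formula mentioned above, the sketch below fits a simple time-versus-processors model to made-up measurements. Both the functional form (a serial term, a term inversely proportional to the number of processors, and a communication term growing with it) and the data are illustrative assumptions, not the formula or the measurements of the paper.

```python
# A hypothetical fit of run time versus number of processors, illustrating how an
# empirical formula of the kind mentioned in the abstract could be extracted from
# measurements. The model form and the data below are assumptions, not the
# paper's results.
import numpy as np
from scipy.optimize import curve_fit

def run_time(p, t_serial, t_parallel, t_comm):
    # serial part + perfectly parallel part + communication that grows with p
    return t_serial + t_parallel / p + t_comm * np.log2(p)

procs = np.array([1, 2, 4, 8, 16, 32, 64])
times = np.array([512.0, 262.0, 137.0, 75.0, 44.0, 29.0, 23.0])  # made-up data

params, _ = curve_fit(run_time, procs, times)
print(dict(zip(["t_serial", "t_parallel", "t_comm"], params.round(2))))
```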
-
Cluster method of mathematical modeling of interval-stochastic thermal processes in electronic systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1023-1038. A cluster method of mathematical modeling of interval-stochastic thermal processes in complex electronic systems (ES) is developed. In the cluster method, the construction of a complex ES is represented in the form of a thermal model, which is a system of clusters, each of which contains a core that combines the heat-generating elements falling into a given cluster, the cluster shell, and a medium flow through the cluster. The state of the thermal process in each cluster at every moment of time is characterized by three interval-stochastic state variables, namely, the temperatures of the core, shell, and medium flow. The elements of each cluster, namely, the core, shell, and medium flow, are in thermal interaction with each other and with the elements of neighboring clusters. In contrast to existing methods, the cluster method makes it possible to simulate thermal processes in complex ESs, taking into account the uneven distribution of temperature in the medium flow pumped into the ES, the conjugate nature of heat exchange between the medium flow in the ES and the cores and shells of clusters, and the interval-stochastic nature of thermal processes in the ES, caused by statistical technological variation in the manufacture and installation of electronic elements in the ES and by random fluctuations in the thermal parameters of the environment. The mathematical model describing the state of thermal processes in the cluster thermal model is a system of interval-stochastic matrix-block equations with matrix and vector blocks corresponding to the clusters of the thermal model. The solution of the interval-stochastic equations consists of statistical measures of the state variables of thermal processes in clusters: mathematical expectations, covariances between state variables, and variances. The methodology for applying the cluster method is shown on the example of a real ES.
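The cluster state variables and their statistical measures can be illustrated with a toy single-cluster calculation. The sketch below solves a steady-state heat balance for the core, shell and medium-flow temperatures and estimates their means and covariances by Monte Carlo sampling of uncertain parameters; this only illustrates the quantities involved and is not the interval-stochastic matrix-block method of the paper.

```python
# A toy Monte Carlo analogue of the interval-stochastic cluster model: one cluster
# with core, shell and medium-flow temperatures, uncertain heat release and
# conductances. The equations and parameter values are illustrative assumptions,
# not the matrix-block model of the paper.
import numpy as np

rng = np.random.default_rng(1)

def steady_state(P, g_cs, g_sf, g_fa, T_ambient=300.0):
    # heat balances at steady state:
    #   core:  g_cs*(Tc - Ts) = P
    #   shell: g_cs*(Ts - Tc) + g_sf*(Ts - Tf) = 0
    #   flow:  g_sf*(Tf - Ts) + g_fa*(Tf - T_ambient) = 0
    A = np.array([[ g_cs,        -g_cs,         0.0     ],
                  [-g_cs,   g_cs + g_sf,      -g_sf     ],
                  [  0.0,        -g_sf,   g_sf + g_fa   ]])
    b = np.array([P, 0.0, g_fa * T_ambient])
    return np.linalg.solve(A, b)

samples = np.array([
    steady_state(P=rng.normal(10.0, 1.0),      # heat release, W
                 g_cs=rng.normal(2.0, 0.2),    # core-shell conductance, W/K
                 g_sf=rng.normal(1.5, 0.15),   # shell-flow conductance, W/K
                 g_fa=rng.normal(3.0, 0.3))    # flow-ambient conductance, W/K
    for _ in range(5000)])

print("mean T (core, shell, flow):", samples.mean(axis=0).round(2))
print("covariance matrix:\n", np.cov(samples.T).round(3))
```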
-
System modeling, risks evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359. The article deals with the reliability of operation of a distributed system. The system core is an open integration platform that provides interaction of varied software for modeling gas transportation. Some of the software provides access through thin clients using the cloud technology "software as a service". Mathematical models of operation, transmission and computing are intended to ensure the operation of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements that are present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, or message broker). Together they form interacting Markov random processes. The interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements of other groups.
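For readers unfamiliar with the stationary Chapman–Kolmogorov equations mentioned above, the sketch below solves them for a small continuous-time Markov chain; the three states and the rates are hypothetical and only show the form of the computation, not the system studied in the paper.

```python
# A minimal sketch of solving the stationary Chapman-Kolmogorov (global balance)
# equations pi @ Q = 0, sum(pi) = 1 for a small continuous-time Markov chain.
# The three states (working / waiting / failed) and the rates in Q are
# hypothetical and only illustrate the kind of calculation described above.
import numpy as np

# infinitesimal generator: rows sum to zero, off-diagonal entries are transition rates
Q = np.array([[-0.6,  0.5,  0.1],   # working -> waiting, failed
              [ 2.0, -2.1,  0.1],   # waiting -> working, failed
              [ 0.5,  0.0, -0.5]])  # failed  -> working

# replace one balance equation with the normalization condition sum(pi) = 1
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("stationary probabilities:", pi.round(4))
```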
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of whole-system functioning. In particular, for a thin client the following are calculated: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
-
On a possible approach to a sport game with discrete time simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 271-279. The paper proposes an approach to the simulation of a sport game consisting of a discrete set of separate competitions. In this approach, such a competition is considered as a random process, generally a non-Markovian one. At first we treat the flow of the game as a Markov process, obtaining recursive relationships between the probabilities of achieving certain score states in a tennis match, as well as secondary indicators of the game, such as the expectation and variance of the number of serves needed to finish the game. Then we use a simulation system modeling the match, which allows an arbitrary change of the probabilities of the outcomes of the competitions that compose the match. We allow, for instance, the probabilities to depend on the results of previous competitions. Therefore, this paper deals with a modification of the model previously proposed by the authors for sports games with continuous time.
The proposed approach makes it possible to evaluate not only the probability of the final outcome of the match, but also the probabilities of reaching each of the possible intermediate results, as well as secondary indicators of the game, such as the number of separate competitions it takes to finish the match. The paper includes a detailed description of the construction of a simulation system for a game of a tennis match. Then we consider simulating a set and the whole tennis match by analogy. We establish several statements concerning the fairness of the tennis serving rules, understood as the independence of the outcome of a competition from the right to serve first. We perform a simulation of a cancelled ATP series match, obtaining its most probable intermediate and final outcomes for three different possible variants of the course of the match.
The main result of this paper is the developed method for simulating a match, applicable not only to tennis but also to other types of sports games with discrete time.
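A minimal sketch in the spirit of this approach is given below: a Monte Carlo estimate of the probability that the server wins a single tennis game and of the mean number of points played, with a fixed point-win probability. In the paper the probabilities may depend on previous outcomes; holding them constant here is purely an illustrative simplification.

```python
# Monte Carlo estimate of the server's probability of winning one tennis game and
# of the expected game length, for a constant probability p of winning each point.
# This is only an illustration of the simulation idea, not the authors' system.
import random

def play_game(p, rng):
    server, receiver = 0, 0
    while True:
        if rng.random() < p:
            server += 1
        else:
            receiver += 1
        # a game ends when one side has at least 4 points and leads by 2 (deuce rule)
        if max(server, receiver) >= 4 and abs(server - receiver) >= 2:
            return server > receiver, server + receiver

def estimate(p, n=100_000, seed=0):
    rng = random.Random(seed)
    wins = points = 0
    for _ in range(n):
        won, played = play_game(p, rng)
        wins += won
        points += played
    return wins / n, points / n

print(estimate(0.6))  # estimated win probability and mean number of points for p = 0.6
```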
-
On some properties of short-wave statistics of FOREX time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669. Financial mathematics is one of the most natural areas of application for the statistical analysis of time series. Financial time series reflect the simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.
In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the time series behavior depending on the way time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009 and British pound – dollar for 2007) radically differs depending on the choice of the method of time measurement. When measuring time in ticks, the distribution of price increments can be well described by the normal distribution already on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, the distribution of increments continues to differ radically from the normal up to scales of the order of minutes and even hours.
To explain this phenomenon, we investigate the statistical properties of elementary increments in price and time. In particular, we show that the distribution of time between ticks for all three time series has a long (1-2 orders of magnitude) power-law tail with an exponential cutoff at large times. We obtained approximate expressions for the distributions of waiting times for all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of the waiting times that plays the most important role in the deviation of the distribution of increments from the normal one. As a result, we discuss the possibility of applying a continuous time random walk (CTRW) model to describe the FOREX time series.
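The role of the waiting-time distribution can be illustrated with a toy continuous-time random walk. In the sketch below waiting times have a power-law survival function with an exponential cutoff, and price increments are compared in tick time and in physical time; the exponent, the cutoff scale and the unit-jump model are assumptions, not the values fitted in the paper.

```python
# A sketch of a CTRW with unit price jumps separated by waiting times whose
# survival function is a power law with an exponential cutoff. All parameters
# are illustrative assumptions, not the fitted values from the paper.
import numpy as np

rng = np.random.default_rng(2)
n_ticks, alpha, t_cut = 200_000, 1.5, 100.0

# survival function of min(Pareto, Exp) is t**(-alpha) * exp(-t / t_cut):
# a power-law tail tempered by an exponential cutoff at large times
waits = np.minimum(rng.pareto(alpha, n_ticks) + 1.0,
                   rng.exponential(t_cut, n_ticks))
jumps = rng.choice([-1.0, 1.0], n_ticks)   # elementary price changes (ticks)

times, price = np.cumsum(waits), np.cumsum(jumps)

def kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2   # equals 3 for a normal distribution

# increments measured in trading time (every 10 ticks) ...
incr_ticks = price[10::10] - price[:-10:10]
# ... versus increments over regular physical-time windows of comparable length
grid = np.arange(0.0, times[-1], 10.0 * waits.mean())
last_tick = np.clip(np.searchsorted(times, grid, side="right") - 1, 0, None)
incr_time = np.diff(price[last_tick])

print("kurtosis in tick time:    ", round(kurtosis(incr_ticks), 2))
print("kurtosis in physical time:", round(kurtosis(incr_time), 2))
```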
-
Deriving specifications of dependable systems
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1637-1650. Although human skills are heavily involved in the Requirements Engineering process, in particular in requirements elicitation, analysis and specification, methodology and formalism still play a determining role in providing clarity and enabling analysis. In this paper, we propose a method for deriving formal specifications which is applicable to dependable software systems. First, we clarify what a method itself is. Computer science has a proliferation of languages and methods, but the difference between the two is not always clear. This is a conceptual contribution. Furthermore, we propose the idea of Layered Fault Tolerant Specification (LFTS). The principle consists in layering specifications in (at least) two different layers: one for normal behaviors and others (if more than one) for abnormal behaviors. Abnormal behaviors are described in terms of an Error Injector (EI), which represents a model of the expected erroneous interference coming from the environment. This structure has been inspired by the notion of an idealized fault tolerant component, but the combination of LFTS and EI, using rely-guarantee thinking to describe interference, is our second contribution. The overall result is the definition of a method for the specification of systems that do not run in isolation but in the real, physical world. We propose an approach that is pragmatic for its target audience: techniques must scale and be usable by non-experts if they are to make it into an industrial setting. This article makes tentative steps, but recent trends in Software Engineering such as microservices, smart and software-defined buildings, M2M micropayments and DevOps are relevant fields in which to continue the investigation concerning dependability and rely-guarantee thinking.
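The layered structure can be sketched informally as follows: a normal layer and an abnormal layer, the latter guarded by an error-injector predicate, with rely and guarantee conditions attached to each layer. All names and predicates in this sketch are hypothetical illustrations of one reading of the idea, not the formalism of the paper.

```python
# A toy rendering of the Layered Fault Tolerant Specification idea: a
# normal-behavior layer and an abnormal-behavior layer, the latter triggered by
# an error-injector predicate modeling erroneous interference from the
# environment. Everything here is a hypothetical illustration.
from dataclasses import dataclass
from typing import Callable

State = dict          # e.g. {"sensor_ok": True, "level": 0.4, "valve": "open"}

@dataclass
class Layer:
    rely: Callable[[State, State], bool]       # assumed environment behavior
    guarantee: Callable[[State, State], bool]  # promised component behavior

def error_injector(before: State, after: State) -> bool:
    # models expected erroneous interference: here, a sensor dropping out
    return before["sensor_ok"] and not after["sensor_ok"]

normal = Layer(
    rely=lambda b, a: a["sensor_ok"],
    guarantee=lambda b, a: 0.0 <= a["level"] <= 1.0,
)
abnormal = Layer(
    rely=error_injector,
    guarantee=lambda b, a: a["valve"] == "closed",   # fail-safe reaction
)

def step_ok(before: State, after: State) -> bool:
    # a step is acceptable if it satisfies the layer whose rely condition holds
    layer = normal if normal.rely(before, after) else abnormal
    return layer.guarantee(before, after)
```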
Keywords: formal methods, dependability.
-
Research on the achievability of a goal in a medical quest
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1149-1179. The work presents an experimental study of the tree structure that arises during a medical examination. At each appointment with a medical specialist, the patient receives a certain number of referrals for consultations with other specialists or for tests. A tree of referrals arises, each branch of which the patient has to traverse. Depending on the branching of the tree, it can be either finite, in which case the examination can be completed, or infinite, in which case the patient's goal cannot be achieved. The work studies, both experimentally and theoretically, the critical properties of the transition of the system from a forest of finite trees to a forest of infinite ones, depending on the probabilistic characteristics of the tree.
For the description, a model is proposed in which the discrete probability distribution of the number of branches at a node follows a continuous Gaussian distribution. The parameters of the Gaussian distribution (the mathematical expectation $x_0$ and the standard deviation $\sigma$) are the model parameters. In this setting, the problem belongs to the class of branching random processes (BRP) in the inhomogeneous Galton – Watson model.
The experimental study is carried out by numerical modeling on finite lattices. A phase diagram is built, and the boundaries of the regions of the different phases are determined. A comparison is made with the phase diagram obtained from theoretical criteria for macrosystems, and an adequate correspondence is established. It is shown that on finite lattices the transition is smeared.
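A minimal numerical experiment of this kind can be sketched as follows: offspring numbers are drawn from a discretized Gaussian with parameters $x_0$ and $\sigma$, trees are grown up to a depth cap, and the inverse average tree height (used further in the text as an order parameter) is computed. The discretization, the caps and the parameter values are assumptions for illustration, not the exact procedure of the paper.

```python
# A sketch of a Galton-Watson branching process whose offspring number follows a
# discretized Gaussian with parameters x0 and sigma. Discretization, depth and
# width caps, and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def offspring_pmf(x0, sigma, k_max=10):
    k = np.arange(k_max + 1)
    p = np.exp(-((k - x0) ** 2) / (2.0 * sigma ** 2))
    return k, p / p.sum()

def tree_height(x0, sigma, max_depth=200):
    k, p = offspring_pmf(x0, sigma)
    width, depth = 1, 0
    while width > 0 and depth < max_depth:
        if width > 100_000:           # clearly supercritical: treat as infinite
            return max_depth
        width = rng.choice(k, size=width, p=p).sum()   # next generation size
        depth += 1
    return depth                      # max_depth stands in for "infinite"

def order_parameter(x0, sigma, trials=200):
    heights = [tree_height(x0, sigma) for _ in range(trials)]
    return 1.0 / np.mean(heights)     # inverse average tree height

for x0 in (0.5, 0.8, 1.0, 1.2):
    print(f"x0 = {x0:.1f}: eta ~ {order_parameter(x0, 0.3):.3f}")
```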
The description of the smeared phase transition is carried out using two approaches. In the first, standard approach, the transition is described using the so-called inclusion function, which has the meaning of the fraction of one of the phases in the whole set. It is established that this approach is ineffective for this system, since the found position of the conditional boundary of the smeared transition is determined only by the size of the chosen experimental lattice and carries no objective meaning.
A second, original approach is proposed, based on introducing an order parameter equal to the reciprocal of the average tree height and analyzing its behavior. It is established that the dynamics of this order parameter in the $\sigma = \text{const}$ section follows, up to very small differences, the Fermi – Dirac distribution ($\sigma$ plays the same role as the temperature in the Fermi – Dirac distribution, and $x_0$ that of the energy). An empirical expression for the order parameter is found, and an analogue of the chemical potential is introduced and calculated; it has the meaning of the characteristic scale of the order parameter, that is, the value of $x_0$ at which order can be considered to give way to disorder. This criterion is the basis for determining the boundary of the conditional transition in this approach. It is established that this boundary corresponds to an average tree height equal to two generations. Based on the found properties, recommendations are proposed for medical institutions to ensure that patients' examination paths remain finite.
The model discussed and its description in terms of conditionally infinite trees have applications to many hierarchical systems. Such systems include internet routing networks, bureaucratic networks, trade and logistics networks, citation networks, game strategies, population dynamics problems, and others.
-
Analysis of human respiratory reactions under changed gas environment conditions using a mathematical model
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 281-296. The aim of the work was to study and develop methods of forecasting the dynamics of human respiratory reactions based on mathematical modeling. To achieve this goal, the following tasks were set and solved: developing and justifying the overall structure and the formalized description of the model of the respiratory reflex system; building and implementing in software the algorithm of the body gas exchange model; carrying out computational experiments and checking the adequacy of the model on the basis of literature data and our own experimental studies.
In this version of the new comprehensive model, a modified variant of the partial model of the physicochemical properties and acid-base balance of blood was included. In developing the model, the formalized description was based on the concept of separating the physiological regulation system into active and passive regulation subsystems. Development of the model was carried out in stages. The integrated gas exchange model consisted of the following partial models: the basic biophysical model of the gas exchange system; the model of the physicochemical properties and acid-base balance of blood; the model of passive mechanisms of gas exchange, developed on the basis of the mass balance equations of F. Grodins; the chemical regulation model, developed on the basis of the multifactor model of D. Gray.
For the software implementation of the model, calculations were made in the MatLab programming environment. The equations were solved by the Runge–Kutta–Fehlberg method. It is assumed that the model will be presented in the form of a computer research program, which makes it possible to implement various hypotheses about the mechanism of the observed processes. The expected values of the basic indicators of gas exchange under hypercapnia and hypoxia were calculated. The results of the calculations agree well, both qualitatively and quantitatively, with the data obtained in studies on test subjects. The adequacy check confirmed that the calculation error is within the error of medical-biological experiments. The model can be used for theoretical prediction of the dynamics of the respiratory reactions of the human body in a changed gas environment.
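As a rough analogue of the calculations described (the paper itself used MatLab and the Runge–Kutta–Fehlberg scheme), the sketch below integrates a toy alveolar CO2 mass balance with a simple chemoreflex ventilation law using an adaptive Runge-Kutta solver from SciPy. All equations and constants here are illustrative assumptions, not the model of the paper.

```python
# A toy alveolar CO2 mass-balance model with a simple chemoreflex ventilation
# controller, integrated with an adaptive Runge-Kutta method from SciPy.
# All equations and constants are rough illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

V_lung = 3.0        # effective lung gas store, L
vco2 = 0.25         # metabolic CO2 production, L/min (treated loosely)
pb = 713.0          # barometric pressure minus water vapour, mmHg
f_insp = 0.0        # inspired CO2 fraction (raise it to mimic a changed gas environment)

def ventilation(p_aco2):
    # toy chemoreflex: ventilation rises linearly above an apneic threshold
    return max(1.0, 2.0 * (p_aco2 - 37.0))   # L/min

def d_faco2(t, y):
    f_aco2 = y[0]
    ve = ventilation(f_aco2 * pb)
    # mass balance: production plus inspired inflow minus expired outflow
    return [(vco2 + ve * (f_insp - f_aco2)) / V_lung]

sol = solve_ivp(d_faco2, (0.0, 30.0), [0.056], method="RK45", max_step=0.1)
print("final alveolar PCO2, mmHg:", round(sol.y[0, -1] * pb, 1))
```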