All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
Vacuum polarization of scalar field on Lie groups with bi-invariant metric
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 989-999
We consider vacuum polarization of a scalar field on Lie groups with a bi-invariant metric of Robertson-Walker type. Using the method of orbits, we find an expression for the vacuum expectation values of the energy-momentum tensor of the scalar field, which are determined by the representation character of the group. It is shown that Einstein's equations with this energy-momentum tensor are consistent. As an example, we consider the isotropic Bianchi type IX model.
Keywords: vacuum polarization, orbit method.
Simulation of flight and destruction of the Benešov bolide
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 605-618
Comets and asteroids are recognized by scientists and governments around the world as one of the most significant threats to the development and even the existence of our civilization. Preventing this threat includes studying the motion of large meteoroids through the atmosphere, which is accompanied by various physical and chemical phenomena. Of particular interest for such studies are meteors whose trajectories have been recorded and whose fragments have been found on Earth. Here we study one such case. We develop a model for the motion and destruction of natural bodies in the Earth's atmosphere, focusing on the Benešov bolide (EN070591), a bright meteor registered in 1991 in the Czech Republic by the European Observation System. Unique data, including radiation spectra, are available for this bolide. We simulate the aeroballistics of the Benešov meteoroid and of its fragments, taking into account destruction due to thermal and mechanical processes. We compute the velocity of the meteoroid and its mass ablation using the equations of the classical theory of meteor motion, taking into account the variability of the ablation along the trajectory. Fragmentation of the meteoroid is treated using the model of sequential splitting and the statistical theory of strength, which accounts for the dependence of mechanical strength on length scale. We compute air flows around a system of bodies (fragments of the meteoroid) in the regime where their mutual interaction is essential. To that end, we develop a method of simulating air flows based on a set of grids, which allows us to consider fragments of various shapes, sizes, and masses, as well as arbitrary positions of the fragments relative to each other. Due to inaccuracies in early simulations of the motion of this bolide, its fragments could not be located for about 23 years. Later, more accurate simulations allowed researchers to locate four of its fragments rather far from the location expected earlier. Our simulations of the motion and destruction of the Benešov bolide show that its interaction with the atmosphere is affected by multiple factors, such as the mass and mechanical strength of the bolide, the parameters of its motion, the mechanisms of destruction, and the interaction between its fragments.
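The abstract refers to the equations of the classical (single-body) theory of meteor motion. For reference, a common form of the drag and ablation equations is given below; the notation (drag coefficient C_D, heat-transfer coefficient C_H, effective heat of ablation Q*, midsection area S, air density rho_a, flight-path angle gamma) is conventional and is not taken from the paper itself.

$$
m\,\frac{dv}{dt} = -\tfrac{1}{2}\,C_D\,\rho_a\,S\,v^{2} + m\,g\sin\gamma,
\qquad
\frac{dm}{dt} = -\frac{C_H\,\rho_a\,S\,v^{3}}{2\,Q^{*}} .
$$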
Ellipsoid method for convex stochastic optimization in small dimension
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147
The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence, so its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches whose size is proportional to the inverse square of the desired accuracy. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can require a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensions.
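As an illustration of the general scheme, here is a minimal sketch of the central-cut ellipsoid method driven by mini-batch gradient estimates. It is not the authors' exact algorithm: the function name `stochastic_ellipsoid`, the fixed batch size, and the stopping rule are assumptions made for the example.

```python
import numpy as np

def stochastic_ellipsoid(grad_batch, x0, R, n_iters=200, batch_size=64, rng=None):
    """Central-cut ellipsoid method with mini-batched (sub)gradient estimates.

    grad_batch(x, batch_size, rng) should return a mini-batch estimate of a
    (sub)gradient of the objective E[f(x, xi)] at x; the initial ellipsoid is
    the Euclidean ball of radius R around x0 (dimension n >= 2 assumed).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x0.size
    x = x0.astype(float).copy()
    H = (R ** 2) * np.eye(n)            # ellipsoid {z : (z - x)^T H^{-1} (z - x) <= 1}
    centers = [x.copy()]
    for _ in range(n_iters):
        g = grad_batch(x, batch_size, rng)
        gHg = float(g @ H @ g)
        if gHg <= 1e-18:                # estimated gradient is (numerically) zero
            break
        gt = g / np.sqrt(gHg)           # normalize the cutting direction
        Hg = H @ gt
        x = x - Hg / (n + 1)            # shift the center into the kept half-ellipsoid
        H = (n ** 2 / (n ** 2 - 1.0)) * (H - (2.0 / (n + 1)) * np.outer(Hg, Hg))
        centers.append(x.copy())
    return centers                      # in practice, keep the center with the best estimated value
```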
Origin and growth of the disorder within an ordered state of the spatially extended chemical reaction model
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 595-607
We review the main points of the mean-field approximation (MFA) as applied to multicomponent stochastic reaction-diffusion systems.
We present the chemical reaction model under study, the Brusselator. We write the kinetic equations of the reaction, supplementing them with terms that describe the diffusion of the intermediate components and fluctuations of the concentrations of the initial products. We model the fluctuations as random Gaussian homogeneous and spatially isotropic fields with zero means and spatial correlation functions with a non-trivial structure. The model parameter values correspond to a spatially inhomogeneous ordered state in the deterministic case.
Within the MFA we derive a single-site two-dimensional nonlinear self-consistent Fokker-Planck equation in the Stratonovich interpretation for the spatially extended stochastic Brusselator, which describes the dynamics of the probability density of the component concentrations of the system under consideration. We find the noise intensity values corresponding to two types of solutions of the Fokker-Planck equation: a solution with transient bimodality and a solution with multiple alternation of unimodal and bimodal forms of the probability density. We study numerically the dynamics of the probability density and the time behavior of the variances, expectations, and most probable values of the component concentrations for various values of the noise intensity and of the bifurcation parameter in the specified region of the problem parameters.
Beginning from a certain value of the external noise intensity, disorder originates inside the ordered phase and exists for a finite time; the higher the noise level, the longer this disorder "embryo" lives. The farther from the bifurcation point, the lower the noise intensity that generates it and the narrower the range of noise intensities at which the system evolves to an ordered, but already new, statistically steady state. At a second, higher noise intensity, intermittency of the ordered and disordered phases sets in, and a further increase of the noise intensity makes order and disorder alternate ever more frequently.
Thus, the scenario of the noise-induced order-disorder transition in the system under study consists in the intermittency of the ordered and disordered phases.
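For reference, the standard Brusselator kinetics with diffusion of the intermediate components X and Y reads as below; the way the fluctuating terms xi_A, xi_B enter through the concentrations of the initial products A and B is shown schematically and may differ in detail from the paper.

$$
\partial_t X = \bigl(A+\xi_A\bigr) - \bigl(B+\xi_B+1\bigr)X + X^{2}Y + D_X\nabla^{2}X,
\qquad
\partial_t Y = \bigl(B+\xi_B\bigr)X - X^{2}Y + D_Y\nabla^{2}Y .
$$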
A Monte-Carlo study of the inner tracking system main characteristics for the multipurpose particle detector MPD
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 87-94
At present, the accelerator complex NICA is being built at JINR (Dubna). It is intended for experiments studying interactions of relativistic nuclei and polarized particles (protons and deuterons). One of the experimental facilities, MPD (MultiPurpose Detector), was designed to investigate nucleus-nucleus, proton-nucleus, and proton-proton interactions. The existing plans for a future MPD upgrade consider the possibility of installing an inner tracker made of new-generation silicon pixel sensors. It is expected that such a detector will considerably enhance the research capabilities of the experiment both for nucleus-nucleus interactions (due to high spatial resolution near the collision region) and for proton-proton ones (due to a fast detector response).
This paper presents the main characteristics of such a tracker, obtained using a Monte Carlo simulation of the detector for proton-proton collisions. In particular, we evaluate the ability of the detector to reconstruct decay vertices of short-lived particles and to select rare events with such decays from much more frequent "common" interactions. We also address the problem of separating multiple collisions during high-luminosity accelerator running and the task of triggering the detector on rare events. The results obtained can be used to justify the necessity of building such a detector and to develop a high-level trigger system, possibly based on machine learning techniques.
Mathematical modeling of the interval stochastic thermal processes in technical systems at the interval indeterminacy of the determinative parameters
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 501-520
Current mathematical and computer modeling of thermal processes in technical systems is based on the assumption that all the parameters determining the thermal processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that these parameters are of an undefined, interval-stochastic character, which in turn is responsible for the interval-stochastic nature of the thermal processes in the system. This means that the actual temperature of each element in a technical system is randomly distributed within its variation interval. Therefore, the deterministic approach to modeling thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the determining parameters stems from three groups of factors: (a) statistical technological variation of the parameters of the elements during manufacturing and assembly of the system; (b) the random nature of the factors caused by the functioning of the technical system (fluctuations in current and voltage; power, temperatures, and flow rates of the cooling fluid and of the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determining factors in technical systems is irremediable; neglecting it causes errors when designing electronic systems. This paper develops a method for modeling unsteady interval-stochastic thermal processes in technical systems, including the case of interval indeterminacy of the determining parameters. The method is based on obtaining and then solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system, given the variation intervals and the statistical measures of the determining parameters. Application of the method to modeling the interval-stochastic thermal process in a particular electronic system is considered.
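To make the idea of working directly with statistical measures concrete, here is a deliberately simple toy example (not the paper's model): a single lumped thermal node whose temperature depends linearly on two independent random parameters, so the expectation and variance of the temperature follow directly from those of the parameters and can be checked against a Monte Carlo estimate.

```python
import numpy as np

# Toy illustration (not the paper's model): one lumped thermal node with
# T = T_amb + R_th * P, where ambient temperature T_amb and dissipated power P
# are independent random parameters with given means and variances.
rng = np.random.default_rng(0)

R_th = 2.5                          # K/W, thermal resistance (deterministic here)
mu_Tamb, var_Tamb = 25.0, 4.0       # mean and variance of ambient temperature
mu_P, var_P = 10.0, 1.0             # mean and variance of dissipated power

# Statistical measures of the temperature obtained directly
mean_T = mu_Tamb + R_th * mu_P              # 50.0
var_T = var_Tamb + R_th ** 2 * var_P        # 10.25

# Monte-Carlo check of the same measures
T = rng.normal(mu_Tamb, np.sqrt(var_Tamb), 100_000) \
    + R_th * rng.normal(mu_P, np.sqrt(var_P), 100_000)
print(mean_T, var_T, T.mean(), T.var())
```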
A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273
Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. The experiment yields an intensity curve, and the experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) metric gave a value of 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm, and for a solution containing particles of both sizes. The results of the neural network are compared with those of classical linear methods. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too strong a regularization smooths out the particle size distribution curves, while too weak a regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error decreases quickly as the particle size increases.
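The layer structure quoted in the abstract can be written down directly; the sketch below assumes a particular deep-learning framework (Keras), an input length of 128 spectral points, a dropout rate of 0.2, and ReLU on the 15-neuron layer, none of which are specified in the abstract.

```python
import tensorflow as tf

n_spectral_points = 128   # assumed length of the measured intensity spectrum

# Fully connected architecture as described in the abstract:
# 60 (ReLU) -> 45 (ReLU) -> dropout -> 15 -> 1 (network output)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_spectral_points,)),
    tf.keras.layers.Dense(60, activation="relu"),
    tf.keras.layers.Dense(45, activation="relu"),
    tf.keras.layers.Dropout(0.2),                 # dropout rate is an assumption
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dense(1),                     # predicted quantity (network output)
])
model.compile(optimizer="adam", loss="mse")       # RMSE was the metric reported on synthetic data
```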
Parallel implementation of the grid-characteristic method in the case of explicit contact boundaries
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 667-678
We consider an application of the Message Passing Interface (MPI) technology for parallelization of a program code that solves the equations of linear elasticity theory. The solution of these equations describes the propagation of elastic waves in deformable rigid bodies, and solving such direct problems of seismic wave propagation is of interest in seismic exploration and geophysics. Our solver uses the grid-characteristic method for the simulations. We consider a technique to reduce the communication time between MPI processes during the simulation. This is important when modeling has to be carried out in complex problem formulations while maintaining high parallel efficiency, even when thousands of processes are used. Efficient communication is especially important when several computational grids with arbitrary geometry of contacts between them are used in the calculation, and the complexity of this task increases if an independent distribution of the grid nodes between processes is allowed. In this paper, a generalized approach is developed for processing contact conditions in terms of reinterpolation of nodes from a given section of one grid to a certain area of the second grid, and an efficient way of parallelization and establishing interprocess communication is proposed. For the example problems, we provide wave fields and seismograms for both 2D and 3D formulations. It is shown that the algorithm can be used both on Cartesian and on structured (curvilinear) computational grids. The considered statements demonstrate the possibility of carrying out calculations that take into account surface topography and the curvilinear geometry of contacts between geological layers. The use of curvilinear grids allows more accurate results to be obtained than calculations on Cartesian grids alone. The resulting parallelization efficiency is almost 100% up to 4096 processes (we used 128 processes as the baseline for computing efficiency). With more than 4096 processes, an expected gradual decrease in efficiency is observed; the rate of decline is small, so at 16384 processes the parallelization efficiency remains at 80%.
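The quoted efficiency figures are relative to a 128-process baseline; assuming the standard definition of relative parallel efficiency (the abstract does not spell it out), they correspond to

$$
E(p) = \frac{128\,T_{128}}{p\,T_{p}},
$$

where $T_p$ is the wall-clock time of the same simulation on $p$ processes, so $E(16384)\approx 0.8$ means that a 16384-process run is roughly $0.8 \times 16384/128 \approx 102$ times faster than the baseline run.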
Dynamical trap model for stimulus-response dynamics of human control
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 79-87
We present a novel model of the dynamical trap of the stimulus-response type that mimics human control over dynamic systems when the bounded capacity of human cognition is a crucial factor. Our focus lies on scenarios where the subject modulates a control variable in response to a certain stimulus. In this context, the bounded capacity of human cognition manifests itself in the uncertainty of stimulus perception and of the subsequent actions of the subject. The model suggests that when the stimulus intensity falls below the (blurred) threshold of stimulus perception, the subject suspends the control and maintains the control variable near zero with an accuracy determined by the control uncertainty. As the stimulus intensity grows above the perception uncertainty and becomes accessible to human cognition, the subject activates control. Consequently, the system dynamics can be conceptualized as an alternating sequence of passive and active modes of control with probabilistic transitions between them. Moreover, these transitions are expected to display hysteresis due to decision-making inertia.
Generally, the passive and active modes of human control are governed by different mechanisms, posing challenges in developing efficient algorithms for their description and numerical simulation. The proposed model overcomes this problem by introducing the dynamical trap of the stimulus-response type, which has a complex structure. The dynamical trap region includes two subregions: the stagnation region and the hysteresis region. The model is based on the formalism of stochastic differential equations, capturing both probabilistic transitions between control suspension and activation as well as the internal dynamics of these modes within a unified framework. It reproduces the expected properties in control suspension and activation, probabilistic transitions between them, and hysteresis near the perception threshold. Additionally, in a limiting case, the model demonstrates the capability of mimicking a similar subject’s behavior when (1) the active mode represents an open-loop implementation of locally planned actions and (2) the control activation occurs only when the stimulus intensity grows substantially and the risk of the subject losing the control over the system dynamics becomes essential.
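To indicate how such a model can be explored numerically, the following Euler-Maruyama sketch simulates a control variable whose relaxation toward the stimulus is suppressed by a trap factor near a blurred perception threshold. All functional forms and parameter values here are illustrative assumptions, not the stochastic differential equations of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

dt, n_steps = 1e-3, 100_000
theta, delta = 0.3, 0.1        # assumed perception threshold and its blurring
eps, sigma = 0.05, 0.1         # residual activity in the passive mode, noise intensity

def trap_factor(x):
    """Close to eps while |x| is below the threshold (control suspended),
    close to 1 once the stimulus becomes clearly perceptible (control active)."""
    return eps + (1.0 - eps) * 0.5 * (1.0 + np.tanh((abs(x) - theta) / delta))

u = 0.0                                       # control variable
u_trace = np.empty(n_steps)
for k in range(n_steps):
    x = np.sin(0.2 * k * dt)                  # externally prescribed stimulus (toy choice)
    drift = -trap_factor(x) * (u - x)         # active mode: control tracks the stimulus
    u += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    u_trace[k] = u
```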
Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963
A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and to expedite the receipt of new physically meaningful results due to a more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment, it is necessary to test this scheme for various combinations of equipment parameters (processor speed, number of processors, throughput of the communication network, etc.). As the test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen. The impact of the above-mentioned hardware factors on the computation speed of the test problem is evaluated by simulation with the SyMSim program developed at LIT.
Simulation of the LDJ calculations in the cloud environment enables users to find, without a series of tests, the optimal number of CPUs for a given network type before running the calculations in a real computing environment, which can save significant computation time and computing resources. The main parameters of the model were obtained from a computational experiment conducted on a special cloud-based testbed. The computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on network bandwidth. Comparison of the empirical results with the simulation results showed that the simulation model correctly reproduces parallel calculations performed using the MPI technology. It also confirms our recommendation: for fast calculations of this type, one needs to increase both the number of CPUs and the network throughput at the same time. The simulation results also allow us to derive an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its variables.
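The abstract does not quote the empirical formula itself; a common functional form for such a fit, with a serial term, a perfectly parallel term, and a communication-overhead term, is T(p) ≈ a + b/p + c·p. The sketch below fits this assumed form with SciPy to synthetic timings; it is only an illustration of the procedure, not the paper's result.

```python
import numpy as np
from scipy.optimize import curve_fit

def t_model(p, a, b, c):
    # assumed form: serial part + perfectly parallel part + communication overhead
    return a + b / p + c * p

# Synthetic "measurements" generated from the model itself plus 2% noise
rng = np.random.default_rng(0)
p_obs = np.array([1, 2, 4, 8, 16, 32], dtype=float)
t_obs = t_model(p_obs, 5.0, 100.0, 0.2) * (1 + 0.02 * rng.standard_normal(p_obs.size))

(a, b, c), _ = curve_fit(t_model, p_obs, t_obs)
print(f"T(p) ~ {a:.2f} + {b:.2f}/p + {c:.3f}*p")
```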
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index