- Development of a methodological approach and numerical simulation of thermal-hydraulic processes in the intermediate heat exchanger of a BN reactor
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 877-894
The paper presents the results of three-dimensional numerical simulation of thermal-hydraulic processes in the intermediate heat exchanger of an advanced sodium-cooled fast-neutron (BN) reactor, based on a specially developed methodological approach.
The intermediate heat exchanger (IHX) is located in the reactor vessel and is intended to transfer heat from the primary sodium circulating on the shell side to the secondary sodium circulating on the tube side. With the integral layout of the primary equipment in a BN reactor, temperature stratification of the coolant occurs upstream of the IHX inlet windows because flows of different temperatures are incompletely mixed at the core outlet. Inside the IHX, in the region of the inlet and outlet windows, a complex combination of longitudinal and transverse coolant flow also takes place, resulting in an uneven distribution of the coolant flow rate on the tube side and, as a consequence, in uneven temperature distribution and heat-transfer efficiency along the height and radius of the tube bundle.
To confirm the design thermal-hydraulic parameters of the IHX of the advanced BN reactor, a methodological approach to three-dimensional numerical simulation of a heat exchanger located in the reactor vessel was developed. The approach takes into account the three-dimensional sodium flow pattern at the IHX inlet and inside the IHX, and it justifies recommendations for simplifying the geometry of the IHX computational model.
Numerical simulation of thermal-hydraulic processes in the IHX of the advanced BN reactor was carried out using the FlowVision software package with the standard $k-\varepsilon$ turbulence model and the LMS turbulent heat transfer model.
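For reference, the standard $k-\varepsilon$ model named above solves two transport equations, for the turbulent kinetic energy $k$ and its dissipation rate $\varepsilon$; the schematic form below uses the usual model constants (FlowVision's exact implementation details may differ):
$$
\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho\mathbf{u}k) = \nabla\cdot\Big[\Big(\mu + \frac{\mu_t}{\sigma_k}\Big)\nabla k\Big] + P_k - \rho\varepsilon,
$$
$$
\frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\cdot(\rho\mathbf{u}\varepsilon) = \nabla\cdot\Big[\Big(\mu + \frac{\mu_t}{\sigma_\varepsilon}\Big)\nabla\varepsilon\Big] + \frac{\varepsilon}{k}\big(C_{\varepsilon1}P_k - C_{\varepsilon2}\rho\varepsilon\big), \qquad \mu_t = C_\mu\rho\frac{k^2}{\varepsilon},
$$
where $P_k$ is the turbulence production term and the standard constants are $C_\mu = 0.09$, $C_{\varepsilon1} = 1.44$, $C_{\varepsilon2} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$.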
To increase the representativeness of the numerical simulation of the IHX tube bundle, verification calculations of single-tube and multi-tube sodium-sodium heat exchangers were performed with geometric characteristics corresponding to the IHX design.
To determine the inlet boundary conditions for the IHX model, an additional three-dimensional calculation was performed, taking into account the uneven flow pattern in the upper mixing chamber of the reactor.
The IHX computational model was optimized by simplifying the spacer belts and adopting a sector model.
As a result of the numerical simulation of the IHX, distributions of the primary sodium velocity and of the primary and secondary sodium temperatures were obtained. Satisfactory agreement of the calculated integral parameters with the design data confirmed the adopted design thermal-hydraulic characteristics of the IHX of the advanced BN reactor.
- Mathematical model and heuristic methods for organizing distributed computations in Internet of Things systems
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 851-870
Distributed computing, in which computational tasks are solved collectively by resource-constrained devices, is currently an actively developing area. In practice this scenario arises when processing data in Internet of Things (IoT) systems: data is processed on edge computing devices in order to reduce system latency and the load on the network infrastructure. The rapid growth and widespread adoption of IoT systems, however, call for methods that reduce the resource intensity of computations. The resource constraints of the devices raise two issues in distributing computational resources: first, the need to account for the cost of data transit between devices solving different tasks; second, the need to account for the resource cost of the distribution process itself, which is particularly relevant for groups of autonomous devices such as drones or robots. An analysis of openly available recent publications showed no models or methods of computational resource distribution that take all of these factors into account simultaneously, which makes a new mathematical model of organizing distributed computing in IoT systems, together with methods for solving it, a topical problem. This article proposes such a model along with heuristic optimization methods, providing an integrated approach to implementing distributed computing in IoT systems. A scenario is considered in which a leader device within a group makes decisions on allocating computational resources, including its own, for solving distributed tasks that involve information exchange. It is also assumed that there is no prior knowledge of which device will assume the role of leader or of the migration paths of computational tasks across devices. Experimental results showed the effectiveness of the proposed models and heuristics: up to a 52% reduction in the resource cost of solving computational problems when data-transit costs are taken into account; up to 73% resource savings from supplementary criteria that optimize task distribution by minimizing fragment migrations and distances; and up to a 28-fold decrease in the resource cost of solving the distribution problem itself, at the price of a loss in distribution quality of up to 10%.
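To illustrate the kind of allocation heuristic the abstract describes (the names, cost structure, and greedy strategy below are our illustrative assumptions, not the authors' algorithm), a leader device could assign task fragments to devices by greedily minimizing the sum of processing cost and data-transit cost:

```python
# Hypothetical sketch: greedy fragment allocation by a leader device.
# Assumed cost model: placing fragment f on device d costs
# proc_cost[d] + transit_cost[current holder of f][d].

def greedy_allocate(fragments, devices, proc_cost, transit_cost):
    """fragments: list of (fragment_id, holder_device);
    devices: dict device_id -> remaining capacity (in fragments)."""
    plan = {}
    for frag_id, holder in fragments:
        best_dev, best_cost = None, float("inf")
        for dev, capacity in devices.items():
            if capacity <= 0:
                continue  # device has no spare resources
            cost = proc_cost[dev] + transit_cost[holder][dev]
            if cost < best_cost:
                best_dev, best_cost = dev, cost
        if best_dev is None:
            raise RuntimeError("no device can accept fragment %s" % frag_id)
        plan[frag_id] = best_dev
        devices[best_dev] -= 1  # one capacity unit per fragment (simplification)
    return plan

# Example: transit is free when a fragment stays on its current holder.
devices = {"d1": 2, "d2": 1, "d3": 1}
proc = {"d1": 4.0, "d2": 1.0, "d3": 2.0}
transit = {d: {e: (0.0 if d == e else 1.5) for e in devices} for d in devices}
print(greedy_allocate([("f1", "d1"), ("f2", "d1"), ("f3", "d2")], devices, proc, transit))
```

A greedy pass like this is cheap enough to run on the leader device itself, which matters because the paper explicitly counts the resource cost of the distribution process as part of the total cost.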
- Determining the characteristics of a random process by comparing them with values based on models of distribution laws
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1105-1118
The effectiveness of communication and data transmission systems, which are an integral part of modern systems in almost any field of science and technology, largely depends on the stability of the frequency of the generated signals. The signals generated in such systems can be considered as processes whose frequency changes under a combination of external influences. A change in signal frequency reduces the signal-to-noise ratio (SNR) and, consequently, degrades system characteristics such as the bit error probability and throughput. Such frequency changes are most conveniently described as random processes, an apparatus widely used in mathematical models of systems and devices in various fields of science and technology. Moreover, in many cases the characteristics of a random process, such as its distribution law, mathematical expectation, and variance, may be unknown or known with errors too large to obtain acceptably accurate estimates of the signal parameters. The article proposes an algorithm for determining the characteristics of a random process (the signal frequency) from a set of samples of its frequency; the algorithm yields the sample mean, the sample variance, and the distribution law of the frequency deviations in the general population. It is based on comparing the values of the observed random process, measured over a certain time interval, with a set of the same number of random values generated from model distribution laws. Model distribution laws may be based on mathematical models of the systems and devices in question or may correspond to similar systems and devices. When a set of random values is generated for an accepted model law, the sample mean and variance obtained from the measurements of the observed process are used as its mathematical expectation and variance. A distinctive feature of the algorithm is that the measured values of the observed process, ordered in ascending or descending order, are compared with the generated sets of values for the accepted model distribution laws. Results of mathematical modeling illustrating the algorithm are presented.
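A minimal sketch of the comparison idea follows (our reading of the abstract; the mean-squared discrepancy measure and the candidate laws are illustrative assumptions): order the measured values, generate an equally sized ordered sample from each candidate law parameterized by the sample mean and variance, and pick the law with the smallest discrepancy between order statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_fitting_law(observed, candidate_samplers):
    """observed: 1-D array of measured frequency deviations.
    candidate_samplers: dict name -> callable(mean, std, size) returning a sample."""
    x = np.sort(observed)                         # measured values, ordered
    mean, std = x.mean(), x.std(ddof=1)           # sample mean and std of the observations
    scores = {}
    for name, sampler in candidate_samplers.items():
        y = np.sort(sampler(mean, std, x.size))   # model sample of the same size, ordered
        scores[name] = np.mean((x - y) ** 2)      # discrepancy between order statistics
    return min(scores, key=scores.get), scores

candidates = {
    "normal":  lambda m, s, n: rng.normal(m, s, n),
    # uniform law with the same mean and std: half-width = s * sqrt(3)
    "uniform": lambda m, s, n: rng.uniform(m - s * 3**0.5, m + s * 3**0.5, n),
}
obs = rng.normal(10.0, 0.5, 500)                  # synthetic "measured" frequencies
print(best_fitting_law(obs, candidates))
```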
- Modelling of astrocyte morphology with a space colonization algorithm
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 465-481
We examine a phenomenological algorithm for generating the morphology of astrocytes, a major class of glial brain cells, based on morphometric data on rat protoplasmic astrocytes and on general trends of cell development in vivo reported in the current literature. We adapted the Space Colonization Algorithm (SCA) for procedural generation of astrocytic morphology from scratch. The attractor points used in the generation were spatially distributed in the model volume according to the synapse distribution density in rat hippocampus tissue during the first week of postnatal brain development. We analyzed and compared astrocytic morphology reconstructions at different stages of brain development using morphometric measures such as Sholl analysis, the number of bifurcations, the number of terminals, the total tree length, and the maximum branching order. Using morphometric data from protoplasmic astrocytes of rats of different ages, we selected generation parameters that yield the most realistic three-dimensional cell morphology models. We demonstrate that the proposed algorithm not only produces the geometry of individual cells but also recreates the tiling domain organization observed in cell populations. In our algorithm, tiling emerges from the competition of cells for territory and the assignment of unique attractor points to their processes, which then become unavailable to other cells and their processes. We further extend the original algorithm by splitting morphology generation into two phases, simulating the development of the astrocyte tree during the first and the third-fourth weeks of rat postnatal brain development: rapid space exploration in the first stage and extensive branching in the second. To this end, we introduce two attractor types that separate the two growth strategies in time. We hypothesize that the extended algorithm with dynamic attractor generation can explain the formation of fine astrocyte structures and the maturation of astrocytic arborizations.
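For orientation, one growth iteration of a generic Space Colonization Algorithm looks as follows (a textbook SCA step, not the authors' exact parameterization; the radii and the attractor set are illustrative):

```python
import numpy as np

def sca_step(nodes, attractors, influence_r=20.0, kill_r=4.0, step=2.0):
    """One SCA growth iteration: each attractor pulls its nearest tree node;
    pulled nodes sprout toward the mean direction of their attractors;
    attractors within kill_r of any node are consumed. Consumed attractors
    becoming unavailable to other trees is what yields tiling when several
    cells compete for a shared attractor pool."""
    nodes = list(nodes)
    pull = {}                                   # node index -> list of unit directions
    for a in attractors:
        dists = [np.linalg.norm(a - n) for n in nodes]
        i = int(np.argmin(dists))
        if dists[i] < influence_r:
            pull.setdefault(i, []).append((a - nodes[i]) / dists[i])
    for i, dirs in pull.items():
        v = np.mean(dirs, axis=0)               # average pull on this node
        norm = np.linalg.norm(v)
        if norm > 1e-9:
            nodes.append(nodes[i] + step * v / norm)
    attractors = [a for a in attractors
                  if min(np.linalg.norm(a - n) for n in nodes) > kill_r]
    return nodes, attractors

rng = np.random.default_rng(1)
nodes = [np.zeros(3)]                           # soma at the origin
attractors = list(rng.uniform(-30, 30, size=(200, 3)))
for _ in range(10):
    nodes, attractors = sca_step(nodes, attractors)
print(len(nodes), "nodes,", len(attractors), "attractors left")
```

The paper's two-phase extension can be seen as running this loop twice with different attractor populations: sparse attractors for rapid space exploration, then dense ones for extensive branching.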
- Deriving semantics from WS-BPEL specifications of parallel business processes on an example
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 445-454
WS-BPEL is a widely accepted standard for the specification of distributed and parallel business processes. The standard blends the algebraic and Petri-net paradigms, and as a result it is easy to specify a WS-BPEL business process with unwanted behavior. This is why verification of WS-BPEL business processes is very important. The intent of this paper is to show some possibilities for converting WS-BPEL processes into more formal specifications that can be verified. CSP and Z notation are used as formal models. Z notation is useful for specifying abstract data types, and web services can be viewed as a kind of abstract data type.
- Numerical studies of the parameters of the perturbed region formed in the lower ionosphere under the action of a directed radio-wave flux from a ground-based source
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 679-708
The paper presents a physico-mathematical model, obtained as a result of comprehensive theoretical studies, of the perturbed region formed in the lower D-layer of the ionosphere under the action of a directed radio emission flux from a ground-based facility in the megahertz frequency range. The model is based on the consideration of a wide range of kinetic processes, taking their nonequilibrium nature into account, in a two-temperature approximation describing the transformation of the radio-beam energy absorbed by electrons. The initial data correspond to the radio emission achieved by the most powerful radio-heating facilities; their basic characteristics and principles of operation, and the features of the altitude distribution of the absorbed electromagnetic energy of the radio beam, are briefly described. The paper demonstrates the decisive role of the D-layer of the ionosphere in the absorption of the radio-beam energy. On the basis of theoretical analysis, analytical expressions are obtained for the contributions of various inelastic processes to the distribution of the absorbed energy, which makes it possible to describe the contribution of each process correctly. The model considers more than 60 components, whose concentrations change in about 160 reactions. The reactions are divided into five groups according to their physical content: an ionization-chemical block, a block of excitation of metastable electronic states, a cluster block, a block of excitation of vibrational states, and a block of impurities. The blocks are interrelated and can be computed jointly or separately. The behavior of the parameters of the perturbed region differs significantly between daytime and nighttime conditions at the same radio flux density: under daytime conditions the maxima of the electron concentration and temperature lie at an altitude of ~45–55 km, at night at ~80 km, with the temperature of heavy particles rising rapidly, which gives rise to a gas-dynamic flow. A special numerical algorithm was therefore developed to solve two basic problems, kinetic and gas-dynamic. Based on the altitude and time behavior of the concentrations and temperatures, the algorithm determines the ionization and emission of the ionosphere in the visible and infrared spectral ranges, which makes it possible to evaluate the influence of the perturbed region on radio-engineering and optoelectronic devices used in space technology.
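Schematically, the two-temperature approximation mentioned above amounts to separate energy balances for electrons and heavy particles coupled through collisions (a generic form for orientation only; the authors' system is far more detailed):
$$
\frac{3}{2} n_e k \frac{dT_e}{dt} = Q_{\mathrm{abs}} - \sum_j Q_j^{\mathrm{inel}} - \delta\,\nu_{e}\,\frac{3}{2} n_e k\,(T_e - T),
\qquad
\frac{3}{2} N k \frac{dT}{dt} = \delta\,\nu_{e}\,\frac{3}{2} n_e k\,(T_e - T) + Q^{\mathrm{relax}},
$$
where $Q_{\mathrm{abs}}$ is the radio-beam energy absorbed by electrons, the $Q_j^{\mathrm{inel}}$ are losses to the inelastic channels (ionization, excitation of electronic and vibrational states), $\nu_{e}$ is the elastic collision frequency, $\delta$ is the fraction of energy transferred per collision, and $Q^{\mathrm{relax}}$ collects the heating of heavy particles through relaxation of excited states.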
- Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning's power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and automatic relevance determination (ARD) with Bayesian deep neural networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for the noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. A truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where the weights have a probability distribution, we obtain a variational bound similar to that of the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature; our ARD approach achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) image classification dataset and a dataset of macroscopic images of wood, compiled from several such datasets. Our method is applied to established architectures such as the Visual Geometry Group (VGG) network and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduced overfitting while maintaining, or even improving, the accuracy of the network's predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
- Superscale simulation of magnetic states and reconstruction of the ordering types for nanodot arrays
Computer Research and Modeling, 2011, v. 3, no. 3, pp. 309-318
We consider two possible computational methods for interpreting experimental data obtained by magnetic force microscopy. These methods of simulating and reconstructing the macrospin distribution can be used to study magnetization reversal processes of nanodots in ordered 2D arrays. New approaches are proposed to the development of high-performance superscale algorithms, executed in parallel on supercomputer clusters, for solving the direct and inverse problems of modeling the magnetic states, ordering types, and reversal processes of nanosystems with collective behavior. The simulation results are consistent with the experimental ones.
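In macrospin models of nanodot arrays, the collective behavior is typically driven by the dipole-dipole interaction between the dot moments $\mathbf{m}_i$; the standard expression is given here for orientation (the paper's energy functional may include further terms such as anisotropy and the external field):
$$
E_{\mathrm{dip}} = \frac{\mu_0}{4\pi}\sum_{i<j}\frac{\mathbf{m}_i\cdot\mathbf{m}_j - 3\,(\mathbf{m}_i\cdot\hat{\mathbf{r}}_{ij})(\mathbf{m}_j\cdot\hat{\mathbf{r}}_{ij})}{r_{ij}^{3}},
$$
where $\mathbf{r}_{ij}$ is the vector between dots $i$ and $j$. The $O(N^2)$ pair sum is what makes parallel superscale algorithms necessary for large arrays.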
- On a particular model of a mixture of probability distributions in radio measurements
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 563-568
This paper presents a model of a mixture of probability distributions of signal and noise. Typically, when analyzing data under conditions of uncertainty, nonparametric tests have to be used. However, such an analysis of nonstationary data may be ineffective when there is uncertainty about the mean of the distribution and its parameters. The proposed model makes it feasible to handle a priori nonparametric uncertainty in signal processing when the signal and the noise belong to different general populations.
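The signal-plus-noise mixture itself has the standard two-component form (notation ours):
$$
p(x) = \pi\, f_s(x) + (1-\pi)\, f_n(x), \qquad 0 \le \pi \le 1,
$$
where $f_s$ and $f_n$ are the signal and noise densities and $\pi$ is the mixing weight, i.e., the prior probability that an observation contains the signal.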
- JINR Tier-1-level computing system for the CMS experiment at LHC: status and perspectives
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 455-462
The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. A distributed data analysis system for processing and further analysis of CMS experimental data has been developed, and its model foresees the obligatory use of modern grid technologies. The CMS computing model makes use of a hierarchy of computing centers (Tiers). The Joint Institute for Nuclear Research (JINR) takes an active part in the CMS experiment. To provide a proper computing infrastructure for the CMS experiment at JINR and for the Russian institutes collaborating in CMS, a Tier-1 center for the CMS experiment is being constructed at JINR. The main tasks and services of the CMS Tier-1 at JINR are described, and the status and prospects of the center are presented.