- Monte Carlo simulation of nonequilibrium critical behavior of 3D Ising model
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 119-129
The influence of non-equilibrium initial states and structural disorder on the anomalously slow non-equilibrium critical behavior of the three-dimensional Ising model is investigated. Unique ageing properties and violations of the equilibrium fluctuation-dissipation theorem are observed for the pure and disordered systems considered, which were prepared in a high-temperature initial state and then quenched to their critical points. A heat-bath algorithm description of ageing in the non-equilibrium critical behavior of the three-dimensional Ising model with spin concentrations p = 1.0, 0.8, and 0.6 is realized. On the basis of an analysis of two-time quantities such as the autocorrelation function and the dynamical susceptibility, the ageing effects are demonstrated and the asymptotic values of the universal fluctuation-dissipation ratio are calculated for these systems. It is shown that the presence of defects enhances ageing.
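For orientation, here is a minimal sketch of a heat-bath update sweep for a site-diluted 3D Ising lattice of the kind described above; the lattice size, inverse temperature and spin concentration are placeholder values and do not reproduce the authors' simulation setup.

```python
import numpy as np

def heat_bath_sweep(spins, occupied, beta, rng):
    """One heat-bath sweep over a site-diluted 3D Ising lattice.

    spins    : array of +-1 spins (value ignored on empty sites)
    occupied : boolean mask of occupied sites (spin concentration p)
    beta     : inverse temperature 1/T
    """
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            for z in range(L):
                if not occupied[x, y, z]:
                    continue
                # local field from the six nearest neighbours (periodic boundaries)
                h = 0.0
                for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nx, ny, nz = (x+dx) % L, (y+dy) % L, (z+dz) % L
                    if occupied[nx, ny, nz]:
                        h += spins[nx, ny, nz]
                # heat-bath rule: spin set to +1 with probability 1/(1 + exp(-2*beta*h))
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
                spins[x, y, z] = 1 if rng.random() < p_up else -1
    return spins

# illustrative usage: evolve a disordered high-temperature state (p = 0.8) after a quench
rng = np.random.default_rng(0)
L, p, beta = 16, 0.8, 0.22          # beta is a placeholder, not the actual critical point
occupied = rng.random((L, L, L)) < p
spins = rng.choice([-1, 1], size=(L, L, L))
for sweep in range(100):
    heat_bath_sweep(spins, occupied, beta, rng)
```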
- JINR TIER-1-level computing system for the CMS experiment at LHC: status and perspectives
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 455-462
The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. A distributed data analysis system for processing and further analysis of CMS experimental data has been developed, and this model foresees the obligatory use of modern grid technologies. The CMS Computing Model makes use of a hierarchy of computing centers (Tiers). The Joint Institute for Nuclear Research (JINR) takes an active part in the CMS experiment. In order to provide a proper computing infrastructure for the CMS experiment at JINR and for Russian institutes collaborating in CMS, a Tier-1 center for the CMS experiment is being constructed at JINR. The main tasks and services of the CMS Tier-1 at JINR are described, and the status and perspectives of the Tier-1 center for the CMS experiment at JINR are presented.
- Computer aided analysis of medical image recognition for example of scintigraphy
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 541-548
The practical application of nuclear medicine demonstrates a continued shortage of algorithms and programs for the visualization and analysis of medical images. The aim of the study was to determine the principles of optimizing the processing of planar osteoscintigraphy on the basis of computer-aided diagnosis (CAD) applied to the texture descriptions of metastatic zones on planar skeletal scintigrams. A computer-aided diagnosis system for the analysis of skeletal metastases based on planar scintigraphy data has been developed. This system includes skeleton image segmentation, calculation of textural, histogram and morphometric parameters, and the creation of a training set. To study the textural characteristics of metastatic images on planar skeletal scintigrams, a computer program for the automatic analysis of skeletal metastases from planar scintigraphy data was developed. Expert evaluation was used to distinguish 'pathological' (metastatic) from 'physiological' (non-metastatic) zones of radiopharmaceutical hyperfixation, in which Haralick's textural features were determined: autocorrelation, contrast, fourth moment and heterogeneity. On planar scintigrams of patients with metastatic breast cancer, foci of radiopharmaceutical hyperfixation were identified and histogram parameters such as brightness, smoothness, the third moment of brightness, brightness uniformity and brightness entropy were calculated. It was established that in most areas of the skeleton the histogram parameter values in zones of pathological radiopharmaceutical hyperfixation exceed the corresponding values in physiological zones. Most often, pathological hyperfixation on both anterior and posterior scintigrams shows higher brightness and image smoothness compared with physiological hyperfixation. Individual histogram-analysis indicators can be used to refine the diagnosis of metastases in the mathematical modeling and interpretation of bone scintigraphy.
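As an illustration of how Haralick-type texture descriptors can be computed from a grayscale region of interest, here is a minimal sketch using scikit-image's gray-level co-occurrence matrix utilities; the synthetic image and the parameter choices are hypothetical and not taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# hypothetical 8-bit grayscale region of interest cut from a scintigram
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# gray-level co-occurrence matrix for a one-pixel offset in four directions
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

# Haralick-type descriptors averaged over the four directions
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "correlation", "ASM")}
print(features)
```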
- Development of methodology for computational analysis of thermo-hydraulic processes proceeding in fast-neutron reactor with FlowVision CFD software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 87-94
An approach to the numerical analysis of thermo-hydraulic processes proceeding in a fast-neutron reactor is described in this article. The description covers the physical models, numerical schemes and geometry simplifications accepted in the computational model. Steady-state and dynamic regimes of reactor operation are considered: the steady-state regimes simulate the reactor operation at nominal power, and the dynamic regimes simulate the cooling of the shutdown reactor by means of the heat-removal system.
Simulation of thermo-hydraulic processes is carried out in the FlowVision CFD software. A mathematical model describing the coolant flow in the first loop of the fast-neutron reactor was developed on the basis of the available geometrical model. The flow of the working fluid in the reactor simulator is calculated under the assumption that the fluid density does not depend on pressure, with use of a $k$–$\varepsilon$ turbulence model, with use of a model of dispersed medium, and with account of conjugate heat exchange. The model of dispersed medium implemented in the FlowVision software allowed taking into account heat exchange between the heat-exchanger loops. Due to the geometric complexity of the core region, the zones occupied by the two heat exchangers were modeled by hydraulic resistances and heat sources.
Numerical simulation of the coolant flow in the FlowVision software enabled obtaining the distributions of temperature, velocity and pressure in the entire computational domain. Using the model of dispersed medium allowed calculation of the temperature distributions in the second loops of the heat exchangers. Besides that, the variation of the coolant temperature along the two thermal probes is determined. The probes were located in the cool and hot chambers of the fast-neutron reactor simulator. Comparative analysis of the numerical and experimental data has shown that the developed mathematical model is correct and, therefore, it can be used for simulation of thermo-hydraulic processes proceeding in fast-neutron reactors with sodium coolant.
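For reference, the $k$–$\varepsilon$ turbulence model mentioned above is most commonly written in the following standard form (given here as the textbook formulation; the exact variant and constants implemented in FlowVision may differ):
$$\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho\mathbf{u}k) = \nabla\cdot\left[\left(\mu+\frac{\mu_t}{\sigma_k}\right)\nabla k\right] + P_k - \rho\varepsilon,$$
$$\frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\cdot(\rho\mathbf{u}\varepsilon) = \nabla\cdot\left[\left(\mu+\frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}, \qquad \mu_t = \rho C_\mu\frac{k^2}{\varepsilon},$$
with the usual constants $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$, where $P_k$ is the production of turbulent kinetic energy.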
- Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327
The work continues research on the ability of a person to improve the productivity of information processing by using parallel work or by improving the performance of analyzers. A person receives a series of tasks whose solution requires processing a certain amount of information; the time and the validity of each decision are recorded. The dependence of the average solution time on the amount of information in the problem is determined from the correctly solved problems. In accordance with the proposed method, the problems contain calculations of expressions in two algebras, one of which is associative and the other non-associative. To facilitate the work of the subjects, figurative graphic images of the algebra elements were used in the experiment. Non-associative calculations were implemented in the form of the game 'rock-paper-scissors': it was necessary to determine the winning symbol in a long line of these figures, considering that they appear sequentially from left to right and each plays against the previous winning symbol. Associative calculations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present; in each problem at most one picture was missing. Computation in an associative algebra allows parallel counting, while in the absence of associativity only sequential computation is possible. Therefore, the analysis of the time for solving a series of problems reveals uniform sequential, accelerated sequential and parallel computing strategies. In the experiments it was found that all subjects used a uniform sequential strategy to solve the non-associative problems. For the associative task, all subjects used parallel computing, and some used parallel computing with acceleration as the complexity of the task grew. A small portion of the subjects, at high complexity, judging by the evolution of the solution time, supplemented the parallel count with a sequential stage of calculations (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information; it allowed us to estimate the level of parallelism of the calculation in the associative task, and parallelism of level two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition; apparently, the difference reflects the time actually spent on the calculation process. For the associative problem with the minimum amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably because, for a small number of characters, recognition almost exhausts the calculations in the non-associative problem used.
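A minimal sketch of the non-associative 'rock-paper-scissors' computation described above, in which symbols are processed strictly from left to right and each new symbol plays against the previous winner; the encoding of the symbols is our own, not the paper's.

```python
# rock-paper-scissors as a non-associative binary operation: the winner of x and y
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(x, y):
    """Return the symbol that wins when x plays y (x keeps the round on a tie)."""
    return x if x == y or BEATS[x] == y else y

def sequential_winner(symbols):
    """Left-to-right fold: each new symbol plays against the current winner.

    Because the operation is non-associative, the result depends on this fixed
    order, so the computation cannot be split into independent parallel parts.
    """
    current = symbols[0]
    for s in symbols[1:]:
        current = winner(current, s)
    return current

print(sequential_winner(["rock", "paper", "paper", "scissors", "rock"]))  # -> rock
```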
- CFD analysis of hemodynamics in idealized abdominal aorta-renal artery junction: preliminary study to locate atherosclerotic plaque
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 695-706
Atherosclerotic diseases such as carotid artery disease (CAD) and chronic kidney disease (CKD) are major causes of death worldwide. The onset of these atherosclerotic diseases in the arteries is governed by complex blood flow dynamics and hemodynamic parameters. Atherosclerosis in the renal arteries leads to a reduction in arterial efficiency, which ultimately leads to renovascular hypertension. This work attempts to identify the localization of atherosclerotic plaque in the human abdominal aorta-renal artery junction using computational fluid dynamics (CFD).
The atherosclerosis-prone regions in an idealized human abdominal aorta-renal artery junction are identified by calculating relevant hemodynamic indicators from computational simulations using the rheologically accurate shear-thinning Yeleswarapu model for human blood. Blood flow is numerically simulated in a 3-D model of the artery junction using ANSYS FLUENT v18.2.
Hemodynamic indicators calculated are the average wall shear stress (AWSS), the oscillatory shear index (OSI), and the relative residence time (RRT). Simulations of pulsatile flow (f = 1.25 Hz, Re = 1000) show that low AWSS and high OSI manifest in the region of the renal artery downstream of the junction and on the infrarenal section of the abdominal aorta lateral to the junction. High RRT, which is a relative index dependent on AWSS and OSI, is found to overlap with low AWSS and high OSI at the cranial surface of the renal artery proximal to the junction and on the surface of the abdominal aorta lateral to the bifurcation: this indicates that these regions of the junction are prone to atherosclerosis. The results match qualitatively with findings reported in the literature and serve as an initial step to illustrate the utility of CFD for locating atherosclerotic plaque.
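For reference, these hemodynamic indicators are commonly defined over one pulsatile period $T$ in terms of the wall shear stress vector $\boldsymbol{\tau}_w$ (standard definitions; the paper may use equivalent variants):
$$\mathrm{AWSS} = \frac{1}{T}\int_0^T|\boldsymbol{\tau}_w|\,dt, \qquad \mathrm{OSI} = \frac{1}{2}\left(1-\frac{\left|\int_0^T\boldsymbol{\tau}_w\,dt\right|}{\int_0^T|\boldsymbol{\tau}_w|\,dt}\right), \qquad \mathrm{RRT} = \frac{1}{(1-2\,\mathrm{OSI})\cdot\mathrm{AWSS}}.$$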
- Simulation of pollution migration processes at municipal solid waste landfills
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 369-385
The article reports the findings of an investigation into pollution migration processes at the municipal solid waste (MSW) landfill located in the water protection zone of Lake Seliger (Tver Region). The distribution of pollutants is investigated and migration parameters are determined in field and laboratory conditions at the landfill site. A mathematical model describing the physical and chemical processes of substance migration in soil strata is constructed. Pollutant migration is found to be due to a variety of factors; the major ones, having a significant impact on the migration of MSW ingredients and taken into account mathematically, include convective transport, diffusion and sorption processes. The modified mathematical model differs from its conventional counterparts by considering a number of parameters reflecting the decrease in the concentration of ammonium and nitrate nitrogen ions in groundwater (transpiration by plant roots, dilution with infiltration waters, etc.). An analytical solution to assess the pollutant spread from the landfill is presented. The mathematical model gives rise to a set of simulation models that help obtain computational solutions of specific problems of vertical and horizontal migration of substances in the underground flow. Using numerical experiments, analytical solutions, as well as field and laboratory data, the dynamics of pollutant distribution from the object under study up to the lake was studied, and a long-term forecast for the spread of landfill pollution was made. Simulation experiments showed that during pollution migration from the landfill some zones of clean groundwater interact with zones of contaminated groundwater, each characterized by a different pollutant content. The data of the computational experiments and analytical calculations are consistent with the findings of field and laboratory investigations of the object and give grounds to recommend the proposed models for predicting pollution migration from a landfill. The analysis of the pollution migration simulation makes it possible to substantiate numerical estimates of the increase in $NH_4^+$ and $NO_3^-$ ion concentrations with landfill operation time. It is found that, 100 years after the landfill opening, toxic filtrate components will fill the entire pore space from the landfill to the lake, resulting in a significant deterioration of the ecosystem of Lake Seliger.
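As an illustration of the type of transport equation underlying such models, a generic one-dimensional convection-dispersion equation with equilibrium sorption (retardation factor $R$) and a lumped first-order loss term can be written as
$$R\,\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - v\,\frac{\partial C}{\partial x} - \lambda C, \qquad R = 1 + \frac{\rho_b K_d}{n},$$
where $C$ is the dissolved concentration (e.g., of $NH_4^+$ or $NO_3^-$), $D$ the dispersion coefficient, $v$ the pore-water velocity, $\lambda$ a lumped decay/loss rate, $\rho_b$ the bulk density of the soil, $K_d$ the sorption distribution coefficient and $n$ the porosity. This textbook form is given for orientation only; the modified model of the article accounts for additional loss mechanisms (transpiration by plant roots, dilution with infiltration waters, etc.).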
- Computational algorithm for solving the nonlinear boundary-value problem of hydrogen permeability with dynamic boundary conditions and concentration-dependent diffusion coefficient
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1179-1193The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material heated to a sufficiently high temperature serves as the partition in the vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass-spectrometry in the vacuum maintained at the outlet side.
A linear dependence on concentration is adopted for the coefficient of diffusion of dissolved atomic hydrogen in the bulk; the temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives are included both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (higher-) order accuracy is suggested for solving the corresponding nonlinear boundary-value problem based on explicit-implicit difference schemes. To avoid solving a nonlinear system of equations at every time step, we apply the explicit component of the difference scheme to the slower sub-processes.
The results of numerical modeling are presented to confirm the fitness of the model to experimental data. The degrees of impact of variations in hydrogen permeability parameters (“derivatives”) on the penetrating flux and the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model in the analysis of extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), identifying the limiting factors under specific operating conditions, and saving on costly experiments (especially in deuterium-tritium investigations).
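Schematically, this class of boundary-value problems couples nonlinear diffusion in the bulk with ordinary differential equations for the surface concentrations; a generic illustrative form (the notation and exact coupling terms here are ours, not necessarily those of the article) is
$$\frac{\partial c}{\partial t} = \frac{\partial}{\partial x}\left(D(c)\frac{\partial c}{\partial x}\right), \qquad D(c) = D_0(T)\,(1+\alpha c), \qquad D_0(T) = \bar D\,\exp\!\left(-\frac{E_D}{RT}\right),$$
with dynamic boundary conditions of the type
$$\frac{dq_0}{dt} = \mu s p - b q_0^2 + J_0(t), \qquad \frac{dq_\ell}{dt} = -b q_\ell^2 + J_\ell(t),$$
where $q_0$, $q_\ell$ are the surface concentrations of atomic hydrogen at the inlet and outlet sides, $\mu s p$ is the dissociative adsorption flux at inlet pressure $p$, $bq^2$ is the desorption flux, and $J_0$, $J_\ell$ denote the exchange fluxes with the bulk.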
- Deriving specifications of dependable systems
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1637-1650
Although human skills are heavily involved in the Requirements Engineering process, in particular in requirements elicitation, analysis and specification, methodology and formalism still play a determining role in providing clarity and enabling analysis. In this paper, we propose a method for deriving formal specifications, which are applicable to dependable software systems. First, we clarify what a method itself is: computer science has a proliferation of languages and methods, but the difference between the two is not always clear. This is a conceptual contribution. Furthermore, we propose the idea of Layered Fault Tolerant Specification (LFTS). The principle consists in layering specifications in (at least) two different layers: one for normal behaviors and others (if more than one) for abnormal behaviors. Abnormal behaviors are described in terms of an Error Injector (EI), which represents a model of the expected erroneous interference coming from the environment. This structure has been inspired by the notion of an idealized fault tolerant component, but the combination of LFTS and EI using rely-guarantee thinking to describe interference is our second contribution. The overall result is the definition of a method for the specification of systems that do not run in isolation but in the real, physical world. We propose an approach that is pragmatic for its target audience: techniques must scale and be usable by non-experts if they are to make it into an industrial setting. This article makes tentative steps, but recent trends in Software Engineering such as microservices, smart and software-defined buildings, M2M micropayments and DevOps are relevant fields in which to continue the investigation of dependability and rely-guarantee thinking.
Keywords: formal methods, dependability.
- Classifier size optimisation in segmentation of three-dimensional point images of wood vegetation
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 665-675The advent of laser scanning technologies has revolutionized forestry. Their use made it possible to switch from studying woodlands using manual measurements to computer analysis of stereo point images called point clouds.
Automatic calculation of some tree parameters (such as trunk diameter) using a point cloud requires the removal of foliage points. To perform this operation, a preliminary segmentation of the stereo image into the “foliage” and “trunk” classes is required. The solution to this problem often involves the use of machine learning methods.
One of the most popular classifiers used for segmentation of stereo images of trees is the random forest. This classifier is quite demanding in terms of memory. At the same time, the size of a machine learning model can be critical if it needs to be transmitted over a network, which is required, for example, when performing distributed learning. In this paper, the goal is to find a classifier that is less demanding in terms of memory but has comparable segmentation accuracy. The search is performed among classifiers such as logistic regression, the naive Bayes classifier, and the decision tree. In addition, a method for refining the segmentation produced by a decision tree using logistic regression is investigated.
The experiments were conducted on data from the collection of the University of Heidelberg. The collection contains hand-marked stereo images of trees of various species, both coniferous and deciduous, typical of the forests of Central Europe.
It has been shown that classification using a decision tree refined with logistic regression is able to produce a result that is only slightly inferior in accuracy to that of a random forest, while spending less time and RAM. The difference in balanced accuracy is no more than one percent on all the clouds considered, while the total size and inference time of the decision tree and logistic regression classifiers are an order of magnitude smaller than those of the random forest classifier.
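To make the comparison concrete, here is a minimal sketch of how such a memory/accuracy trade-off can be measured with scikit-learn; the feature matrix, labels and hyperparameters are placeholders rather than the actual point-cloud features and settings of the study.

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# placeholder per-point features (e.g., local geometric descriptors) and labels: 0 = foliage, 1 = trunk
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=20000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "decision tree": DecisionTreeClassifier(max_depth=12, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = balanced_accuracy_score(y_te, model.predict(X_te))
    size_kb = len(pickle.dumps(model)) / 1024  # serialized model size as a memory proxy
    print(f"{name:20s} balanced accuracy = {acc:.3f}, size = {size_kb:.1f} KiB")
```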