-
Calculating technogenic vibrations in urban environments using the grid-characteristic method
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1119-1129
Amid the ongoing trend of rapid urbanization and the intensive development of megacities and large cities worldwide, the impact of man-made vibrations on residential structures and infrastructure is increasing. The operation of subway systems, construction using pile-driving and drilling equipment, and heavy traffic have become active sources of wave disturbances, which can be a decisive factor in reducing the structural stability of buildings and, consequently, their long-term reliability. This paper proposes a numerical calculation using the grid-characteristic method to model elastic waves propagating through soil layers and load-bearing structures from various sources. By solving the direct problem of numerical pulse simulation and varying its location, the values of velocity vector projections and components of the Cauchy stress tensor were obtained at each time step. Two scenarios were examined: the first simulates the impact of noise generated by construction work or nearby traffic, while the second demonstrates how a subway running through an underground tunnel affects multi-story residential buildings. Wave propagation patterns from these sources were visualized in terms of the parameters of interest, enabling a quick and convenient comprehensive analysis of the problem. The analysis of the obtained data will help adjust the timing and types of repair work, identify structural weak points, and develop innovative methods for preserving historical buildings that are cultural heritage sites. Additionally, it will allow for the most economically optimal construction of modern buildings near architectural landmarks, provide an efficient and safe action plan in emergencies, and modernize existing construction technologies to enhance the comfort of residential buildings, office structures, and other socially significant facilities.
It will also aid in selecting the most suitable locations for modern high-precision manufacturing plants.
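The core idea of the grid-characteristic method, rewriting the elastic system in Riemann invariants and transporting them along characteristics, can be sketched in one spatial dimension. This is an illustrative toy with a first-order upwind update, not the authors' solver; all names are our own:

```python
def gc_step(v, s, rho, c, tau, h):
    """One grid-characteristic step for the 1D elastic system
    v_t = s_x / rho, s_t = rho * c**2 * v_x (v: velocity, s: stress)."""
    z = rho * c                      # acoustic impedance
    # Riemann invariants: w+ travels right, w- travels left at speed c
    wp = [vi - si / z for vi, si in zip(v, s)]
    wm = [vi + si / z for vi, si in zip(v, s)]
    cr = c * tau / h                 # Courant number, must be <= 1
    n = len(v)
    # first-order upwind transport along each family of characteristics
    wp_new = [wp[0]] + [wp[i] + cr * (wp[i - 1] - wp[i]) for i in range(1, n)]
    wm_new = [wm[i] + cr * (wm[i + 1] - wm[i]) for i in range(n - 1)] + [wm[-1]]
    # recover velocity and stress from the updated invariants
    v_new = [(a + b) / 2 for a, b in zip(wp_new, wm_new)]
    s_new = [z * (b - a) / 2 for a, b in zip(wp_new, wm_new)]
    return v_new, s_new
```

With a Courant number of 1 the transport is exact, and an initial velocity pulse splits into two half-amplitude waves running in opposite directions: the elementary pattern from which multidimensional wave pictures are assembled.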
-
Hybrid neural network for predicting coating characteristics in flame spraying
Computer Research and Modeling, 2026, v. 18, no. 1, pp. 101-116
The paper presents a hybrid artificial neural network model based on an architecture that incorporates a convolutional image encoder (CNN) and an attention module (Attention-based Multiple Instance Learning, Attention MIL). This module aggregates informative features from a sequence of frames capturing the flame spraying process. Additional technological parameters (air pressure, propane pressure, and standoff distance) are integrated into the model via a tabular channel, enabling it to account for the relationship between visual data and numerical process regime characteristics. The software implementation was developed using the Streamlit platform and the PyTorch library. It features an interactive interface for model training and result visualization, analysis of attention weights across frames, and a prediction mode for output characteristics: surface roughness ($R_a$) and the mass of the deposited coating ($m$). Experimental studies were conducted on data from real-world technological processes, and a comparative analysis of the accuracy of various model configurations was performed. The results demonstrate that the hybrid neural network, which combines visual and tabular features, achieves higher prediction accuracy compared to models using only a single modality. Furthermore, when comparing different implementations of the hybrid network, it was established that using the attention mechanism to process the series of flame spray images provides a significant increase in accuracy over a simple averaging of features without attention. The application includes an attention visualization module that creates a montage of the most significant frames and displays their attention weights, allowing users to identify which frames had the greatest influence on the prediction. The model’s capability for export to the ONNX format for integration into process control systems is also demonstrated.
The proposed approach showcases the effectiveness of fusing visual and tabular information for manufacturing process monitoring tasks. The model can serve as a foundation for developing a decision support system or an automated quality control system for coatings produced by flame spraying. The limitations of the implemented model and prospects for its further development are also considered.
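The Attention MIL aggregation used to pool per-frame features can be sketched in plain Python. The two-layer scoring form a_i ∝ exp(w · tanh(V f_i)) follows the standard attention-MIL formulation; the shapes and names here are illustrative, not the paper's code:

```python
import math

def attention_mil_pool(features, V, w):
    """Aggregate a bag of per-frame feature vectors into one vector.
    Attention score for frame i: w . tanh(V f_i), softmax-normalized."""
    scores = []
    for f in features:
        hidden = [math.tanh(sum(V[j][k] * f[k] for k in range(len(f))))
                  for j in range(len(V))]
        scores.append(sum(w[j] * hidden[j] for j in range(len(w))))
    m = max(scores)                          # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]         # weights sum to 1
    dim = len(features[0])
    bag = [sum(attn[i] * features[i][k] for i in range(len(features)))
           for k in range(dim)]
    return bag, attn
```

The returned per-frame attention weights are the kind of quantity the visualization module described above displays; a tabular channel would simply concatenate its features to the pooled vector before the regression head.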
-
Methodology and program for the storage and statistical analysis of the results of a computer experiment
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 589-595
The problems of accumulating and statistically analyzing computer experiment results are solved. The main experiment program is considered as the data source. The results of the main experiment are collected on a specially prepared Excel sheet with a pre-organized structure for the accumulation, statistical processing and visualization of the data. The created method and program are used in studying the efficiency of the scientific research carried out by the authors.
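The accumulate-then-summarize pattern that the authors implement on an Excel sheet can be sketched with the Python standard library, with CSV text standing in for the sheet; the column names are hypothetical:

```python
import csv
import io
import statistics

def summarize_results(csv_text, column):
    """Accumulated experiment results -> basic descriptive statistics."""
    rows = csv.DictReader(io.StringIO(csv_text))
    values = [float(r[column]) for r in rows]
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }
```

Each new experiment run appends one row; the summary is recomputed over the whole accumulated table, mirroring the pre-organized sheet structure described above.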
-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586
We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems, by presenting a series of problems of increasing complexity determined by the volume of initial data. The tasks require calculations in an associative or non-associative system of operations. From the way the solution time changes with the number of necessary operations, we can conclude whether the subject solves problems in a purely sequential manner or engages additional brain resources in a parallel mode. In a previously published experimental work, a person solving an associative problem recognized color images with meaningful content. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to a parallel method of processing visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type contains associative calculations and admits a parallel solution algorithm. The other type is a control, containing problems in which calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative; a parallel strategy significantly speeds up the solution with relatively small additional resources. As a control series of problems (to separate parallel work from acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient.
Therefore, the transition of a person to a parallel algorithm for solving this problem is practically impossible, and the processing of input information can be accelerated only by increasing its speed. Comparing the dependence of solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to increasing problem complexity: uniform sequential, accelerated sequential, parallel computing (where possible), or undefined (for this method). The reduced number of subjects who switch to a parallel strategy when the input information is encoded with formal images demonstrates the effectiveness of codes that evoke associations in the subject: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon. It is based on the appearance of a second set of initial data, which arises as a result of the person recognizing the depicted objects.
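The strategy identification rests on comparing how solution time grows with input size for the two task types. A minimal sketch of that comparison follows; the slope-ratio threshold is our illustrative assumption, not the authors' criterion:

```python
def _slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def classify_strategy(sizes, assoc_times, control_times, ratio=0.5):
    """If time on the associative task grows much more slowly with input
    size than on the non-associative control task, infer that the subject
    has switched to a parallel strategy (threshold is illustrative)."""
    sa = _slope(sizes, assoc_times)
    sc = _slope(sizes, control_times)
    return "parallel" if sa < ratio * sc else "sequential"
```

The control task calibrates the subject's sequential speed, so the classification is relative rather than absolute.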
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: Truncated Log-Uniform Prior and Truncated Log-Normal Variational Approximation, and Automatic Relevance Determination (ARD) with Bayesian Deep Neural Networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) for image classification and a dataset of Macroscopic Images of Wood, which is compiled from multiple macroscopic images of wood datasets. Our method is applied to established architectures like Visual Geometry Group (VGG) and Residual Network (ResNet). The results demonstrate significant improvements. The model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
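The ARD pruning step can be illustrated independently of any framework: a weight is dropped when its posterior noise dominates its mean, i.e. when log α = log(σ²/μ²) exceeds a threshold. The threshold value 3 is a common heuristic from the variational-dropout literature, not a figure from this paper:

```python
import math

def ard_prune(means, variances, log_alpha_threshold=3.0):
    """Zero out weights whose ARD log-alpha = log(var / mean^2)
    exceeds the threshold; such weights are dominated by noise."""
    pruned = []
    for mu, var in zip(means, variances):
        if mu == 0.0:
            pruned.append(0.0)       # zero-mean weight carries no signal
            continue
        log_alpha = math.log(var) - 2.0 * math.log(abs(mu))
        pruned.append(mu if log_alpha < log_alpha_threshold else 0.0)
    return pruned
```

In a structured variant, the same criterion is applied per neuron or per channel rather than per weight, which is what makes the pruning hardware-friendly.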
-
Computer-aided analysis of medical image recognition: the example of scintigraphy
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 541-548
The practical application of nuclear medicine demonstrates a continued information deficiency in the algorithms and programs that provide visualization and analysis of medical images. The aim of the study was to determine the principles of optimizing the processing of planar osteoscintigraphy on the basis of computer-aided diagnosis (CAD) for the analysis of textural descriptions of metastatic zones on planar skeletal scintigrams. A computer-aided diagnosis system for the analysis of skeletal metastases based on planar scintigraphy data has been developed. This system includes skeleton image segmentation, calculation of textural, histogram and morphometric parameters, and the creation of a training set. To study the textural characteristics of metastatic images on planar skeletal scintigrams, a computer program for the automatic analysis of skeletal metastases from planar scintigraphy data was developed. Expert evaluation was also used to distinguish ‘pathological’ (metastatic) from ‘physiological’ (non-metastatic) zones of radiopharmaceutical hyperfixation, for which Haralick’s textural features were determined: autocorrelation, contrast, ‘fourth moment’ and heterogeneity. Using this program, built on the principles of computer-aided diagnosis, planar skeletal scintigrams of patients with metastatic breast cancer were examined and foci of radiopharmaceutical hyperfixation were identified. Histogram parameters such as brightness, smoothness, the third moment of brightness, brightness uniformity and brightness entropy were calculated. It has been established that in most areas of the skeleton the histogram parameter values in zones of pathological radiopharmaceutical hyperfixation predominate over the same values in physiological zones.
Most often, pathological hyperfixation on both anterior and posterior scintigrams shows a prevalence of brightness and smoothness of image brightness compared with physiological hyperfixation. Individual histogram-analysis parameters can be used to refine the diagnosis of metastases in the mathematical modeling and interpretation of bone scintigraphy.
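Haralick's features named above are all derived from the grey-level co-occurrence matrix (GLCM). As an illustration, contrast over horizontal neighbor pairs can be computed as follows; this is a minimal sketch, not the study's software:

```python
def glcm_contrast(image):
    """Haralick contrast = sum over (i, j) of P(i, j) * (i - j)**2, where
    P is the normalized co-occurrence of horizontally adjacent grey levels."""
    counts = {}
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):   # horizontal neighbor pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    return sum(c / total * (i - j) ** 2 for (i, j), c in counts.items())
```

A uniform region gives zero contrast, while rapid alternation of grey levels (as in a heterogeneous hyperfixation zone) drives the value up.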
-
Using Docker service containers to build browser-based clinical decision support systems (CDSS)
Computer Research and Modeling, 2026, v. 18, no. 1, pp. 133-147
The article presents a technology for building clinical decision support systems (CDSS) based on service containers using Docker and a web interface that runs directly in the browser without installing specialized software on the clinician's workstation. A modular architecture is proposed in which each application module is packaged as an independent service container combining a lightweight web server, a user interface, and computational components for medical image processing. Communication between the browser and the server side is implemented via a persistent bidirectional WebSocket connection with binary message serialization (MessagePack), which provides low latency and efficient transfer of large data. For local storage of images and analysis results, browser facilities (IndexedDB with the Dexie.js wrapper) are used to speed up repeated data access. Three-dimensional visualization and basic operations with DICOM data are implemented with Three.js and AMI.js: this toolchain supports the integration of interactive elements arising from the task context (annotations, landmarks, markers, 3D models) into volumetric medical images.
Server components and functional modules are assembled as a set of interacting containers managed by Docker. The paper discusses the choice of base images, approaches to minimizing containers down to runtime-only executables without external utilities, and the organization of multi-stage builds with a dedicated build container. It describes a hub service that launches application containers on user request, performs request proxying, manages sessions, and switches a container from shared to exclusive mode at the start of computations. Examples of application modules are provided (fractional flow reserve estimation, quantitative flow ratio computation, aortic valve closure modeling), along with the integration of a React-based interface with a three-dimensional scene, a versioning policy, automated reproducibility checks, and the deployment procedure on the target platform.
It is demonstrated that containerization ensures portability and reproducibility of the software environment, dependency isolation and scalability, while the browser-based interface provides accessibility, reduced infrastructure requirements, and interactive real-time visualization of medical data. Technical limitations are noted (dependence on versions of visualization libraries and data formats) together with practical mitigation measures.
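A multi-stage build with a dedicated build container, as discussed above, typically takes the following shape. The base images, paths, and module entry point here are illustrative placeholders, not the project's actual files:

```dockerfile
# --- build stage: full toolchain, never shipped to users ---
FROM python:3.12-slim AS build
WORKDIR /src
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
COPY . .

# --- runtime stage: interpreter, dependencies and module code only ---
FROM python:3.12-slim
COPY --from=build /install /usr/local
COPY --from=build /src/app /app
EXPOSE 8080
CMD ["python", "-m", "app.server"]
```

Only the second stage is tagged and deployed, so the runtime image stays minimal: the mechanism behind containers reduced to runtime-only components without external utilities.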
-
Signal and noise calculation in Rician data analysis by combining the maximum likelihood technique and the method of moments
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 511-523
The paper develops a new mathematical method for the joint calculation of signal and noise at the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The calculation of the sought-for values of signal and noise is implemented by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations has been obtained for the required signal and noise parameters, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It has been shown that solving the two-parameter task by means of the proposed technique does not increase the demand for computational resources compared with solving the task in the one-parameter approximation. An analytical solution of the task has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates the dependence of the estimation accuracy and dispersion of the sought-for parameters on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion values of the estimated signal and noise parameters calculated by means of the proposed technique change in inverse proportion to the number of measurements in a sample. A comparison has been made between the accuracy of estimating the sought-for Rician parameters by means of the proposed technique and by the earlier developed version of the method of moments.
The problem considered in the paper is meaningful for the purposes of Rician data processing, in particular in magnetic resonance visualization systems, ultrasonic visualization devices, the analysis of optical signals in range-measuring systems, and the analysis of radar signals, as well as in solving many other scientific and applied tasks that are adequately described by the Rice statistical model.
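The method-of-moments half of the combined technique admits a compact closed form: for the Rice distribution, E[A²] = ν² + 2σ² and E[A⁴] = ν⁴ + 8σ²ν² + 8σ⁴, so ν⁴ = 2(E[A²])² − E[A⁴]. A sketch of the resulting estimator follows; it illustrates the classical moment equations, not the paper's combined maximum-likelihood scheme:

```python
import math
import random

def rice_moments_estimate(samples):
    """Estimate (nu, sigma) of a Rice distribution from amplitude samples
    using the 2nd and 4th raw sample moments."""
    n = len(samples)
    m2 = sum(a * a for a in samples) / n
    m4 = sum(a ** 4 for a in samples) / n
    nu4 = max(2.0 * m2 * m2 - m4, 0.0)    # clamp against sampling noise
    nu = nu4 ** 0.25
    sigma2 = max((m2 - nu * nu) / 2.0, 0.0)
    return nu, math.sqrt(sigma2)
```

Rician amplitudes can be simulated as A = √((ν + X)² + Y²) with X, Y ~ N(0, σ²); the estimator's dispersion falls roughly as 1/n, consistent with the sample-size dependence reported in the paper.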
-
Detecting the influence of the upper working roll's vibration on sheet thickness in cold rolling using DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116
Current trends in technical diagnostics involve the application of FEM computer simulation, which allows, to some extent, the replacement of real experiments, reducing research costs and minimizing risks. Computer simulation, already at the research and development stage, allows diagnostics of equipment to detect permissible fluctuations in its operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including accuracy. The design of techniques for technical diagnosis and diagnostic modelling is therefore very important. A computer simulation of the cold rolling of a strip was carried out, in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data for a continuous 1700 rolling mill. The vibration of the working roll in the stand arose from the gap between the roll chock and the stand guide and led to periodic fluctuations of the strip thickness. The computer simulation in DEFORM produced a strip with longitudinal and transverse thickness variation. Visualization of the strip's geometrical parameters from the simulation data corresponded to the type of surface inhomogeneity of strips rolled in practice. A further analysis of the thickness variation was performed to identify, on the basis of the simulation, the sources of periodic components of strip thickness caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses about thickness formation can be tested without real experiments, reducing costs of various kinds.
Moreover, in simulation the initial strip thickness has no fluctuations, as opposed to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the change in strip thickness after rolling in one stand coincides with the frequency of the working roll's vibration. The results of the computer simulation correlate with the results of the research on the 1700 mill. Thus, the possibility of applying computer simulation to find the causes of strip thickness variation on an industrial rolling mill is demonstrated.
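The spectral check, that the thickness signal's dominant frequency matches the roll vibration frequency, reduces to locating the peak of the DFT magnitude. A minimal sketch using a naive DFT, adequate for short records and not the study's software:

```python
import cmath
import math

def dominant_frequency_bin(signal):
    """Index k (cycles per record) of the largest nonzero-frequency
    DFT magnitude of a real-valued signal."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k
```

If the simulated thickness record peaks at the same bin as the imposed roll vibration, the vibration is implicated as the source of the periodic thickness component.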
-
Investigation of the averaged model of coked catalyst oxidative regeneration
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 149-161
The article is devoted to the construction and investigation of an averaged mathematical model of the oxidative regeneration of an aluminum-cobalt-molybdenum hydrocracking catalyst. Oxidative regeneration is an effective means of restoring the activity of the catalyst when its granules become coated with coke deposits.
The mathematical model of this process is a nonlinear system of ordinary differential equations, which includes kinetic equations for reagent concentrations and equations for changes in the temperature of the catalyst granule and the reaction mixture as a result of exothermic reactions and heat transfer between the gas and the catalyst layer. Due to the heterogeneity of the oxidative regeneration process, some of the equations differ from standard kinetic ones and are based on empirical data. The article discusses the scheme of chemical interaction in the regeneration process, on the basis of which the material balance equations are compiled. It reflects the direct interaction of coke and oxygen, taking into account the degree of coverage of the granule with carbon-hydrogen and carbon-oxygen complexes, the release of carbon monoxide and carbon dioxide during combustion, and the release of oxygen and hydrogen inside the catalyst granule. The change in the radius and, consequently, the surface area of the coke pellets is taken into account. The adequacy of the developed averaged model is confirmed by an analysis of the dynamics of the concentrations of substances and temperature.
The article presents a numerical experiment for the mathematical model of oxidative regeneration of an aluminum-cobalt-molybdenum hydrocracking catalyst. The experiment was carried out using the Kutta–Merson method. This method belongs to the Runge–Kutta family and is designed to solve stiff systems of ordinary differential equations. The results of the computational experiment are visualized.
The paper presents the dynamics of the concentrations of substances involved in the oxidative regeneration process. A conclusion on the adequacy of the constructed mathematical model is drawn from the correspondence of the obtained results to physicochemical laws. The heating of the catalyst granule and the release of carbon monoxide with a change in the granule radius are analyzed for various degrees of initial coking, and the results are described.
In conclusion, the main results and examples of problems which can be solved using the developed mathematical model are noted.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index