-
Technology for collecting initial data for constructing models for assessing the functional state of a human by the pupil's response to illumination changes in solving some problems of transport safety
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 417-427
This article addresses the development of a technology for collecting initial data used to build models that assess the functional state of a person. The state is assessed from the pupillary response to a change in illumination, based on the pupillometry method. This method involves collecting and analyzing initial data (pupillograms) in the form of time series that characterize the dynamics of the human pupil in response to a light impulse. The drawbacks of the traditional approach to collecting initial data with computer vision methods and time-series smoothing are analyzed. Attention is focused on the importance of initial data quality for constructing adequate mathematical models. The need for manual marking of the iris and pupil circles to improve the accuracy and quality of the initial data is substantiated. The stages of the proposed data collection technology are described. An example of an obtained pupillogram is given; it has a smooth shape and contains no outliers, noise, anomalies, or missing values. On the basis of the presented technology, a hardware-software complex has been developed, consisting of special software with two main modules and hardware built around a Raspberry Pi 4 Model B microcomputer with peripheral equipment that implements the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks were used, built on initial data on the functional state of intoxication of a person.
The studies have shown that manual marking of the initial data (in comparison with automatic computer vision methods) reduces the number of errors of the first and second kind and, accordingly, increases the accuracy of assessing the functional state of a person. Thus, the presented data collection technology can be effectively used to build adequate models for assessing the functional state of a person from the pupillary response to changes in illumination. Such models are relevant in solving individual problems of ensuring transport security, in particular monitoring the functional state of drivers.
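To illustrate the kind of time series a pupillogram is, a minimal sketch follows; the quality checks and feature names (baseline, constriction amplitude, latency) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): basic quality checks
# and features for a pupillogram -- a time series of pupil radius samples
# recorded around a light impulse. All names and values are hypothetical.

def pupillogram_features(t, r, impulse_time):
    """Baseline radius, constriction amplitude and latency for one pupillogram."""
    assert len(t) == len(r) and all(v is not None for v in r), "missing values"
    # Baseline: mean radius before the light impulse.
    before = [ri for ti, ri in zip(t, r) if ti < impulse_time]
    baseline = sum(before) / len(before)
    # Minimum radius after the impulse gives the constriction amplitude.
    after = [(ti, ri) for ti, ri in zip(t, r) if ti >= impulse_time]
    t_min, r_min = min(after, key=lambda p: p[1])
    return {
        "baseline": baseline,
        "amplitude": baseline - r_min,     # how much the pupil constricted
        "latency": t_min - impulse_time,   # time from impulse to the minimum
    }

# Synthetic smooth pupillogram: constriction, then partial recovery.
t = [i * 0.1 for i in range(50)]
r = [4.0 if ti < 1.0
     else 4.0 - 1.5 * min(1.0, ti - 1.0) + 0.4 * max(0.0, ti - 2.0)
     for ti in t]
f = pupillogram_features(t, r, impulse_time=1.0)
print(f)
```

Such per-curve features are what models like the perceptron mentioned above would consume, which is why outliers and missing samples in the raw series matter.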
-
Determination of post-reconstruction correction factors for quantitative assessment of pathological bone lesions using gamma emission tomography
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 677-696
In single-photon emission computed tomography (SPECT), patients with bone disorders receive a radiopharmaceutical (RP) that accumulates selectively in pathological lesions. Accurate quantification of RP uptake plays a critical role in disease staging, prognosis, and the development of personalized treatment strategies. Traditionally, the accuracy of quantitative assessment is evaluated through in vitro clinical trials using the standardized physical NEMA IEC phantom, which contains six spheres simulating lesions of various sizes. However, such experiments are limited by high costs and radiation exposure to researchers. This study proposes an alternative in silico approach based on numerical simulation using a digital twin of the NEMA IEC phantom. The computational framework allows for extensive testing under varying conditions without physical constraints. Analogous to clinical protocols, we calculated the recovery coefficient (RCmax), defined as the ratio of the maximum activity in a lesion to its known true value. The simulation settings were tailored to clinical SPECT/CT protocols involving 99mTc for patients with bone-related diseases. For the first time, we systematically analyzed the impact of lesion-to-background ratios and post-reconstruction filtering on RCmax values. Numerical experiments revealed the presence of edge artifacts in reconstructed lesion images, consistent with those observed in both real NEMA IEC phantom studies and patient scans. These artifacts introduce instability into the iterative reconstruction process and lead to errors in activity quantification. Our results demonstrate that post-filtering helps suppress edge artifacts and stabilizes the solution. However, it also significantly underestimates activity in small lesions.
To address this issue, we introduce post-reconstruction correction factors derived from our simulations to improve the accuracy of quantification in lesions smaller than 20 mm in diameter.
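The recovery coefficient defined above has a simple closed form; the sketch below illustrates it together with a size-dependent correction. The numeric correction table is invented for illustration and is not the set of factors derived in the study.

```python
# Illustrative sketch: recovery coefficient RCmax and a post-reconstruction
# correction, as described in the abstract. The numeric table of RC values
# by lesion diameter is hypothetical, NOT the factors derived in the study.

def rc_max(max_measured_activity, true_activity):
    """RCmax = maximum reconstructed activity in a lesion / known true activity."""
    return max_measured_activity / true_activity

def corrected_activity(measured, rc):
    """Dividing by the recovery coefficient compensates partial-volume losses."""
    return measured / rc

# Hypothetical RCmax values by lesion diameter (mm), for illustration only:
rc_by_diameter = {10: 0.5, 13: 0.7, 17: 0.85, 22: 0.95}

measured = 44.0                              # kBq/ml, reconstructed maximum (synthetic)
rc = rc_by_diameter[10]                      # small lesion -> strong underestimation
print(rc_max(measured, 80.0))                # → 0.55
print(corrected_activity(measured, rc))      # → 88.0
```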
-
The mathematicity of physics is surprising, but it enables us to understand the laws of nature through the analysis of the mathematical structures that describe them. This concerns, however, only physics. The degree of mathematization of biology is low, and attempts to mathematize it are limited to applying mathematical methods used for the description of physical systems. In doing so, we are likely to commit the error of attributing to biological systems features that they do not have. Some argue that biology needs new mathematical methods conforming to its own needs, not ones known from physics. However, because of the specific complexity of biological systems, we should speak of their algorithmicity rather than their mathematicity. As examples of the algorithmic approach one can point to the so-called individual-based models used in ecology to describe population dynamics, or to fractal models applied to describe the geometrical complexity of such biological structures as trees.
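To make the contrast concrete, here is a minimal individual-based model of population dynamics: the population trajectory emerges from per-individual rules rather than from an equation. The rules and rates are invented for illustration, not taken from any specific ecological study.

```python
# Minimal individual-based model (IBM) sketch: population dynamics emerge
# from simple per-individual rules rather than a differential equation.
# All rules and rates below are illustrative assumptions.
import random

def simulate(n0=50, steps=100, birth_p=0.10, death_base=0.02, capacity=500, seed=1):
    random.seed(seed)
    population = [0] * n0            # each entry stores an individual's age
    history = [n0]
    for _ in range(steps):
        survivors = []
        for age in population:
            # Death probability grows with crowding (simple density dependence).
            death_p = death_base + 0.1 * len(population) / capacity
            if random.random() >= death_p:
                survivors.append(age + 1)
        # Mature individuals reproduce with a fixed probability per step.
        offspring = [0 for age in survivors
                     if age >= 2 and random.random() < birth_p]
        population = survivors + offspring
        history.append(len(population))
    return history

h = simulate()
print(h[0], h[-1])
```

The point of the example is methodological: the "model" is the rule set itself, and its behavior is obtained by running it, not by solving it.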
-
The applicability of the approximation of single scattering in pulsed sensing of an inhomogeneous medium
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1063-1079
This article considers a mathematical model based on the linear integro-differential Boltzmann equation. The model describes radiation transfer in a scattering medium irradiated by a point source. An inverse problem for the transfer equation is posed: determine the scattering coefficient from the time-angular distribution of the radiation flux density at a given point in space. The Neumann series representation of the solution of the radiation transfer equation is analyzed in the study of the inverse problem. The zero-order term of the series describes unscattered radiation, the first-order term describes the single-scattered field, and the remaining terms describe the multiple-scattered field. The single scattering approximation is widely used to calculate an approximate solution of the transfer equation for regions with small optical thickness and a low level of scattering. Under additional restrictions on the initial data, an analytical formula for the scattering coefficient is obtained in this approximation. To verify the adequacy of the obtained formula, a weighted Monte Carlo method for solving the transfer equation was constructed and implemented in software, taking into account multiple scattering in the medium and the space-time singularity of the radiation source. Computational experiments were carried out for problems of high-frequency acoustic sensing in the ocean. The use of the single scattering approximation is justified at sensing ranges of up to about one hundred meters, where the doubly and triply scattered fields make the main contribution to the error of the formula.
For larger regions, the single scattering approximation gives at best only a qualitative evaluation of the medium structure; sometimes it does not even allow one to determine the order of magnitude of the quantitative characteristics of the interaction of radiation with matter.
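The Neumann-series decomposition mentioned above can be illustrated on a deliberately simplified case. The sketch below treats a homogeneous medium and a straight path; it shows only the generic split into a ballistic (zero-order) term and radiation scattered at least once, and is not the paper's analytical formula for the scattering coefficient.

```python
# Simplified illustration of the Neumann-series idea: the zero-order term is
# unscattered (ballistic) radiation, higher-order terms involve scattering.
# Homogeneous medium, straight path of length L; this toy split is NOT the
# paper's formula, and the coefficient values are invented.
import math

def neumann_terms(mu_s, mu_a, L):
    """Fractions of emitted power that are unscattered / scattered at least once."""
    mu_t = mu_s + mu_a                      # extinction = scattering + absorption
    unscattered = math.exp(-mu_t * L)       # zero-order (ballistic) term
    interacted = 1.0 - unscattered
    # Fraction whose first interaction is a scattering (not absorption) event:
    scattered = interacted * (mu_s / mu_t)
    return unscattered, scattered

u, s = neumann_terms(mu_s=0.02, mu_a=0.005, L=100.0)   # optical depth 2.5
print(round(u, 4), round(s, 4))
```

At small optical depth the unscattered term dominates, which is the regime in which the single scattering approximation is applicable.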
-
A neural network model for traffic signs recognition in intelligent transport systems
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435
This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. The most effective approach for analyzing and recognizing images today is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as ReLU and softmax are used to solve the classification problem in traffic sign recognition. This article proposes a technology for recognizing traffic signs. An approach based on a convolutional neural network was chosen because of its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep learning libraries TensorFlow and Keras was used as the platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function. It is followed by a Dropout layer, which is used to reduce the effect of network overfitting. The output fully connected layer includes four neurons, corresponding to the problem of recognizing four types of traffic signs. An intelligent traffic sign recognition system has been developed and tested. The convolutional neural network used included four stages of convolution and subsampling. Evaluation of the efficiency of the traffic sign recognition system using three-fold cross-validation showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly.
In addition, the model makes no errors of the first kind, and errors of the second kind are rare, occurring only when the input image is very noisy.
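The architecture described above (four convolution + subsampling stages, a 512-neuron ReLU layer with Dropout, and a four-neuron softmax output) can be sanity-checked by tracing feature-map shapes. The input resolution, kernel size, and filter counts below are assumptions for illustration, since the abstract does not state them.

```python
# Shape trace of a CNN like the one described: four convolution + subsampling
# (2x2 max-pool) stages, then Dense(512, ReLU) -> Dropout -> Dense(4, softmax).
# Input resolution, kernel size and filter counts are assumed, not from the paper.

def conv_out(size, kernel=3, padding=1, stride=1):
    """Output side length of a square 'same'-padded convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, window=2):
    """Output side length after 2x2 subsampling (max-pooling)."""
    return size // window

size, channels = 64, 3                      # assumed input: 64x64 RGB image
for f in [16, 32, 64, 128]:                 # assumed filters per stage; 4 stages
    size = pool_out(conv_out(size))         # conv keeps the size; pool halves it
    channels = f

flat = size * size * channels               # flatten before the dense layers
dense_params = flat * 512 + 512             # Dense(512, ReLU): weights + biases
out_params = 512 * 4 + 4                    # Dense(4, softmax): one neuron per sign
print(size, flat, dense_params, out_params)
```

Tracing shapes this way catches mismatches between the convolutional part and the first fully connected layer before any training is run.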
-
Computer model development for a verified computational experiment to restore the parameters of bodies with arbitrary shape and dielectric properties
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1555-1571
One of the main goals of this work is the creation of a virtual laboratory stand that yields reliable characteristics which can be confirmed against real measurements, taking errors and noise into account (the main feature distinguishing a computational experiment from model studies). The following task is considered: a rectangular waveguide operates in single-mode regime, and through a technological hole cut in its wide wall a sample under study is placed into the cavity of the transmission line. The recovery algorithm is as follows: the network parameters (S11 and/or S21) of the transmission line with the sample are measured in the laboratory. In the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started; the experimental data serve as the target of this process, and the stopping criterion is an estimate of proximity (the residual). It is important to note that the developed computer model, despite its apparent simplicity, is initially ill-conditioned. The COMSOL modeling environment is used to set up the computational experiment. The results of the computational experiment coincide with the results of laboratory studies to a good degree of accuracy. Thus, experimental verification was carried out for several significant components: the computer model in particular and the algorithm for restoring the target parameters in general. The computer model developed and described in this work may be effectively used in a computational experiment to restore the full dielectric parameters of a target with complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity.
The resulting model is, by definition, incomplete, but its completeness is the highest among the considered options, and at the same time the model is well conditioned. Particular attention is paid to the modeling of the coaxial-to-waveguide transition; it is shown that a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
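The sweep-and-compare loop described above can be sketched abstractly. The forward model below is a toy stand-in for the COMSOL simulation, the frequency points and the true permittivity are invented, and the residual is simply a sum of squared deviations.

```python
# Sketch of the described recovery loop: sweep a material parameter, compare
# simulated S-parameters with "measured" ones, keep the value with the
# smallest residual. forward_s11 is a toy stand-in for the COMSOL model;
# all numbers are hypothetical.
import math

def forward_s11(eps_r, freq_ghz):
    """Toy forward model: |S11| of the loaded line vs. relative permittivity."""
    return abs(math.sin(0.3 * eps_r * freq_ghz)) / (1.0 + 0.1 * eps_r)

freqs = [8.0, 9.0, 10.0, 11.0, 12.0]                   # sample points, GHz
true_eps = 2.7
measured = [forward_s11(true_eps, f) for f in freqs]   # synthetic "experiment"

def residual(eps_r):
    """Proximity estimate: sum of squared deviations from measured |S11|."""
    return sum((forward_s11(eps_r, f) - m) ** 2 for f, m in zip(freqs, measured))

# Sweep the permittivity on a grid, as in the described optimization process.
grid = [1.0 + 0.01 * i for i in range(400)]            # eps_r in [1.0, 5.0)
best = min(grid, key=residual)
print(best)
```

In the ill-conditioned setting the paper describes, the residual landscape is far less benign than in this toy, which is why regularization by a good initial geometry reconstruction matters.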
-
Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146
This work presents a new approach for constructing high-precision routes from transport detector data inside the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, such as the lack of interaction with the network during route construction. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are incoming lanes and the environment is the road network. By performing actions that launch vehicles, agents receive a reward for matching the data from transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.
Since the rlRouter is trained inside the SUMO simulation, it can restore routes better by taking into account the interaction of vehicles with each other and with the network infrastructure. We modeled diverse traffic situations at three different junctions in order to compare the performance of SUMO's routers with the rlRouter, using Mean Absolute Error (MAE) as the measure of deviation from both cumulative detector data and route data. The rlRouter achieved the highest compliance with the detector data. We also found that maximizing the reward for matching detectors brings the resulting routes closer to the real ones. Although the routes recovered by rlRouter are superior to those obtained with the SUMO tools, they do not fully correspond to the real ones because of the natural limitations of induction-loop detectors. To obtain more plausible routes, junctions would need to be equipped with other types of transport counters, for example camera detectors.
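The detector-matching signal described above can be sketched as follows: the agent's reward is the negative mismatch between simulated and observed detector counts, and MAE is the evaluation metric. The counts and the exact reward shape are illustrative assumptions, not rlRouter's actual implementation.

```python
# Illustrative sketch (not rlRouter's actual code): reward agents for making
# simulated detector counts match observed ones, and evaluate with MAE.
# The detector counts below are synthetic.

def mae(observed, simulated):
    """Mean Absolute Error between detector count series."""
    return sum(abs(o - s) for o, s in zip(observed, simulated)) / len(observed)

def step_reward(observed, simulated):
    """Per-step reward: negative total count mismatch, so matching the
    detectors maximizes the return (a common shaping choice, assumed here)."""
    return -sum(abs(o - s) for o, s in zip(observed, simulated))

observed_counts  = [12, 7, 20, 15]   # vehicles per interval at 4 induction loops
simulated_counts = [10, 7, 22, 15]

print(mae(observed_counts, simulated_counts))          # → 1.0
print(step_reward(observed_counts, simulated_counts))  # → -4
```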
-
Modeling of rheological characteristics of aqueous suspensions based on nanoscale silicon dioxide particles
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1217-1252
The rheological behavior of aqueous suspensions based on nanoscale silicon dioxide particles strongly depends on the dynamic viscosity, which directly affects the use of nanofluids. The purpose of this work is to develop and validate models for predicting the dynamic viscosity from independent input parameters: the SiO2 concentration, the pH, and the shear rate $\gamma$. The influence of the suspension composition on its dynamic viscosity is analyzed. Groups of suspensions with statistically homogeneous composition have been identified, within which compositions are interchangeable. It is shown that at low shear rates the rheological properties of the suspensions differ significantly from those obtained at higher rates. Significant positive correlations of the dynamic viscosity with the SiO2 concentration and pH were established, along with negative correlations with the shear rate $\gamma$. Regression models with regularization were constructed for the dependence of the dynamic viscosity $\eta$ on the concentrations of SiO2, NaOH, H3PO4, surfactant, and EDA (ethylenediamine), and on the shear rate $\gamma$. For more accurate prediction of the dynamic viscosity, models based on neural network and machine learning algorithms were trained: a multilayer perceptron (MLP), a radial basis function network (RBF), support vector machines (SVM), and random forests (RF). The effectiveness of the constructed models was evaluated using various statistical metrics, including the mean absolute error (MAE), the mean squared error (MSE), the coefficient of determination $R^2$, and the average absolute relative deviation in percent (AARD%). The RF model proved to be the best on both the training and test samples. The contribution of each component to the constructed model is determined.
It is shown that the SiO2 concentration has the greatest influence on the dynamic viscosity, followed by the pH and the shear rate $\gamma$. The accuracy of the proposed models is compared with that of previously published models. The results confirm that the developed models can be considered a practical tool for studying the behavior of nanofluids based on aqueous suspensions of nanoscale silicon dioxide particles.
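The evaluation metrics listed above have standard definitions; a minimal sketch follows, with synthetic viscosity values rather than data from the study.

```python
# Standard definitions of the metrics used to compare the viscosity models.
# The sample viscosities are synthetic, not data from the study.

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def mse(y, p):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def r2(y, p):
    """Coefficient of determination."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def aard_percent(y, p):
    """Average absolute relative deviation, in percent."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, p))

viscosity_true = [1.0, 2.0, 4.0, 5.0]   # synthetic viscosities (mPa*s)
viscosity_pred = [1.1, 1.9, 4.2, 4.8]

print(mae(viscosity_true, viscosity_pred))
print(r2(viscosity_true, viscosity_pred))
```

AARD% is scale-free, which is why it complements MAE/MSE when viscosities span very different magnitudes across compositions.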
-
Double-layer interval weighted graphs in assessing market risks
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 159-166
This work is dedicated to the application of two-layer interval weighted graphs to nonstationary time series forecasting and the evaluation of market risks. The first layer of the graph, formed during the primary training of the system, displays potential system fluctuations at the time of training. The interval vertices of the second layer of the graph (a superstructure over the first layer), which display the degree of time-series modeling error, are connected to the first layer by edges. The proposed model was validated on a 90-day forecast for steel billets. The average forecast error is 2.6% (less than the average forecast error of autoregression models).
-
Numerical modeling of the ecological situation of the Azov Sea using schemes of increased order of accuracy on a multiprocessor computer system
Computer Research and Modeling, 2016, v. 8, no. 1, pp. 151-168
The article covers the results of three-dimensional modeling of the ecological situation of shallow waters, using the Azov Sea as an example, with schemes of increased order of accuracy on the multiprocessor computer system of Southern Federal University. Discrete analogs of the convective and diffusive transfer operators of fourth-order accuracy for the case of partially occupied cells were constructed and studied. The developed scheme of high (fourth) order of accuracy was used to solve problems of aquatic ecology and to model the spatial distribution of polluting nutrients that cause the growth of phytoplankton, many species of which are toxic and harmful. The use of high-order schemes improved the quality of the input data and decreased the error in solutions of model problems of aquatic ecology. Numerical experiments for the substance transport problem, based on schemes of the second and fourth orders of accuracy, showed that the accuracy increased by a factor of 48.7 for the diffusion-convection problem. A mathematical algorithm was proposed and numerically implemented for reconstructing the bottom topography of shallow waters from hydrographic data (water depth at individual points or contour levels). A map of the bottom relief of the Azov Sea was generated with this algorithm; it is used to build the fields of currents calculated with the hydrodynamic model. These flow fields serve as input data for the aquatic ecology models. A library of two-layer iterative methods was developed for solving the nine-diagonal difference equations that arise in the discretization of model problems for the concentrations of pollutants, plankton, and fish on a multiprocessor computer system.
This improved the precision of the calculated data and made it possible to obtain operational forecasts of changes in the ecological situation of shallow waters over short time intervals.
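The gain from a higher-order discretization can be illustrated on a 1D model problem: a fourth-order central difference for the second derivative is far more accurate at the same grid step than the second-order one. This generic sketch is not the authors' scheme for partially occupied cells.

```python
# Generic illustration of why a fourth-order scheme helps: second- vs.
# fourth-order central differences for u''(x), tested on u = sin(x).
# A 1D model problem, not the authors' scheme for partially filled cells.
import math

def d2_order2(u, i, h):
    """Second-order three-point approximation of u'' at node i."""
    return (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2

def d2_order4(u, i, h):
    """Fourth-order five-point approximation of u'' at node i."""
    return (-u[i - 2] + 16 * u[i - 1] - 30 * u[i] + 16 * u[i + 1] - u[i + 2]) / (12 * h**2)

x0, h = 1.0, 0.1
u = [math.sin(x0 + k * h) for k in range(-2, 3)]  # five-point stencil values
exact = -math.sin(x0)                             # u'' for u = sin
err2 = abs(d2_order2(u, 2, h) - exact)
err4 = abs(d2_order4(u, 2, h) - exact)
print(err2, err4)   # the fourth-order error is orders of magnitude smaller
```

The same principle underlies the reported accuracy gain for the diffusion-convection problem, though the operators there are multidimensional and account for cell occupancy.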
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




