All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Determination of CT dose by means of noise analysis
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533
The article deals with the development of an efficient algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in radiometry and radiography relies on tabulated values of X-ray absorption coefficients, while individual dose factors are not taken into account at all, since many studies lack a Dose Report; instead, an average value is used to simplify the statistics. We therefore decided to develop a method for determining the number of ionizing quanta by analyzing the noise of CT data. The algorithm is based on a mathematical model of our own design that combines Poisson and Gaussian distributions of the log-transformed signal. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficients are known from tabulated values. The data were obtained from several CT devices from different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of X-ray quanta emitted per unit time. Taking into account the noise level and the radii of the cylinders, these data were converted to X-ray absorption values and compared with the tabulated values. Applying the algorithm to CT data of various configurations yielded experimental results consistent with the theory and the mathematical model. The results showed good accuracy of the algorithm and its mathematical apparatus, which demonstrates the reliability of the obtained data. This mathematical model is already used in our own CT noise-reduction program, where it serves as a method for setting a dynamic noise-reduction threshold. The algorithm is currently being adapted to work with real patient CT data.
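A minimal illustration of the noise-analysis idea (our sketch, not the authors' code): for Poisson-distributed detector counts, the delta method gives $\mathrm{Var}(\ln N) \approx 1/N$, so the count can be recovered from the variance of the log-transformed signal and then converted back to the emitted count via the Beer-Lambert law with a tabulated absorption coefficient. All numerical values below are assumptions.

```python
import numpy as np

def estimate_quanta(log_samples):
    """For N ~ Poisson(N), the delta method gives Var(ln N) ~ 1/N,
    so the mean count is the reciprocal of the log-signal variance."""
    return 1.0 / np.var(log_samples, ddof=1)

rng = np.random.default_rng(0)
N0 = 1.0e5          # quanta emitted toward one detector element (assumed)
mu_water = 0.19     # linear attenuation of water near 70 keV, 1/cm (tabulated)
path_cm = 10.0      # ray path through the water cylinder (assumed)
N_true = N0 * np.exp(-mu_water * path_cm)    # Beer-Lambert law

counts = rng.poisson(N_true, size=10_000)    # repeated noisy measurements
N_est = estimate_quanta(np.log(counts))
N0_est = N_est * np.exp(mu_water * path_cm)  # invert Beer-Lambert
print(f"true N = {N_true:.0f}, estimated N = {N_est:.0f}, "
      f"estimated N0 = {N0_est:.0f}")
```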
-
High-throughput identification of hydride phase-change kinetics models
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183
Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are therefore of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, experimental setup and conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive ones, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply, but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: one that, on the one hand, allows building models from standard blocks and freely changing them if necessary and, on the other hand, avoids the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large number of experimental data sets. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related tasks, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the time layer based on the previous layer or the entire history, calculating the observed value and the independent variable from the task variables, and comparing the curve with the reference one. At the middle level, special algorithms can be used for solving quite general parabolic boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations. At the high level, it is enough to choose a ready, tested model for a particular material and modify it to match the experimental conditions.
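As an illustration of the diffusive model class (a hypothetical sketch, not HIMICOS code), the following solves a 1D hydrogen-diffusion problem in a slab with a quadratic desorption flux at the surface using an explicit grid scheme; all parameters are illustrative:

```python
import numpy as np

# Explicit grid scheme for 1D hydrogen diffusion in a slab of half-
# thickness L, with symmetry at x = 0 and desorption flux J = b*c^2
# at the surface.
D = 1.0e-9          # hydrogen diffusivity, m^2/s (assumed)
b = 1.0e-6          # surface desorption coefficient (assumed)
L = 1.0e-4          # slab half-thickness, m (assumed)
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D            # below the explicit stability limit dx^2/(2D)

c = np.ones(nx)                 # dimensionless initial concentration
for _ in range(20_000):
    c[1:-1] += dt * D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c[0] = c[1]                               # zero flux at the center
    c[-1] = c[-2] - dx * b * c[-2]**2 / D     # -D dc/dx = b c^2 at the surface

remaining = c.sum() * dx / L    # observable: remaining hydrogen fraction
print(f"remaining hydrogen fraction after {20_000 * dt:.1f} s: {remaining:.3f}")
```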
-
A neural network model for traffic signs recognition in intelligent transport systems
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435
This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. Currently, the most effective approach to image analysis and recognition is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as ReLU and softmax are used to solve the classification problem when recognizing traffic signs. This article proposes a technology for recognizing traffic signs. The convolutional neural network approach was chosen for its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep learning libraries TensorFlow and Keras was used as a platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function, followed by a Dropout layer, which is used to reduce overfitting. The output fully connected layer includes four neurons, corresponding to the problem of recognizing four types of traffic signs. An intelligent traffic sign recognition system has been developed and tested. The convolutional neural network used included four stages of convolution and subsampling. Evaluation of the efficiency of the traffic sign recognition system using three-fold cross-validation showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly. In addition, the model produces no type I errors, and type II errors occur at a low rate and only when the input image is very noisy.
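A minimal Keras sketch consistent with the architecture described above (four convolution + subsampling stages, a 512-neuron ReLU layer, Dropout, and a four-neuron softmax output); the input resolution, filter counts, and dropout rate are our assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),            # input resolution is assumed
    layers.Conv2D(32, 3, activation="relu"),   # stage 1: convolution...
    layers.MaxPooling2D(),                     # ...and subsampling
    layers.Conv2D(64, 3, activation="relu"),   # stage 2
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),  # stage 3
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),  # stage 4
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),      # 512 ReLU neurons (from the paper)
    layers.Dropout(0.5),                       # reduces overfitting (rate assumed)
    layers.Dense(4, activation="softmax"),     # four traffic-sign classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```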
-
Study of turbulence models for calculating a strongly swirling flow in an abrupt expanding channel
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 793-805
In this paper, fundamentally different turbulence models are compared for calculating a strongly swirling flow in an abruptly expanding pipe. This problem is of great importance both in practice and in theory, because in such a flow a very complex anisotropic turbulence with recirculation zones arises, and studying the processes involved helps answer many questions about turbulence. The flow under consideration has been well studied experimentally, which makes it a very complex and interesting test problem for turbulence models. The paper compares the numerical results of the one-parameter $\nu_t$-92 model, the SSG/LRR-RSM-w2012 Reynolds stress model and the new two-fluid model. These models differ fundamentally: the Boussinesq hypothesis is used in the one-parameter $\nu_t$-92 model; in the SSG/LRR-RSM-w2012 model a separate equation is written for each stress; and the new two-fluid model is based on an entirely different approach to turbulence. A feature of this approach is that it yields a closed system of equations. The models are compared not only by the agreement of their results with experimental data, but also by the computational resources expended on their numerical implementation. Therefore, in this work, the same numerical technique was used for all models to calculate the turbulent swirling flow at the Reynolds number $Re=3\cdot 10^4$ and the swirl parameter $S_w=0.6$. The paper shows that the new two-fluid model is effective for the study of turbulent flows, because it describes complex anisotropic turbulent flows with good accuracy and is simple enough for numerical implementation.
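For reference, the swirl parameter is commonly defined as the ratio of the axial flux of angular momentum to the axial flux of axial momentum (the paper may use a different but equivalent form):

$$S_w = \frac{\int_0^R \rho\,\bar{u}\,\bar{w}\,r^2\,dr}{R\int_0^R \rho\,\bar{u}^2\,r\,dr},$$

where $\bar{u}$ and $\bar{w}$ are the mean axial and tangential velocity components and $R$ is the pipe radius.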
-
Modeling the dynamics of public attention to extended processes on the example of the COVID-19 pandemic
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1131-1141
The dynamics of public attention to the COVID-19 epidemic is studied. The level of public attention is described by the daily number of Google search requests made by users from a given country. In the empirical part of the work, data on the number of requests and the number of infection cases are considered for a number of countries. It is shown that in all cases the maximum of public attention occurs earlier than the maximum daily number of newly infected individuals; thus, for a certain period of time, the epidemic grows while public attention to it declines. It is also shown that the decline in the number of requests is described by an exponential function of time. To describe the revealed empirical pattern, a mathematical model is proposed, which is a modification of a model of the decline in attention after a one-time political event. The model develops an approach that considers decision-making by an individual as a member of the society in which the information process takes place. This approach assumes that an individual's decision about whether or not to make a request about COVID on a given day is based on two factors. The first is an attitude that reflects the individual's long-term interest in the topic and accumulates the individual's previous experience, cultural preferences, and social and economic status. The second is the dynamic factor of public attention to the epidemic, which changes during the process under the influence of informational stimuli; for the subject under consideration, the information stimuli are related to the epidemic dynamics. The behavioral hypothesis is that if on some day the sum of the attitude and the dynamic factor exceeds a certain threshold value, then on that day the individual makes a search request on the topic of COVID. The general logic is that the higher the rate of infection growth, the stronger the information stimulus and the more slowly public attention to the pandemic declines. Thus, the constructed model makes it possible to relate the rate of exponential decrease in the number of requests to the growth rate of the number of cases. The regularity found with the help of the model was tested on empirical data: the Student's t-statistic is 4.56, which allows the hypothesis of no correlation to be rejected at the 0.01 significance level.
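A minimal sketch of the threshold mechanism described above (our illustration; the paper's parameter values and epidemic data differ): each individual makes a request on a day when attitude plus the shared attention factor exceeds a threshold, the attention factor relaxes exponentially, and the stimulus is proportional to the infection growth rate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, days = 100_000, 120
attitudes = rng.normal(0.0, 1.0, n_agents)  # long-term individual interest
threshold = 2.0                             # request threshold (assumed)
decay = 0.05                                # daily relaxation of attention
alpha = 0.5                                 # stimulus strength (assumed)

# Hypothetical epidemic: logistic growth of the cumulative infected fraction.
t = np.arange(days)
cumulative = 1.0 / (1.0 + np.exp(-0.15 * (t - 60)))
daily_new = np.diff(cumulative, prepend=0.0)

w = 0.5                                     # shared dynamic attention factor
share_requesting = []
for day in range(days):
    growth_rate = daily_new[day] / max(cumulative[day], 1e-9)
    w = (1.0 - decay) * w + alpha * growth_rate
    share_requesting.append(np.mean(attitudes + w > threshold))

print(f"attention peaks on day {int(np.argmax(share_requesting))}, "
      f"daily infections peak on day {int(np.argmax(daily_new))}")
```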
-
Coherent constant delay transceiver for a synchronous fiber optic network
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 141-155
This paper proposes the implementation of a coherent transceiver with a constant delay and the ability to select any clock frequency grid for clocking peripheral DACs and ADCs, for device synchronization, and for data transmission. The choice of the clock frequency grid directly affects the data transfer rate in the network, but it allows one to flexibly configure the network for transmitting clock signals and for subnanosecond generation of sync signals on all devices in the network. A method is proposed for increasing the synchronization accuracy to tenths of a nanosecond by using digital phase detectors and a phase-locked loop (PLL) on the slave device. The use of high-speed fiber-optic communication lines (FOCL) for synchronization allows control commands and signaling data to be exchanged simultaneously. To simplify and reduce the cost of the devices in a synchronous network of transceivers, it is proposed to use a clock signal recovered from the data transmission line to filter phase noise and to form, in the PLL, the frequency grid for heterodyne signals and for clocking peripheral devices, including the DAC and ADC. The results of multiple synchronization tests in the proposed synchronous network are presented.
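A minimal numerical sketch of the slave-side locking idea (our illustration, not the paper's design): a digital phase detector measures the wrapped phase error between the recovered line clock and a local oscillator, and a proportional-integral loop filter steers the oscillator until the residual error is far below a nanosecond; gains and offsets are assumed:

```python
f_nom = 125e6                 # nominal clock frequency, Hz (assumed)
dt = 1.0 / f_nom
kp, ki = 0.05, 0.001          # PI loop-filter gains (assumed)

ref_phase, nco_phase = 0.3, 0.0        # initial phase offset, in cycles
ref_freq = f_nom * (1 + 20e-6)         # 20 ppm frequency error (assumed)
nco_freq, integral = f_nom, 0.0

for _ in range(200_000):
    ref_phase = (ref_phase + ref_freq * dt) % 1.0
    nco_phase = (nco_phase + nco_freq * dt) % 1.0
    err = ((ref_phase - nco_phase + 0.5) % 1.0) - 0.5  # wrapped phase error
    integral += ki * err                               # integral branch
    nco_freq = f_nom * (1.0 + kp * err + integral)     # PI frequency correction

print(f"residual phase error: {err:+.2e} cycles "
      f"= {abs(err) / f_nom * 1e12:.4f} ps")
```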
-
Current issues in computational modeling of thrombosis, fibrinolysis, and thrombolysis
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 975-995
The hemostasis system is one of the body's key defense systems; it is present in all liquid tissues and is especially important in blood. The hemostatic response is triggered by vessel injury. The interaction between specialized cells and humoral systems leads to the formation of the initial hemostatic clot, which stops the bleeding; after that, the slow process of clot dissolution occurs. The formation of the hemostatic plug is a unique physiological process, because within several minutes the hemostatic system generates complex structures on scales ranging from microns, for microvessel injury or damaged endothelial cell-cell contacts, to centimeters, for damaged systemic arteries. The hemostatic response depends on numerous coordinated processes, which include platelet adhesion and aggregation, granule secretion, platelet shape change, modification of the chemical composition of the lipid bilayer, clot contraction, and formation of the fibrin mesh through activation of the blood coagulation cascade. Computer modeling is a powerful tool for studying this complex system at different levels of organization, including intracellular signaling in platelets, the humoral systems of blood coagulation and fibrinolysis, and multiscale models of thrombus growth. There are two key issues in computer modeling in biology: the absence of an adequate physico-mathematical description of the existing experimental data, due to the complexity of the biological processes, and the high computational complexity of the models, which prevents their use for testing physiologically relevant scenarios. Here we discuss some key unresolved problems in the field, as well as the current progress in experimental research on hemostasis and thrombosis. New findings lead to a reevaluation of existing concepts and the development of novel computer models. We focus on arterial thrombosis, venous thrombosis, thrombosis in the microcirculation, and the problems of fibrinolysis and thrombolysis. We also briefly discuss the basic types of existing mathematical models, their computational complexity, and the principal issues in simulating thrombus growth in arteries.
-
Modeling of rheological characteristics of aqueous suspensions based on nanoscale silicon dioxide particles
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1217-1252
The rheological behavior of aqueous suspensions based on nanoscale silicon dioxide particles strongly depends on the dynamic viscosity, which directly affects the use of nanofluids. The purpose of this work is to develop and validate models for predicting the dynamic viscosity from independent input parameters: the SiO2 concentration, the pH, and the shear rate $\gamma$. The influence of the suspension composition on its dynamic viscosity is analyzed. Groups of suspensions with statistically homogeneous composition have been identified, within which the compositions are interchangeable. It is shown that at low shear rates the rheological properties of the suspensions differ significantly from those obtained at higher rates. Significant positive correlations of the dynamic viscosity with the SiO2 concentration and pH were established, along with negative correlations with the shear rate $\gamma$. Regression models with regularization were constructed for the dependence of the dynamic viscosity $\eta$ on the concentrations of SiO2, NaOH, H3PO4, surfactant and EDA (ethylenediamine) and on the shear rate $\gamma$. For more accurate prediction of the dynamic viscosity, models based on neural network and machine learning algorithms were trained: the multilayer perceptron (MLP), the radial basis function network (RBF), the support vector machine (SVM), and the random forest (RF). The effectiveness of the constructed models was evaluated using various statistical metrics, including the mean absolute error (MAE), the mean squared error (MSE), the coefficient of determination $R^2$, and the average absolute relative deviation (AARD%). The RF model proved to be the best on both the training and test samples. The contribution of each component to the constructed model was determined: the SiO2 concentration has the greatest influence on the dynamic viscosity, followed by the pH and the shear rate $\gamma$. The accuracy of the proposed models is compared with that of previously published models. The results confirm that the developed models can be considered a practical tool for studying the behavior of nanofluids based on aqueous suspensions of nanoscale silicon dioxide particles.
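A minimal scikit-learn sketch of the RF workflow with the listed metrics, on synthetic data (the synthetic viscosity relationship and parameter ranges are our assumptions, not the paper's measurements):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(42)
n = 2_000
X = np.column_stack([
    rng.uniform(5, 30, n),      # SiO2 concentration, wt.% (assumed range)
    rng.uniform(2, 10, n),      # pH
    rng.uniform(1, 1000, n),    # shear rate gamma, 1/s
])
# Illustrative relationship: viscosity grows with SiO2 and pH, thins with gamma.
eta = 0.5 * np.exp(0.08 * X[:, 0]) * (1 + 0.1 * X[:, 1]) * X[:, 2] ** -0.2
eta *= rng.lognormal(0.0, 0.05, n)            # multiplicative measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(X, eta, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

aard = 100 * np.mean(np.abs((pred - y_te) / y_te))   # AARD%
print(f"MAE  = {mean_absolute_error(y_te, pred):.3f}")
print(f"MSE  = {mean_squared_error(y_te, pred):.3f}")
print(f"R^2  = {r2_score(y_te, pred):.3f}")
print(f"AARD = {aard:.2f}%")
print("feature importances (SiO2, pH, gamma):", model.feature_importances_)
```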
-
NLP-based automated compliance checking of data processing agreements against General Data Protection Regulation
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1667-1685
Compliance with data protection regulations such as the GDPR is central to contemporary organizations, yet it is hampered by the complexity of legal documents and by constantly changing regulations. This paper describes how NLP can ease GDPR compliance through automated compliance checking, evaluation of privacy policies, and increased transparency. The work is not limited to exploring the application of NLP to privacy policies and to a better understanding of third-party data sharing; it also presents preliminary studies comparing several NLP models. The models are implemented and evaluated to identify the one that best automates compliance verification and privacy policy analysis in terms of accuracy and speed. The research also discusses applying automated tools and data analysis to the GDPR, for instance generating machine-readable models that assist in compliance evaluation. Among the evaluated models, SBERT performed best at the policy level, with an accuracy of 0.57, precision of 0.78, recall of 0.83, and F1-score of 0.80. BERT showed the highest performance at the sentence level, achieving an accuracy of 0.63, precision of 0.70, recall of 0.50, and F1-score of 0.55. The paper thus emphasizes the importance of NLP in helping organizations overcome the difficulties of GDPR compliance and outlines a roadmap toward a more client-oriented data protection regime. By comparing the preliminary studies and reporting the performance of the best models, it helps strengthen compliance measures and the defense of individual rights in cyberspace.
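A minimal sketch of SBERT-based matching in the spirit of the comparison above (the model name, threshold, and example texts are our assumptions; the paper's pipeline may differ):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint

gdpr_provisions = [
    "The processor shall process personal data only on documented "
    "instructions from the controller.",
    "The processor shall notify the controller without undue delay after "
    "becoming aware of a personal data breach.",
]
dpa_sentences = [
    "Vendor will only handle customer data as directed in writing by Client.",
    "This agreement renews automatically every twelve months.",
]

prov_emb = model.encode(gdpr_provisions, convert_to_tensor=True)
dpa_emb = model.encode(dpa_sentences, convert_to_tensor=True)
scores = util.cos_sim(dpa_emb, prov_emb)     # sentence-by-provision similarity

THRESHOLD = 0.6                              # assumed decision threshold
for i, sent in enumerate(dpa_sentences):
    best = int(scores[i].argmax())
    sim = float(scores[i, best])
    verdict = "covers" if sim > THRESHOLD else "does not clearly cover"
    print(f"{sent[:48]!r} {verdict} provision {best} (similarity {sim:.2f})")
```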
-
Modeling of evacuation of people of various age groups
Computer Research and Modeling, 2013, v. 5, no. 3, pp. 483-490
A program for estimating the evacuation time with the possibility of choosing the age group of the people involved is presented and tested. The influence of the age composition of groups of people on the calculation results is investigated.
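A minimal sketch of the underlying idea (our illustration; the speed values are assumptions, not the paper's data): the estimated evacuation time depends on the age composition of the group through age-specific walking speeds:

```python
# Age-specific walking speeds, m/s (illustrative assumptions).
SPEEDS = {"children": 0.9, "adults": 1.3, "elderly": 0.8}

def evacuation_time(route_m: float, composition: dict[str, float]) -> float:
    """Upper-bound estimate: the group moves at the speed of its
    slowest represented age category."""
    slowest = min(SPEEDS[g] for g, share in composition.items() if share > 0)
    return route_m / slowest

# A 120 m escape route for a mixed group of adults and elderly people.
print(f"{evacuation_time(120.0, {'adults': 0.7, 'elderly': 0.3}):.0f} s")
```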