All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Application of the streamline method for nonlinear filtration problems acceleration
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 709-728
The paper presents a numerical simulation of nonisothermal nonlinear flow in a porous medium. A two-dimensional unsteady problem of heavy oil, water and steam flow is considered. The oil phase consists of two pseudocomponents, a light and a heavy fraction, which, like the water component, can vaporize. The oil exhibits viscoplastic rheology, so its filtration does not obey Darcy's classical linear law. The simulation accounts not only for the dependence of fluid density and viscosity on temperature, but also for the improvement of the oil's rheological properties as temperature increases.
To solve this problem numerically, we use the streamline method with splitting by physical processes, which separates the convective heat transfer directed along the flow from thermal conductivity and gravity. The article proposes a new approach to applying streamline methods that makes it possible to correctly simulate nonlinear flow problems with temperature-dependent rheology. The core of the algorithm is to treat the integration process as a sequence of quasi-equilibrium states obtained by solving the system on a global grid; between these states the system is solved on a streamline grid. The streamline method not only accelerates the calculations but also yields a physically reliable solution, since integration takes place on a grid aligned with the fluid flow direction.
In addition to the streamline method, the paper presents an algorithm for handling the nonsmooth coefficients that arise in the simulation of viscoplastic oil flow. Applying this algorithm makes it possible to keep sufficiently large time steps without changing the physical structure of the solution.
The obtained results are compared with known analytical solutions, as well as with the results of a commercial simulation package. Convergence tests with respect to the number of streamlines and to different streamline grids justify the applicability of the proposed algorithm. In addition, the reduction in calculation time compared with traditional methods demonstrates the practical significance of the approach.
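As a rough illustration of the splitting-by-physical-processes idea described above, the toy one-dimensional sketch below advances convective transport first and applies conduction afterwards on the same grid; in the actual method the convective step is performed along streamlines in 2D for a multiphase system, so the parameters and the explicit upwind discretization here are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

# Toy 1D analogue (not the authors' code) of splitting by physical processes:
# the convective step is advanced first (in the full method this is done along
# streamlines), then the conductive step is applied on the same grid.
nx, L = 200, 1.0
dx = L / nx
u, kappa = 1.0, 1e-3                 # advection velocity and "conductivity" (illustrative)
dt = 0.4 * dx / u                    # CFL-limited step for the explicit upwind sweep
x = np.linspace(0.0, L, nx)
T = np.exp(-((x - 0.2) ** 2) / 0.005)    # initial temperature bump

def advect(T, u, dt, dx):
    """First-order upwind convective step (the 'along-flow' part of the split)."""
    return T - u * dt / dx * (T - np.roll(T, 1))

def diffuse(T, kappa, dt, dx):
    """Explicit conduction step applied after the convective sweep."""
    return T + kappa * dt / dx ** 2 * (np.roll(T, -1) - 2.0 * T + np.roll(T, 1))

for _ in range(300):
    T = advect(T, u, dt, dx)         # step 1: transport along the flow direction
    T = diffuse(T, kappa, dt, dx)    # step 2: conduction (and, in 2D, gravity) on the grid

print("peak position after transport:", round(x[np.argmax(T)], 3))
```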
-
Mathematical model of the biometric iris recognition system
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639
Automatic recognition of personal identity by biometric features is based on unique characteristics of people. The biometric identification process consists of creating reference templates and comparing new input data with them. In practice, iris pattern recognition algorithms demonstrate high accuracy and a low percentage of identification errors. The advantages of the iris pattern over other biometric features are determined by its large number of degrees of freedom (nearly 249), the high density of unique features and its constancy over time. A high level of recognition reliability is very important because it enables search in large databases: unlike the one-to-one verification mode, which is applicable only to a small number of comparisons, it allows working in the one-to-many identification mode. Every biometric identification system is probabilistic, and its quality is described by such parameters as recognition accuracy, false acceptance rate and false rejection rate. These characteristics allow one to compare identity recognition methods and to assess system performance under various conditions. This article presents a mathematical model of biometric identification by the iris pattern and its characteristics, and analyzes the results of comparing the model with the real recognition process. For this analysis, a review of existing iris pattern recognition methods based on different feature vectors was carried out. A Python-based software package is also described: it builds probabilistic distributions and generates large test data sets, which can also be used to train a neural network that makes the identification decision. Furthermore, an algorithm combining several iris pattern identification methods is suggested to improve the quality characteristics of the system compared with each method used separately.
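Since the abstract describes building probabilistic score distributions and evaluating the system through its false acceptance and false rejection rates, the following hedged sketch shows how FAR and FRR can be estimated for a threshold-based matcher; the normal score distributions and their parameters are illustrative assumptions, not the ones generated by the authors' package.

```python
import numpy as np

# Hedged illustration (not the paper's model): FAR and FRR of a threshold-based
# matcher, estimated from synthetic genuine/impostor score distributions.
rng = np.random.default_rng(0)
genuine  = rng.normal(0.25, 0.06, 100_000)   # Hamming-like distances for same-iris pairs
impostor = rng.normal(0.47, 0.03, 100_000)   # distances for different-iris pairs

def far_frr(threshold):
    far = np.mean(impostor <= threshold)     # impostors wrongly accepted
    frr = np.mean(genuine  >  threshold)     # genuine users wrongly rejected
    return far, frr

for thr in (0.30, 0.33, 0.36):
    far, frr = far_frr(thr)
    print(f"threshold={thr:.2f}  FAR={far:.2e}  FRR={frr:.2e}")
```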
-
Experimental comparison of PageRank vector calculation algorithms
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 369-379
Finding the PageRank vector is of great scientific and practical interest due to its applicability to modern search engines. Although this problem reduces to finding the eigenvector of a stochastic matrix $P$, the need for new algorithms is justified by the large size of the input data. To achieve at most linear execution time, various randomized methods have been proposed that return the expected result only with some probability close to one. We consider two of them, reducing the problem of calculating the PageRank vector to the problem of finding an equilibrium in an antagonistic matrix game, which is then solved using the Grigoriadis–Khachiyan algorithm. This implementation works effectively under the assumption that the input matrix is sparse. To the best of our knowledge, there are no successful implementations of either the Grigoriadis–Khachiyan algorithm or its application to the task of calculating the PageRank vector. The purpose of this paper is to fill this gap. The article describes the algorithm, giving pseudocode and some implementation details. In addition, it discusses another randomized method of calculating the PageRank vector, namely Markov chain Monte Carlo (MCMC), in order to compare the results of these algorithms on matrices with different values of the spectral gap. The latter is of particular interest, since the magnitude of the spectral gap strongly affects the convergence rate of MCMC and does not affect the other two approaches at all. The comparison was carried out on two types of generated graphs: chains and $d$-dimensional cubes. The experiments, as predicted by the theory, demonstrated the effectiveness of the Grigoriadis–Khachiyan algorithm in comparison with MCMC for sparse graphs with a small spectral gap. The code is publicly available, so everyone can reproduce the results or use this implementation for their own needs. The work has a purely practical orientation; no theoretical results were obtained.
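For context, the Markov chain Monte Carlo approach mentioned above can be illustrated by a minimal random-walk-with-restart estimator of the PageRank vector: the visit frequencies of a long walk with teleportation approximate the stationary distribution. The graph, damping factor and walk length below are illustrative choices, not those used in the paper's experiments.

```python
import numpy as np

# Minimal random-walk-with-restart (MCMC) PageRank estimator; the graph, damping
# factor and walk length are illustrative, not the paper's experimental setup.
def pagerank_mcmc(adj, n_steps=200_000, damping=0.85, seed=0):
    """adj[i] is the list of out-neighbours of node i."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    visits = np.zeros(n)
    node = rng.integers(n)
    for _ in range(n_steps):
        visits[node] += 1
        if adj[node] and rng.random() < damping:
            node = adj[node][rng.integers(len(adj[node]))]   # follow a random out-edge
        else:
            node = rng.integers(n)                           # teleport / restart
    return visits / visits.sum()

# Example: a directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0 (a uniform vector is expected)
print(pagerank_mcmc([[1], [2], [3], [0]]))
```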
-
A surrogate neural network model for resolving the flow field in serial calculations of steady turbulent flows with a resolution of the near-wall region
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1195-1216
When modeling turbulent flows in practical applications, it is often necessary to carry out a series of calculations for bodies of similar topology, for example, bodies that differ in the shape of the fairing. The use of convolutional neural networks makes it possible to reduce the number of calculations in a series by reconstructing some of them from calculations already performed. The paper proposes a method that allows a convolutional neural network to be applied regardless of how the computational mesh is constructed. To do this, the flow field, together with the body itself, is re-interpolated to a uniform mesh. The geometry of the body is specified using a signed distance function and masking. The reconstruction of the flow field from part of the calculations for similar geometries is carried out using a U-Net-type neural network with a spatial attention mechanism. The resolution of the near-wall region, which is critical for turbulence modeling, is based on the equations obtained in the near-wall domain decomposition method.
The method is demonstrated for a turbulent air flow around a rounded plate with different roundings at fixed parameters of the incoming flow: the Reynolds number $Re = 10^5$ and the Mach number $M = 0.15$. Since flows with such free-stream parameters can be considered incompressible, only the velocity components are studied directly. The flow fields and the velocity and friction profiles obtained by the surrogate model and numerically are compared; the analysis is carried out both on the plate and on the rounding. The simulation results confirm the promise of the proposed approach. In particular, it is shown that even when the model is used at the limits of its applicability, the friction can be recovered with an accuracy of up to 90%. The paper also analyzes the constructed neural network architecture. The obtained surrogate model is compared with alternative models based on a variational autoencoder and on principal component analysis with radial basis functions; this comparison demonstrates the advantages of the proposed method.
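The geometry encoding mentioned above (a signed distance function plus a mask on the uniform grid to which the flow field is re-interpolated) can be sketched as follows; a circular body stands in for the rounded plate, and the channel layout is an illustrative assumption rather than the paper's exact input format.

```python
import numpy as np

# Hedged sketch (not the paper's code) of the geometry encoding: the body is
# passed to the network as a signed distance function (SDF) sampled on the same
# uniform grid as the re-interpolated flow field, together with a binary mask.
ny, nx = 128, 256
y, x = np.meshgrid(np.linspace(-1.0, 1.0, ny), np.linspace(-1.0, 3.0, nx), indexing="ij")

xc, yc, r = 0.0, 0.0, 0.2                 # a circular body stands in for the rounded plate
sdf = np.hypot(x - xc, y - yc) - r        # negative inside the body, positive in the fluid
mask = (sdf <= 0.0).astype(np.float32)    # 1 in solid cells, 0 in fluid cells

# Channels of one sample for a U-Net-type model: geometry plus (here dummy)
# velocity components re-interpolated to the same uniform grid.
u = np.zeros_like(sdf, dtype=np.float32)
v = np.zeros_like(sdf, dtype=np.float32)
sample = np.stack([sdf.astype(np.float32), mask, u, v])
print(sample.shape)                       # (4, 128, 256)
```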
-
Model for operational optimal control of financial resources distribution in a company
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 343-358
The article provides a critical analysis of existing approaches, methods and models for the operational management of a company's financial resources. A number of significant shortcomings of the existing models were identified that limit the scope of their effective use: the models are static, the probabilistic nature of financial flows is not taken into account, and the daily amounts of receivables and payables, which significantly affect the solvency and liquidity of a company, are not identified. This necessitates the development of a new model reflecting the essential properties of the planned system of financial flows: stochasticity, dynamism and non-stationarity.
A model for the distribution of financial flows has been developed. It is based on the principles of optimal dynamic control and provides financial resource planning that ensures an adequate level of liquidity and solvency of a company while accounting for the uncertainty of the initial data. An algorithm for designing the target cash balance, based on the principle of ensuring a company's financial stability under changing financial constraints, is proposed.
A characteristic feature of the proposed model is the representation of the cash distribution process as a discrete dynamic process for which a plan of financial resource allocation is determined that delivers the extremum of an optimality criterion. Such a plan is designed by coordinating payments (cash expenses) with cash receipts. This approach makes it possible to synthesize different plans that differ in combinations of financial outflows and then to select the best one according to a given criterion. The minimum total cost of fines for untimely financing of expenses was taken as the optimality criterion. The constraints of the model are the requirement to maintain the minimum allowable cash balances in the subperiods of the planning period, as well as the obligation to make payments during the planning period, taking their maturity into account. The suggested model efficiently solves the problem of distributing financial resources under uncertainty in the timing and amounts of receipts and of coordinating cash inflows and outflows. The practical significance of the research lies in applying the developed model to improve the quality of financial planning and to increase the management and operational efficiency of a company.
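One hedged way to see the kind of optimization described above is as a small linear program: choose how much of each obligation to pay on each day so that fines for late payments are minimized while the cash balance never falls below the allowed minimum. The sketch below is not the article's model (which is dynamic and stochastic); all numbers are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged illustration of payment scheduling: minimize late-payment fines subject
# to a minimum cash balance and full payment within the planning period.
T = 5                                                 # days in the planning period
inflow = np.array([30.0, 10.0, 0.0, 40.0, 10.0])      # expected cash receipts per day
B0, Bmin = 20.0, 5.0                                  # opening balance, minimum balance
amounts = np.array([50.0, 30.0])                      # payment obligations
due = np.array([1, 3])                                # due day of each obligation (0-based)
fine = np.array([0.02, 0.05])                         # fine per unit per day of delay

n = len(amounts)
# decision variables x[i, t] = amount of obligation i paid on day t, flattened row-wise
c = np.array([fine[i] * max(0, t - due[i]) for i in range(n) for t in range(T)])

# each obligation must be paid in full within the planning period
A_eq = np.zeros((n, n * T))
for i in range(n):
    A_eq[i, i * T:(i + 1) * T] = 1.0
b_eq = amounts

# balance constraint: cumulative payments up to day d may not exceed
# opening balance + cumulative inflows up to day d - Bmin
A_ub = np.zeros((T, n * T))
for d in range(T):
    for i in range(n):
        A_ub[d, i * T:i * T + d + 1] = 1.0
b_ub = B0 + np.cumsum(inflow) - Bmin

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print(res.x.reshape(n, T).round(2), "total fines:", round(res.fun, 3))
```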
-
Estimation of maximal values of biomass growth yield based on the mass-energy balance of cell metabolism
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 723-750
The biomass growth yield is the ratio of the newly synthesized substance of growing cells to the amount of consumed substrate, the source of matter and energy for cell growth. The yield characterizes the efficiency of converting substrate into cell biomass. The conversion is carried out by the cell metabolism, the complete set of biochemical reactions occurring in the cells.
This work revisits the problem of predicting the maximal cell growth yield based on balances of the whole metabolism of a living cell and of its fragments, called partial metabolisms (PMs). The following PMs are used in the present consideration: i) the standard constructive metabolism (SCM), which consists of pathways identical for the growth of various organisms on any substrate; SCM starts from several standard compounds (nodal metabolites): glucose, acetyl-CoA, 2-oxoglutarate, erythrose-4-phosphate, oxaloacetate, ribose-5-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate; and ii) the full forward metabolism (FM), the remaining part of the whole metabolism. The first consumes the high-energy bonds (HEB) formed by the second. In this work we examine a generalized variant of the FM in which the possible presence of extracellular products, as well as the possibility of both aerobic and anaerobic growth, is taken into account. Instead of separate balances for the formation of each nodal metabolite, as was done in our previous work, this work deals with the whole aggregate of these metabolites at once. This makes the solution more compact and requires fewer biochemical quantities and substantially less computational time. An equation expressing the maximal biomass yield via the specific amounts of HEB formed and consumed by the partial metabolisms has been derived. It includes the specific HEB consumption of the SCM, which is a universal biochemical parameter applicable to a wide range of organisms and growth substrates. To determine this parameter correctly, the full constructive metabolism and its forward part are considered for the growth of cells on glucose as the most studied substrate. We use the previously found properties of the elemental composition of the lipid and lipid-free fractions of cell biomass. A numerical study of the effect of various interrelations between the flows via different nodal metabolites showed that the requirements of the SCM for high-energy bonds and NAD(P)H are practically constant. The found HEB-to-formed-biomass coefficient is an efficient tool for estimating the maximal biomass yield from substrates for which the primary metabolism is known. The ATP-to-substrate ratio necessary for the yield estimation is calculated using the special computer program package GenMetPath.
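In the simplest schematic reading of the HEB balance described above (not the equation actually derived in the paper), if the forward metabolism produces $\sigma_{\mathrm{FM}}$ high-energy bonds per unit of consumed substrate and the standard constructive metabolism consumes $\sigma_{\mathrm{SCM}}$ high-energy bonds per unit of synthesized biomass, then equating HEB production and consumption gives

$$ Y_{\max}\,\sigma_{\mathrm{SCM}} = \sigma_{\mathrm{FM}} \quad\Longrightarrow\quad Y_{\max} = \frac{\sigma_{\mathrm{FM}}}{\sigma_{\mathrm{SCM}}}, $$

where $Y_{\max}$ is the biomass formed per unit of substrate consumed. The expression derived in the paper is more detailed, in particular because part of the substrate is consumed by the constructive metabolism itself and extracellular products may be formed.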
-
Application of simplified implicit Euler method for electrophysiological models
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 845-864
The simplified implicit Euler method is analyzed as an alternative to the explicit Euler method, which is commonly used in numerical modeling in electrophysiology. The majority of electrophysiological models are quite stiff, since the dynamics they describe include a wide spectrum of time scales: a fast depolarization lasting milliseconds precedes a considerably slower repolarization, both being parts of the action potential observed in excitable cells. In this work we estimate stiffness by a formula that does not require calculating the eigenvalues of the Jacobian matrix of the studied ODEs. The efficiency of the numerical methods is compared on typical representatives of detailed and conceptual models of excitable cells: the Hodgkin–Huxley model of a neuron and the Aliev–Panfilov model of a cardiomyocyte. The comparison is carried out using norms that are widely used in biomedical applications. The impact of the stiffness ratio on the speedup of the simplified implicit method is studied: a real gain in speed is obtained for the Hodgkin–Huxley model. The benefits of simple and of high-order methods for electrophysiological models are discussed, along with the stability issues of one of the methods. The reasons for preferring simplified over high-order methods in practical simulations are also considered: we calculated higher-order derivatives of the solutions of the Hodgkin–Huxley model with various stiffness ratios, and their maximum absolute values turned out to be quite large. Since the approximation constant of a numerical method contains these values, they negate the effect of the other factor (a small term that depends on the order of approximation), which leads to a large global error. We also carried out a qualitative stability analysis of the explicit Euler method and estimated the influence of the model parameters on the boundary of the region of absolute stability; the latter is used to set the time step for simulations a priori.
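For a single Hodgkin–Huxley gating variable, the "simplified" implicit idea can be sketched as follows: the rate coefficients are evaluated at the already-known voltage, so each step is implicit only in the gating variable itself and has a closed form that stays bounded for any step size. This is a hedged illustration of the general technique, not necessarily the exact scheme analyzed in the paper.

```python
import numpy as np

# Simplified implicit Euler step for a Hodgkin-Huxley-type gating variable
# dy/dt = alpha(V)*(1 - y) - beta(V)*y, with the rates frozen at the known V.
def alpha_n(V):                       # standard HH rate for the n-gate (V in mV)
    return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    return 0.125 * np.exp(-(V + 65.0) / 80.0)

def n_step_explicit(n, V, dt):
    return n + dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)

def n_step_simplified_implicit(n, V, dt):
    a, b = alpha_n(V), beta_n(V)
    return (n + dt * a) / (1.0 + dt * (a + b))   # closed form, bounded for any dt > 0

V, n, dt = -65.0, 0.317, 0.5                     # a deliberately large step (ms)
print("explicit:", n_step_explicit(n, V, dt),
      "simplified implicit:", n_step_simplified_implicit(n, V, dt))
```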
-
Enhancing DevSecOps with continuous security requirements analysis and testing
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1687-1702
The fast-paced environment of DevSecOps requires integrating security at every stage of software development to ensure secure, compliant applications. Traditional methods of security testing, often performed late in the development cycle, are insufficient to address the unique challenges of continuous integration and continuous deployment (CI/CD) pipelines, particularly in complex, high-stakes sectors such as industrial automation. In this paper, we propose an approach that automates the analysis and testing of security requirements by embedding requirements verification into the CI/CD pipeline. Our method employs the ARQAN tool to map high-level security requirements to Security Technical Implementation Guides (STIGs) using semantic search, and RQCODE to formalize these requirements as code, providing testable and enforceable security guidelines. We implemented ARQAN and RQCODE within a CI/CD framework, integrating them with GitHub Actions for real-time security checks and automated compliance verification. Our approach supports established security standards such as IEC 62443 and automates security assessment starting from the planning phase, enhancing the traceability and consistency of security practices throughout the pipeline. Evaluation of this approach in collaboration with an industrial automation company shows that it effectively covers critical security requirements, achieving automated compliance for 66.15% of the STIG guidelines relevant to the Windows 10 platform. Feedback from industry practitioners further underscores its practicality: 85% of security requirements mapped to concrete STIG recommendations, and 62% of these requirements had matching testable implementations in RQCODE. This evaluation highlights the approach's potential to shift security validation earlier in the development process, contributing to a more resilient and secure DevSecOps lifecycle.
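The semantic-search step described above (mapping free-text requirements to STIG guidance) can be sketched with off-the-shelf sentence embeddings and cosine similarity; this is not ARQAN's actual implementation, and the model name, requirement texts and STIG rule titles below are illustrative placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hedged sketch of requirement-to-STIG matching by embedding similarity.
requirements = [
    "Accounts must be locked after repeated failed login attempts",
    "Audit logs must be protected from unauthorized modification",
]
stig_rules = [
    "Account lockout threshold must be configured to 3 or fewer invalid attempts",
    "Permissions for the Security event log must restrict access to privileged accounts",
    "The screen saver must be password protected",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
req_emb = model.encode(requirements, normalize_embeddings=True)
stig_emb = model.encode(stig_rules, normalize_embeddings=True)

similarity = req_emb @ stig_emb.T          # cosine similarity (embeddings are unit-normalized)
for i, req in enumerate(requirements):
    j = int(np.argmax(similarity[i]))
    print(f"{req!r} -> {stig_rules[j]!r}  (score {similarity[i, j]:.2f})")
```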
-
Development of a computational environment for mathematical modeling of superconducting nanostructures with a magnet
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1349-1358
Nowadays the main research activity in the field of nanotechnology is aimed at the creation, study and application of new materials and new structures. Recently, much attention has been attracted by the possibility of controlling magnetic properties using a superconducting current, as well as by the influence of magnetic dynamics on the current–voltage characteristics of hybrid superconductor/ferromagnet (S/F) nanostructures. In particular, such structures include the S/F/S Josephson junction and molecular nanomagnets coupled to Josephson junctions. Theoretical studies of the dynamics of such structures require solving a large number of coupled nonlinear equations. Numerical modeling of hybrid superconductor/magnet nanostructures involves calculating both the magnetic dynamics and the dynamics of the superconducting phase, which strongly increases the complexity and scale of the computations, so it is advisable to use heterogeneous computing systems.
In the course of studying the physical properties of these objects, it becomes necessary to numerically solve complex systems of nonlinear differential equations, which requires significant time and computational resources.
The existing micromagnetic algorithms and frameworks are based on the finite difference or finite element method and are extremely useful for modeling magnetization dynamics on a wide range of time scales. However, the functionality of the existing packages does not allow the desired computation scheme to be implemented in full.
The aim of the research is to develop a unified environment for modeling hybrid superconductor/magnet nanostructures that provides access to solvers and developed algorithms and is based on a heterogeneous computing paradigm, enabling the study of superconducting elements in nanoscale structures with magnets and in hybrid quantum materials. In this paper, we investigate resonant phenomena in a system of a nanomagnet coupled to a Josephson junction; such a system has rich resonant physics. To study the possibility of magnetization reversal depending on the model parameters, it is necessary to solve numerically the Cauchy problem for a system of nonlinear equations. For the numerical simulation of hybrid superconductor/magnet nanostructures, a computing environment based on the heterogeneous HybriLIT computing platform has been implemented. All reported computation times were averaged over three runs. The results obtained are of great practical importance and provide the necessary information for evaluating the physical parameters of superconductor/magnet hybrid nanostructures.
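The coupled dynamics referred to above is typically of the Landau–Lifshitz–Gilbert plus resistively-shunted-junction type; a schematic form of such a system (not necessarily the exact model solved in the paper) is

$$ \begin{aligned} \frac{d\mathbf{m}}{dt} &= -\frac{\gamma}{1+\alpha^{2}}\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} - \frac{\gamma\alpha}{1+\alpha^{2}}\,\mathbf{m}\times\bigl(\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}\bigr), \\ \frac{\hbar}{2eR}\,\frac{d\varphi}{dt} + I_{c}\sin\varphi &= I(t), \end{aligned} $$

where $\mathbf{m}$ is the unit magnetization of the nanomagnet, $\alpha$ is the Gilbert damping, $\varphi$ is the Josephson phase, and the coupling between the two subsystems enters through the effective field $\mathbf{H}_{\mathrm{eff}}$ (and, in some models, through an anomalous phase shift). Solving the Cauchy problem for such a system over many parameter values is what motivates the heterogeneous computing environment.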
-
Classification of pest-damaged coniferous trees in unmanned aerial vehicles images using convolutional neural network models
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1271-1294
This article considers the task of multiclass classification of coniferous trees with varying degrees of damage by insect pests in images obtained using unmanned aerial vehicles (UAVs). We propose the use of convolutional neural networks (CNNs) for the classification of fir trees Abies sibirica and Siberian pine trees Pinus sibirica in UAV imagery. In our approach, we develop three CNN models based on the classical U-Net architecture, designed for pixel-wise classification of images (semantic segmentation). The first model, Mo-U-Net, incorporates several changes to the classical U-Net model. The second and third models, MSC-U-Net and MSC-Res-U-Net, respectively, are ensembles of three Mo-U-Net models, each varying in depth and input image size; additionally, the MSC-Res-U-Net model integrates residual blocks. To validate our approach, we created two datasets of UAV images depicting Abies sibirica and Pinus sibirica trees affected by pests and trained the three proposed CNN models using mIoULoss and Focal Loss as loss functions. Subsequent evaluation focused on the effectiveness of each trained model in classifying damaged trees. The results indicate that when mIoULoss served as the loss function, the proposed models fell short of practical applicability in the forestry industry, failing to achieve classification accuracy above the threshold value of 0.5 for individual classes of both tree species according to the IoU metric. However, with Focal Loss, the MSC-Res-U-Net and Mo-U-Net models, in contrast to the third proposed model, MSC-U-Net, exhibited high classification accuracy (surpassing the threshold value of 0.5) for all classes of Abies sibirica and Pinus sibirica trees. These results underscore the practical significance of the MSC-Res-U-Net and Mo-U-Net models for forestry professionals, enabling accurate classification and early detection of pest outbreaks in coniferous trees.
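Two ingredients mentioned above, the focal loss used for training and the per-class IoU used with the 0.5 acceptance threshold, are standard and can be sketched as follows; the gamma and alpha values are common defaults, not necessarily the paper's settings.

```python
import numpy as np

# Hedged sketch (not the authors' code) of the binary focal loss and of the
# per-class intersection-over-union used for the 0.5 acceptance threshold.
def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss for predicted probabilities p and 0/1 labels y."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)       # class-balancing weight
    return float(np.mean(-w * (1.0 - pt) ** gamma * np.log(pt)))

def iou(pred_mask, true_mask):
    """Intersection-over-union of two boolean masks for one class."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0

y = np.array([1, 1, 0, 0, 1])
p = np.array([0.9, 0.6, 0.2, 0.4, 0.3])
print("focal loss:", round(focal_loss(p, y), 4))
print("IoU:", iou(p > 0.5, y == 1))
```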
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




