-
Approach to development of algorithms of Newtonian methods of unconstrained optimization, their software implementation and benchmarking
Computer Research and Modeling, 2013, v. 5, no. 3, pp. 367-377. Views (last year): 2. Citations: 7 (RSCI). An approach is proposed to increase the efficiency of the Gill and Murray algorithm of Newtonian unconstrained optimization methods with step adjustment; it rests on Cholesky factorization. It is proved that the strategy of choosing the descent direction also determines the solution of the problems of scaling the descent steps, of approximation by non-quadratic functions, and of integration with a trust-region method.
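A minimal sketch of the kind of Newton step the abstract refers to, in which positive definiteness is enforced through Cholesky factorization; the simple diagonal shift used here is an illustrative stand-in for the Gill and Murray modified factorization and the step-adjustment strategy of the paper, not the authors' algorithm.

```python
import numpy as np

def modified_newton_direction(grad, hess, tau0=1e-3):
    """Newton descent direction with the Hessian shifted by tau*I until a
    Cholesky factorization succeeds, i.e. until the matrix is positive
    definite (a simplified stand-in for the Gill-Murray modification)."""
    n = hess.shape[0]
    tau = 0.0
    while True:
        try:
            # Successful factorization means H + tau*I is positive definite.
            L = np.linalg.cholesky(hess + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, tau0)
    # Solve (H + tau*I) p = -grad using the triangular factor L.
    y = np.linalg.solve(L, -grad)
    return np.linalg.solve(L.T, y)
```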
-
Benchmarking of CAE FlowVision in ship flow simulation
Computer Research and Modeling, 2014, v. 6, no. 6, pp. 889-899. Views (last year): 1. Citations: 5 (RSCI). In the field of naval architecture, the most authoritative recommendations on verification and validation of numerical methods have been developed within an international workshop on the numerical prediction of ship viscous flow, held every five years in Gothenburg (Sweden) and Tokyo (Japan) alternately. At the "Gothenburg–2000" workshop, three modern hull forms with reliable experimental data were introduced as test cases. The most general case among them is the containership KCS, a ship of moderate specific speed and fullness. The paper focuses on a numerical study of the flow around the KCS hull, carried out according to the formal procedures of the workshop with the CAE package FlowVision. The results are compared with experimental data and with numerical results obtained with other leading CAE packages.
-
Iterative diffusion importance: advancing edge criticality evaluation in complex networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 783-797. This paper is devoted to the identification and ranking of critical edges in complex networks, a problem within a modern research direction of network science. Diffusion importance is one of the acknowledged methods for identifying the significant connections in a graph that are critical to retaining its structural integrity. In the present work, we develop the Iterative Diffusion Importance algorithm, which is based on re-estimating critical topological features at each step of the graph deconstruction. The Iterative Diffusion Importance is compared with diffusion importance and degree product, two well-known benchmark algorithms. As benchmark networks, we tested the Iterative Diffusion Importance on three standard networks, Zachary's Karate Club, the American Football Network, and the Dolphins Network, which are often used for evaluating algorithm efficiency and differ in size and density. We also propose a new benchmark network representing the air connections between Japan and the US. The numerical experiment on ranking critical edges and the subsequent network decomposition demonstrates that the proposed Iterative Diffusion Importance exceeds the conventional diffusion importance in efficiency by 2–35%, depending on the network complexity, the number of nodes, and the number of edges. The only drawback of the Iterative Diffusion Importance is an increase in computational complexity and hence in runtime, but this can easily be compensated for by preliminary planning of the network deconstruction or protection and by reducing the re-evaluation frequency of the iterative process.
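A minimal sketch of the re-estimation idea behind the iterative scheme described above: the currently most critical edge is removed and the scores are recomputed on the reduced graph. The degree-product score used here is only a placeholder for illustration, not the diffusion-importance formula from the paper.

```python
import networkx as nx

def iterative_edge_ranking(G, score=lambda G, u, v: G.degree(u) * G.degree(v)):
    """Rank edges by repeatedly removing the currently highest-scoring edge
    and re-evaluating the score on the reduced graph (the re-estimation
    step of an iterative criticality ranking)."""
    H = G.copy()
    ranking = []
    while H.number_of_edges() > 0:
        u, v = max(H.edges(), key=lambda e: score(H, *e))
        ranking.append((u, v))
        H.remove_edge(u, v)
    return ranking

# Example on Zachary's Karate Club, one of the benchmark networks mentioned above.
print(iterative_edge_ranking(nx.karate_club_graph())[:5])
```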
-
Analysis of dissipative properties of a hybrid large-particle method for structurally complicated gas flows
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 757-772. We study the computational properties of a parametric class of finite-volume schemes with customizable dissipative properties and splitting by physical processes into Lagrangian, Eulerian, and final stages (the hybrid large-particle method). The method has second-order approximation in space and time on smooth solutions. The regularization of the numerical solution at the Lagrangian stage is performed by nonlinear correction of the artificial viscosity. Regardless of the grid resolution, the artificial viscosity tends to zero outside the zones of discontinuities and extrema in the solution. At the Eulerian and final stages, the primitive variables (density, velocity, and total energy) are first reconstructed by an additive combination of upwind and central approximations weighted by a flux limiter (see the sketch after this abstract), and the numerical divergent fluxes are then formed from them. In this way, discrete analogs of the conservation laws are satisfied.
We analyze the dissipative properties of the method using known viscosity and flux limiters, as well as their linear combinations. The resolution of the scheme and the quality of numerical solutions are demonstrated on two-dimensional benchmarks: gas flow around a step at Mach numbers 3, 10 and 20, the double Mach reflection of a strong shock wave, and the implosion problem. The influence of the scheme viscosity on the numerical reproduction of the instability of a gas interface is studied. It is found that decreasing the dissipation level in the implosion problem leads to the destruction of the symmetric solution and the formation of a chaotic instability on the contact surface.
Numerical solutions are compared with the results of other authors obtained using higher-order approximation schemes: CABARET, HLLC (Harten–Lax–van Leer Contact), CFLFh (CFLF hybrid scheme), JT (centered scheme with the limiter by Jiang and Tadmor), PPM (Piecewise Parabolic Method), WENO5 (weighted essentially non-oscillatory scheme), RKDG (Runge–Kutta Discontinuous Galerkin), and the hybrid weighted nonlinear schemes CCSSR-HW4 and CCSSR-HW6. The advantages of the hybrid large-particle method include extended possibilities for solving hyperbolic and mixed-type problems, a good ratio of dissipative to dispersive properties, and a combination of algorithmic simplicity with high resolution in problems featuring complex shock-wave structures as well as instability and vortex formation at interfaces.
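As a schematic illustration of the reconstruction step mentioned in the abstract, the sketch below blends an upwind and a central face value through a flux limiter; the van Leer limiter and the scalar one-dimensional setting are assumptions made for illustration, not the limiters and variables studied in the paper.

```python
import numpy as np

def limited_face_value(q_im1, q_i, q_ip1):
    """Face reconstruction as an additive blend of upwind and central
    approximations weighted by a flux limiter (van Leer limiter shown)."""
    eps = 1e-12
    r = (q_i - q_im1) / (q_ip1 - q_i + eps)      # smoothness ratio
    phi = (r + abs(r)) / (1.0 + abs(r))          # van Leer limiter, 0 <= phi < 2
    upwind = q_i                                 # first-order upwind face value
    central = 0.5 * (q_i + q_ip1)                # second-order central face value
    return upwind + phi * (central - upwind)     # limited blend
```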
-
Using RAG technology and large language models to search for documents and obtain information in corporate information systems
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 871-888. This paper investigates the effectiveness of Retrieval-Augmented Generation (RAG) combined with various Large Language Models (LLMs) for document retrieval and information access in corporate information systems. We survey typical use cases of LLMs in enterprise environments, outline the RAG architecture, and discuss the major challenges that arise when integrating LLMs into a RAG pipeline. A system architecture is proposed that couples a text-vector encoder with an LLM. The encoder builds a vector database that indexes a library of corporate documents. For every user query, relevant contextual fragments are retrieved from this library via the FAISS engine and appended to the prompt given to the LLM. The LLM then generates an answer grounded in the supplied context. The overall structure and workflow of the proposed RAG solution are described in detail. To justify the choice of the generative component, we benchmark a set of widely used LLMs — ChatGPT, GigaChat, YandexGPT, Llama, Mistral, Qwen, and others — when employed as the answer-generation module. Using an expert-annotated test set of queries, we evaluate the accuracy, completeness, linguistic quality, and conciseness of the responses. Model-specific characteristics and average response latencies are analysed; the study highlights the significant influence of available GPU memory on the throughput of local LLM deployments. An overall ranking of the models is derived from an aggregated quality metric. The results confirm that the proposed RAG architecture provides efficient document retrieval and information delivery in corporate environments. Future research directions include richer context augmentation techniques and a transition toward agent-based LLM architectures. The paper concludes with practical recommendations on selecting an optimal RAG–LLM configuration to ensure fast and precise access to enterprise knowledge assets.
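A minimal sketch of the retrieval step of such a RAG pipeline, assuming a sentence-transformers encoder and a flat FAISS index; the model name, document snippets and prompt template are placeholders, not the configuration evaluated in the paper.

```python
import numpy as np
import faiss                                            # vector search engine named in the abstract
from sentence_transformers import SentenceTransformer  # encoder choice is an assumption

docs = ["...corporate document fragment 1...",
        "...corporate document fragment 2..."]

encoder = SentenceTransformer("all-MiniLM-L6-v2")       # placeholder model
emb = encoder.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(emb.shape[1])                 # flat L2 index over document vectors
index.add(emb)

def build_prompt(query, k=2):
    """Retrieve the k nearest fragments and prepend them to the user query."""
    q = encoder.encode([query], convert_to_numpy=True).astype("float32")
    _, ids = index.search(q, k)
    context = "\n".join(docs[i] for i in ids[0])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
```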
-
Benchmark «line-by-line» calculations of atmospheric radiation
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 553-562. Views (last year): 4. Citations: 3 (RSCI). The paper presents a methodology for «line-by-line» calculations of the thermal radiation of the Earth and atmosphere. The radiation intensity is computed by numerical integration of the radiative transfer kinetic equation and the system of angular moment equations using the quasi-diffusion method. Data from the HITRAN molecular spectroscopic database [Rothman et al., 2009] are used to calculate the optical parameters of the atmosphere.
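As a schematic illustration of a «line-by-line» computation, the sketch below sums individual spectral line profiles over a wavenumber grid; the Lorentz shape and constant half-width are simplifications for illustration (practical calculations use Voigt profiles with pressure- and temperature-dependent parameters taken from HITRAN).

```python
import numpy as np

def absorption_coefficient(nu, lines, gamma=0.07):
    """Line-by-line absorption coefficient on a wavenumber grid `nu` as a
    sum of Lorentz profiles; `lines` is a list of (line center nu0, line
    intensity S) pairs taken from a spectroscopic database."""
    k = np.zeros_like(nu)
    for nu0, S in lines:
        k += S * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return k
```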
-
Repressilator with time-delayed gene expression. Part II. Stochastic description
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 587-609. The repressilator is the first genetic regulatory network in synthetic biology, artificially constructed in 2000. It is a closed network of three genetic elements $lacI$, $\lambda cI$ and $tetR$, which have a natural origin but are not found in nature in such a combination. The promoter of each of the three genes controls the next cistron via negative feedback, suppressing the expression of the neighboring gene. In our previous paper [Bratsun et al., 2018], we proposed a mathematical model of a delayed repressilator and studied its properties within the framework of a deterministic description. We assume that the delay can be both natural, i.e. arising during gene transcription/translation due to the multistage nature of these processes, and artificial, i.e. deliberately introduced into the operation of the regulatory network using genetic engineering technologies. In this work, we apply a stochastic description of the dynamic processes in a delayed repressilator, which is an important addition to the deterministic analysis because of the small number of molecules involved in gene regulation. The stochastic study is carried out numerically using the Gillespie algorithm modified for time-delay systems. We present a description of the algorithm, its software implementation, and the results of benchmark simulations for a one-gene delayed autorepressor. When studying the behavior of the repressilator, we show that the stochastic description in a number of cases gives new information about the behavior of the system that does not reduce to the deterministic dynamics even when averaged over a large number of realizations. We show that in the subcritical range of parameters, where deterministic analysis predicts absolute stability of the system, quasi-regular oscillations may be excited due to the nonlinear interaction of noise and delay. Earlier, within the framework of the deterministic description, we discovered a long-lived transient regime represented in phase space by a slow manifold. This mode reflects the process of long-term synchronization of protein pulsations in the work of the repressilator genes. In this work, we show that the transition to the cooperative mode of gene operation occurs two orders of magnitude faster when the effect of intrinsic noise is taken into account. We obtain the probability distribution of the moment when the phase trajectory leaves the slow manifold and determine the most probable time for such a transition. The influence of the intrinsic noise of chemical reactions on the dynamic properties of the repressilator is discussed.
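A minimal sketch of a Gillespie-type stochastic simulation with a single fixed delay, in the spirit of the modification mentioned above; the bookkeeping of pending delayed reactions via a priority queue and the single delay time are illustrative simplifications, not the authors' implementation or the repressilator model itself.

```python
import heapq, math, random

def delayed_ssa(x0, prop, updates, is_delayed, tau_d, t_end):
    """Gillespie-type simulation with a fixed delay: non-delayed reactions
    change the state immediately, delayed reactions are queued on initiation
    and their update is applied tau_d later; if a queued completion falls
    inside the sampled waiting time, the simulation jumps to the completion."""
    t, x, queue = 0.0, list(x0), []
    while t < t_end:
        a = [f(x) for f in prop]                     # propensities at the current state
        a0 = sum(a)
        if a0 == 0 and not queue:
            break
        dt = -math.log(1.0 - random.random()) / a0 if a0 > 0 else float("inf")
        if queue and queue[0][0] <= t + dt:
            t, upd = heapq.heappop(queue)            # apply the pending delayed update
            x = [xi + di for xi, di in zip(x, upd)]
            continue
        t += dt
        r, i = random.random() * a0, 0               # choose the firing reaction
        while i < len(a) - 1 and r > a[i]:
            r -= a[i]
            i += 1
        if is_delayed[i]:
            heapq.heappush(queue, (t + tau_d, updates[i]))   # fire later
        else:
            x = [xi + di for xi, di in zip(x, updates[i])]   # fire now
    return x
```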
-
Methodology of aircraft icing calculation in a wide range of climate and speed parameters. Applicability within the NLG-25 airworthiness standards
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 957-978. Certification of a transport airplane for flights under icing conditions in Russia has been carried out within the framework of the requirements of Annex C to the AP-25 Aviation Rules. The new Russian certification document “Airworthiness Standards” (NLG-25), in force since 2023 as a replacement for AP-25, proposes the introduction of Appendix O. A feature of Appendix O is the need to carry out calculations under conditions of high liquid water content and with large water drops (500 microns or more). With such parameters of the dispersed flow, physical processes such as the disruption and splashing of the water film when large drops enter it become decisive, and the flow of the dispersed medium is essentially polydisperse. This paper describes the modifications of the IceVision technique, implemented on the basis of the FlowVision software package, for ice accretion calculations within the framework of Appendix O.
The main difference between the IceVision technique and known approaches is the use of Volume of Fluid (VOF) technology to track changes in the ice shape. The external flow around the aircraft is calculated simultaneously with the growth of the ice and its heating. The ice is explicitly included in the computational domain, and the heat transfer equation is solved in it. Unlike Lagrangian approaches, the Eulerian computational grid is not completely rebuilt in the IceVision technique: only the cells containing the contact surface are changed.
The IceVision 2.0 version accounts for film stripping, as well as bouncing and splashing of impinging drops on the surfaces of the aircraft and the ice. The diameter of the secondary droplets is calculated using known empirical correlations. The speed of the water film flowing over the surface is determined taking into account the action of aerodynamic forces, gravity, the hydrostatic pressure gradient and surface tension. Accounting for surface tension produces the effect of film contraction, which leads to the formation of water flows in the form of rivulets and of ice deposits in the form of comb-like growths. An energy balance relation is satisfied on the ice surface that takes into account the energy of the impinging drops, heat exchange between the ice and the air, and the heat of crystallization, evaporation, sublimation and condensation. The paper presents the results of solving benchmark and model problems, demonstrating the effectiveness of the IceVision technique and the reliability of the obtained results.
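As a purely schematic illustration of the kind of surface energy balance named in the abstract (the grouping of terms and their signs here are assumptions, not the IceVision formulation), the heat fluxes on the ice surface can be grouped as

$$ Q_{\text{drops}} + Q_{\text{cryst}} + Q_{\text{cond}} = Q_{\text{conv}} + Q_{\text{evap}} + Q_{\text{subl}}, $$

where $Q_{\text{drops}}$ is the energy brought by the impinging drops, $Q_{\text{cryst}}$ the heat released by crystallization, $Q_{\text{conv}}$ the convective heat exchange with the air, and $Q_{\text{cond}}$, $Q_{\text{evap}}$, $Q_{\text{subl}}$ the latent-heat contributions of condensation, evaporation and sublimation.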
-
Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170. Social media activity is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution of the classification problem of determining the influence of social media activity on financial market movements. Reputable crypto-trading influencers are selected, and packages of their Twitter posts are used as data. Preprocessing of the texts, which are characterized by heavy use of slang words and abbreviations, consists of lemmatization with Stanza and the use of regular expressions. A word is treated as an element of the vector describing a data unit when solving the binary classification problem. The best markup parameters for processing Binance candles are searched for. Feature selection, which is necessary for a precise description of the text data and the subsequent process of establishing the dependence, is carried out by machine learning and statistical analysis. The first approach to feature selection is based on an information criterion; it is implemented in a random forest model and is relevant to selecting features for splitting nodes in a decision tree. The second is based on the rigid compilation of a binary vector during a rough check of the presence or absence of a word in the package and counting the sum of the elements of this vector; a decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of mentions of the word. The algorithm used to solve the problem was named benchmark and analyzed as a tool; similar algorithms are often used in automated trading strategies. The study also describes observations of the influence of frequently occurring words, which are used as a basis of dimension 2 and 3 in vectorization.
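A minimal sketch of the second, rigid rule described above: a binary vector records which keywords occur in a package of posts, and a signal is emitted when the number of hits exceeds a preset threshold. The keyword list and threshold are illustrative placeholders, not those derived in the paper.

```python
def rigid_signal(posts, keywords, threshold):
    """Build a binary presence vector over the keywords for a package of
    posts and return True when the number of hits reaches the threshold."""
    text = " ".join(posts).lower()
    hits = [1 if w in text else 0 for w in keywords]
    return sum(hits) >= threshold

# Hypothetical usage with made-up posts, keywords and threshold.
print(rigid_signal(["BTC to the moon", "buy the dip"], ["moon", "dip", "crash"], 2))
```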
-
Boundary conditions for lattice Boltzmann equations in applications to hemodynamics
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 865-882. We consider a one-dimensional three-velocity kinetic lattice Boltzmann model, which represents a second-order difference scheme for the hydrodynamic equations. In the framework of kinetic theory, this system describes the propagation and interaction of three types of particles. It has been shown previously that the lattice Boltzmann model with an external virtual force is equivalent in the hydrodynamic limit to the one-dimensional hemodynamic equations for elastic vessels; this equivalence can be established using the Chapman–Enskog expansion. The external force in the model makes it possible to adjust the functional dependence between the lumen area of the vessel and the pressure applied to its wall. Thus, the form of the external force allows various elastic properties of the vessels to be modeled. In the present paper, physiological boundary conditions at the inlets and outlets of the arterial network are formulated in terms of the lattice Boltzmann variables. We consider the following boundary conditions: conditions for pressure and blood flow at the inlet of the vascular network; conditions for pressure and blood flow at vessel bifurcations; wave reflection conditions (corresponding to complete occlusion of a vessel) and wave absorption conditions at the ends of vessels (corresponding to the passage of a wave without distortion); as well as RCR-type conditions, which are analogous to electrical circuits and consist of two resistors (corresponding to the impedance of the vessel at whose end the boundary condition is set and to the friction forces in the microcirculatory bed) and one capacitor (describing the elastic properties of the arterioles). Numerical simulations were performed: the propagation of blood in a network of three vessels was considered, with blood-flow boundary conditions set at the entrance of the network and RCR boundary conditions at its ends. The solutions of the lattice Boltzmann model are compared with benchmark solutions (numerical calculations with a second-order MacCormack difference scheme without viscous terms), and it is shown that both approaches give very similar results.
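A minimal sketch of the collide-and-stream core of a one-dimensional three-velocity lattice Boltzmann model; the BGK relaxation, the generic `equilibrium` callback and the omission of the external virtual force of the paper's model are assumptions made for illustration, and the boundary treatment that the paper formulates is deliberately left out.

```python
import numpy as np

def d1q3_step(f, tau, equilibrium):
    """One collide-and-stream update of a D1Q3 lattice Boltzmann model with
    single-relaxation-time (BGK) collisions. `f` has shape (3, N) for the
    velocities (-1, 0, +1); `equilibrium(rho, u)` returns the local
    equilibrium populations. After streaming, the populations entering the
    domain at the two ends must be reset from the chosen inlet/outlet
    condition (pressure, flow, reflection, absorption or RCR)."""
    rho = f.sum(axis=0)                          # zeroth moment (density / lumen area)
    u = (f[2] - f[0]) / rho                      # first moment (velocity)
    f = f + (equilibrium(rho, u) - f) / tau      # BGK relaxation toward equilibrium
    f[0] = np.roll(f[0], -1)                     # stream the c = -1 population leftward
    f[2] = np.roll(f[2], +1)                     # stream the c = +1 population rightward
    return f
```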