- Advanced neural network models for UAV-based image analysis in remote pathology monitoring of coniferous forests
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 641-663
The key problems of remote forest pathology monitoring for coniferous forests affected by insect pests have been analyzed. It has been demonstrated that addressing these tasks requires the use of multiclass classification results for coniferous trees in high- and ultra-high-resolution images, which are promptly obtained through monitoring via satellites or unmanned aerial vehicles (UAVs). An analytical review of modern models and methods for multiclass classification of coniferous forest images was conducted, leading to the development of three fully convolutional neural network models: Mo-U-Net, At-Mo-U-Net, and Res-Mo-U-Net, all based on the classical U-Net architecture. Additionally, the Segformer transformer model was modified to suit the task. For RGB images of fir trees Abies sibirica affected by the four-eyed bark beetle Polygraphus proximus, captured using a UAV-mounted camera, two datasets were created: the first dataset contains image fragments and their corresponding reference segmentation masks sized 256 × 256 × 3 pixels, while the second dataset contains fragments sized 480 × 480 × 3 pixels. Comprehensive studies were conducted on each trained neural network model to evaluate both classification accuracy for assessing the degree of damage (health status) of Abies sibirica trees and computation speed using test datasets from each set. The results revealed that for fragments sized 256 × 256 × 3 pixels, the At-Mo-U-Net model with an attention mechanism is preferred alongside the Modified Segformer model. For fragments sized 480 × 480 × 3 pixels, the Res-Mo-U-Net hybrid model with residual blocks demonstrated superior performance. Based on classification accuracy and computation speed results for each developed model, it was concluded that, for production-scale multiclass classification of affected fir trees, the Res-Mo-U-Net model is the most suitable choice. This model strikes a balance between high classification accuracy and fast computation speed, meeting conflicting requirements effectively.
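The abstract does not give the internal layout of the Res-Mo-U-Net residual blocks. As a hedged illustration only, a residual convolutional block of the kind typically inserted into U-Net hybrids might look as follows; the channel counts and layer choices are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual conv block of the kind used in U-Net hybrids (illustrative sizes)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# A 256 x 256 x 3 RGB fragment, as in the first dataset described above
x = torch.randn(1, 3, 256, 256)
print(ResidualBlock(3, 64)(x).shape)  # torch.Size([1, 64, 256, 256])
```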
- Using RAG technology and large language models to search for documents and obtain information in corporate information systems
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 871-888
This paper investigates the effectiveness of Retrieval-Augmented Generation (RAG) combined with various Large Language Models (LLMs) for document retrieval and information access in corporate information systems. We survey typical use-cases of LLMs in enterprise environments, outline the RAG architecture, and discuss the major challenges that arise when integrating LLMs into a RAG pipeline. A system architecture is proposed that couples a text-vector encoder with an LLM. The encoder builds a vector database that indexes a library of corporate documents. For every user query, relevant contextual fragments are retrieved from this library via the FAISS engine and appended to the prompt given to the LLM. The LLM then generates an answer grounded in the supplied context. The overall structure and workflow of the proposed RAG solution are described in detail. To justify the choice of the generative component, we benchmark a set of widely used LLMs — ChatGPT, GigaChat, YandexGPT, Llama, Mistral, Qwen, and others — when employed as the answer-generation module. Using an expert-annotated test set of queries, we evaluate the accuracy, completeness, linguistic quality, and conciseness of the responses. Model-specific characteristics and average response latencies are analysed; the study highlights the significant influence of available GPU memory on the throughput of local LLM deployments. An overall ranking of the models is derived from an aggregated quality metric. The results confirm that the proposed RAG architecture provides efficient document retrieval and information delivery in corporate environments. Future research directions include richer context augmentation techniques and a transition toward agent-based LLM architectures. The paper concludes with practical recommendations on selecting an optimal RAG–LLM configuration to ensure fast and precise access to enterprise knowledge assets.
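As a minimal sketch of the retrieval step described above: the snippet builds a FAISS index over fragment embeddings and assembles a context-augmented prompt. The embed function, the fragment texts, the index type (flat inner-product) and the prompt template are illustrative assumptions, not the configuration used in the paper:

```python
import numpy as np
import faiss  # vector search engine named in the pipeline above

def embed(texts):
    """Placeholder encoder: replace with a real text-vector model."""
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

fragments = ["Vacation policy: ...", "VPN setup guide: ...", "Expense report rules: ..."]
vectors = embed(fragments)
faiss.normalize_L2(vectors)                  # cosine similarity via inner product
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

query = "How do I configure the corporate VPN?"
q = embed([query])
faiss.normalize_L2(q)
_, ids = index.search(q, 2)                  # retrieve the 2 most relevant fragments

context = "\n".join(fragments[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` is then passed to the chosen LLM (GigaChat, YandexGPT, Llama, etc.)
print(prompt)
```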
- Research of deformation behavior of bone fragment at an axial compression, containing compact and spongy layers of different density
Computer Research and Modeling, 2013, v. 5, no. 3, pp. 433-441. Views (last year): 3. Citations: 2 (RSCI).
The results of computer simulation of the deformation behavior of a bone fragment containing compact and spongy layers of different density under axial compression are presented. The calculations show that, as the densities of the spongy and compact constituents change, the prevailing deformation mode of the bone sample can switch from compression to bending and vice versa.
- Synchronous components of financial time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655
The article proposes a method of joint analysis of multidimensional financial time series based on the evaluation of a set of properties of stock quotes in a sliding time window and the subsequent averaging of property values over all analyzed companies. The main purpose of the analysis is to construct measures of joint behavior of time series that react to the occurrence of a synchronous or coherent component. The coherence of the behavior of the characteristics of a complex system is an important feature that makes it possible to evaluate the approach of the system to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea of increasing correlation of random fluctuations of the system parameters as it approaches the critical state. The increments of stock-value time series have a pronounced chaotic character and a large amplitude of individual noise, against which a weak common signal can be detected only on the basis of its correlation in different scalar components of a multidimensional time series. It is known that classical methods of analysis based on correlations between neighboring samples are ineffective for processing financial time series, since from the point of view of the correlation theory of random processes, increments in the value of shares formally have all the attributes of white noise (in particular, a “flat” spectrum and a “delta-shaped” autocorrelation function). In connection with this, it is proposed to pass from analyzing the initial signals to examining the sequences of their nonlinear properties calculated in time fragments of small length. As such properties, the entropy of the wavelet coefficients in the decomposition over the Daubechies basis, the multifractal parameters and an autoregressive measure of signal nonstationarity are used. Measures of synchronous behavior of time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization time intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
Keywords: financial time series, wavelets, entropy, multi-fractals, predictability, synchronization. Views (last year): 12. Citations: 2 (RSCI).
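As an illustration of the sliding-window property computation described in the abstract above, here is a minimal sketch of one such property, the entropy of Daubechies wavelet coefficients, evaluated over a synthetic return series; the window length, step and wavelet order are assumptions, not the article's settings:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(x, wavelet="db4"):
    """Shannon entropy of the normalized energies of the wavelet decomposition."""
    coeffs = pywt.wavedec(x, wavelet)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
returns = rng.standard_normal(2000)          # stand-in for daily price increments
window, step = 128, 16
entropy_series = [
    wavelet_entropy(returns[i:i + window])
    for i in range(0, len(returns) - window, step)
]
# Sequences like this, built for each of the 16 stocks, are then compared via
# principal components / pairwise correlations to detect synchronous behaviour.
print(len(entropy_series), entropy_series[:3])
```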
- Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930
Inhibitors of histone deacetylases (HDACi) have been considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation, and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. The vascular endothelial growth factor (VEGF), secreted by malignant tumors, is a key angiogenic agent that induces the proliferation and migration of vascular endothelial cells. Currently, the most promising strategy in the fight against cancer is the creation of hybrid drugs that simultaneously act on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties were studied as dual VEGFR-2/HDAC inhibitors using the simplex representation of molecular structure and the Support Vector Machine (SVM) method. The total sample of 42 compounds was divided into training and test sets. Five-fold cross-validation was used for internal validation. Satisfactory quantitative structure–activity relationship (QSAR) models were constructed ($R^2_{test}$ = 0.64–0.87) for inhibitors of HDAC, VEGFR-2 and the human breast cancer cell line MCF-7. The interpretation of the obtained QSAR models was carried out. The coordinated effect of different molecular fragments on the increase of antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine to all three types of activity can be distinguished. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For comparative QSAR research we used physicochemical descriptors calculated by the program HYBOT, the Random Forest (RF) method, and the on-line version of the expert system OCHEM (https://ochem.eu). In the OCHEM modeling, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the help of the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
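To make the modeling workflow above concrete, here is a minimal sketch of SVM regression with 5-fold cross-validation; the descriptors and activity values are synthetic placeholders, not the simplex descriptors or the real 42-compound dataset:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.random((42, 50))            # 42 compounds x 50 structural descriptors (synthetic)
y = rng.random(42) * 3 + 5          # pIC50-like activities (synthetic)

# Descriptor scaling followed by an RBF-kernel SVM regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # 5-fold cross-validation
print("R^2 per fold:", np.round(r2_scores, 2))
```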
- Biomechanics of DNA: rotational oscillations of bases
Computer Research and Modeling, 2011, v. 3, no. 3, pp. 319-328. Views (last year): 3. Citations: 2 (RSCI).
In this paper we study the rotational oscillations of the nitrogenous bases forming a central pair in a short DNA fragment consisting of three base pairs. A simple mechanical analog of the fragment has been constructed, in which the bases are imitated by pendulums and the interactions between pendulums by springs. We derived the Lagrangian of the model system and the nonlinear equations of motion. We found solutions in the homogeneous case, when the fragment considered consists of identical base pairs: Adenine-Thymine (AT pair) or Guanine-Cytosine (GC pair). The trajectories of the model system in the configuration space were also constructed.
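A schematic form of such a pendulum-and-spring Lagrangian, with generic symbols ($I_i$ moments of inertia of the bases, $\varphi_i$ their rotation angles, $k$ the stiffness of the coupling springs, $V_i$ the base-pairing potential; the exact expression used by the authors may differ), is

$$
L = \sum_i \frac{I_i}{2}\,\dot{\varphi}_i^{\,2} - \sum_i \frac{k}{2}\,(\varphi_{i+1}-\varphi_i)^2 - \sum_i V_i(\varphi_i),
\qquad
I_i\,\ddot{\varphi}_i = k\,(\varphi_{i+1} - 2\varphi_i + \varphi_{i-1}) - \frac{\partial V_i}{\partial \varphi_i}.
$$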
- Investigation of time to reach consensus on the work of technical committees on standardization based on regular Markov chains
Computer Research and Modeling, 2015, v. 7, no. 4, pp. 941-950. Views (last year): 5. Citations: 8 (RSCI).
In this paper, a mathematical model of consensus in technical committees for standardization (TC) is constructed, based on the consensus model proposed by DeGroot. The basic problems of achieving consensus in the development of consensus standards are discussed in terms of the proposed model. The results of statistical modeling characterizing the dependence of the time to reach consensus on the number of TC members and their authoritarianism are presented. It is shown that increasing the number of TC experts and their authoritarianism negatively affects the time to reach consensus and increases the fragmentation of the TC.
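The DeGroot dynamics underlying the model are simple: each expert repeatedly replaces their opinion with a weighted average of all current opinions, $x_{t+1} = W x_t$, where $W$ is a row-stochastic trust matrix (a regular Markov chain). A minimal sketch follows; the trust matrix, initial opinions and tolerance are illustrative values, not the paper's experimental settings:

```python
import numpy as np

def degroot_steps_to_consensus(W, x0, tol=1e-3, max_iter=10_000):
    """Iterate x <- W x until all opinions agree to within tol; return step count."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, max_iter + 1):
        x = W @ x
        if x.max() - x.min() < tol:
            return t, x
    return max_iter, x

# Row-stochastic trust matrix for 3 experts; expert 0 is the most "authoritarian"
# (weights their own opinion heavily), which slows convergence.
W = np.array([[0.9, 0.05, 0.05],
              [0.3, 0.5,  0.2 ],
              [0.3, 0.2,  0.5 ]])
x0 = [1.0, 0.0, 0.5]   # initial positions of the experts on a disputed clause
print(degroot_steps_to_consensus(W, x0))
```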
- Estimation of maximal values of biomass growth yield based on the mass-energy balance of cell metabolism
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 723-750. Views (last year): 2.
The biomass growth yield is the ratio of the newly synthesized substance of growing cells to the amount of the consumed substrate, the source of matter and energy for cell growth. The yield is a characteristic of the efficiency of substrate conversion to cell biomass. The conversion is carried out by the cell metabolism, which is a complete aggregate of biochemical reactions occurring in the cells.
This work newly considers the problem of maximal cell growth yield prediction based on balances of the whole metabolism of a living cell and of its fragments, called partial metabolisms (PM). The following PMs are used in the present consideration: i) the standard constructive metabolism (SCM), which consists of identical pathways during the growth of various organisms on any substrate and starts from several standard compounds (nodal metabolites): glucose, acetyl-CoA, 2-oxoglutarate, erythrose-4-phosphate, oxaloacetate, ribose-5-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate; and ii) the full forward metabolism (FM), the remaining part of the whole metabolism. The first one consumes high-energy bonds (HEB) formed by the second one. In this work we examine a generalized variant of the FM, in which the possible presence of extracellular products, as well as the possibilities of both aerobic and anaerobic growth, are taken into account. Instead of separate balances for the formation of each nodal metabolite, as was done in our previous work, this work deals at once with the whole aggregate of these metabolites. This makes the solution more compact, requiring fewer biochemical quantities and substantially less computational time. An equation expressing the maximal biomass yield via the specific amounts of HEB formed and consumed by the partial metabolisms has been derived. It includes the specific HEB consumption by the SCM, which is a universal biochemical parameter applicable to a wide range of organisms and growth substrates. To correctly determine this parameter, the full constructive metabolism and its forward part are considered for the growth of cells on glucose as the most studied substrate. We used the previously found properties of the elemental composition of the lipid and lipid-free fractions of cell biomass. A numerical study of the effect of various interrelations between the flows via different nodal metabolites was carried out. It showed that the requirements of the SCM for high-energy bonds and NAD(P)H are practically constant. The found HEB-to-formed-biomass coefficient is an efficient tool for estimating the maximal biomass yield from substrates for which the primary metabolism is known. The calculation of the ATP-to-substrate ratio necessary for the yield estimation was made using the special computer program package GenMetPath.
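The derived yield equation itself is not reproduced in the abstract. Schematically, a mass-energy balance of this kind equates the high-energy bonds supplied by the forward metabolism with those consumed by the standard constructive metabolism; with hypothetical notation ($a_{FM}$ for HEB formed per unit of consumed substrate, $\sigma_{SCM}$ for HEB consumed per unit of synthesized biomass) such a balance gives

$$
a_{FM}\,\Delta S = \sigma_{SCM}\,\Delta X
\quad\Longrightarrow\quad
Y_{\max} = \frac{\Delta X}{\Delta S} = \frac{a_{FM}}{\sigma_{SCM}},
$$

which is only a conceptual sketch of the balance structure, not the authors' exact equation.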
- An efficient algorithm for ${\mathrm{\LaTeX}}$ documents comparing
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 329-345
The problem considered is constructing the differences that arise when ${\mathrm{\LaTeX}}$ documents are edited. Each document is represented as a parse tree whose nodes are called tokens. The smallest possible text representation of the document that does not change the syntax tree is constructed. All of the text is split into fragments whose boundaries correspond to tokens. A map from the fragment sequence of the initial text to the similar sequence of the edited document, corresponding to the minimum distance, is built with the Hirschberg algorithm. A map of text characters corresponding to the fragment sequence map is then constructed. Tokens whose characters are all deleted, all inserted, or all unchanged are selected in the parse trees. The map for the trees formed by the other tokens is built using the Zhang–Shasha algorithm.
Keywords: automation, editing distance, text analysis, lexeme, machine learning, metric, parse tree, syntax tree, token, ${\mathrm{\LaTeX}}$. Views (last year): 2. Citations: 2 (RSCI).
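To illustrate the fragment-sequence mapping step described in the abstract above: the paper builds the minimum-distance map with Hirschberg's linear-space algorithm; the quadratic-space dynamic program below computes the same optimal alignment on toy fragments and is given only for clarity, not as the authors' implementation:

```python
def align(a, b):
    """Minimal-edit alignment of two token-fragment sequences (quadratic-space DP)."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Backtrack to recover which fragments are kept, replaced, deleted or inserted.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and a[i - 1] == b[j - 1] and d[i][j] == d[i - 1][j - 1]:
            ops.append(("keep", a[i - 1])); i -= 1; j -= 1
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            ops.append(("replace", a[i - 1], b[j - 1])); i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("delete", a[i - 1])); i -= 1
        else:
            ops.append(("insert", b[j - 1])); j -= 1
    return list(reversed(ops))

# Toy fragments standing in for the token-bounded text pieces described above.
old = ["\\section{Intro}", "Text A.", "$x=1$"]
new = ["\\section{Intro}", "Text B.", "$x=1$", "A new paragraph."]
print(align(old, new))
```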
- Numerical modeling of physical processes leading to the destruction of meteoroids in the Earth’s atmosphere
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 835-851
Within the framework of the topical problem of comet-asteroid hazard, the physical processes causing the destruction and fragmentation of meteor bodies in the Earth’s atmosphere are numerically investigated. Based on the developed physical-mathematical models that determine the movement of space objects of natural origin in the atmosphere and their interaction with it, the falls of three of the largest and, by some parameters, unusual bolides in the history of meteoritics are considered: Tunguska, Vitim and Chelyabinsk. Their singularity lies in the absence of any material meteorite remains or craters in the area of the presumed crash site for the first two bodies and, for the third, in the presumed non-detection of the main parent body (the mass of the fallen fragments is too small compared to the estimated mass). The effects of aerodynamic loads and heat flows on these bodies, which lead to intensive surface mass loss and possible mechanical destruction, are studied. The velocities of the studied celestial bodies and the change in their masses are determined from a modernized system of equations of the theory of meteor physics. An important factor taken into account here is the variability of the mass ablation parameter under the action of heat fluxes (radiative and convective) along the flight path. The fragmentation of meteoroids is considered within the framework of a progressive crushing model based on the statistical theory of strength, taking into account the influence of the scale factor on the ultimate strength of objects. The phenomena and effects arising at various kinematic and physical parameters of each of these bodies are revealed, in particular a change in the ballistics of their flight in the denser layers of the atmosphere, consisting in a transition from the descent mode to an ascent mode. In this case the following scenarios can be realized: 1) the return of the body to outer space if its residual velocity exceeds the second cosmic (escape) velocity; 2) the transition of the body to an Earth-satellite orbit if the residual velocity exceeds the first cosmic velocity; 3) at lower residual velocities, the return of the body after some time to the descent mode and its fall at a considerable distance from the expected crash site. It is the realization of one of these three scenarios that explains, for example, the absence of material traces, including craters, in the vicinity of the forest collapse in the case of the Tunguska bolide. Assumptions about the possibility of such scenarios have been made earlier by other authors, and in this paper their realization is confirmed by the results of numerical calculations.
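The "system of equations of the theory of meteor physics" mentioned above builds on the classical single-body equations; in a standard textbook form (not the authors' exact modernized system, which additionally varies the ablation and heat-transfer parameters along the trajectory) the drag and ablation equations read

$$
M\,\frac{dV}{dt} = -\frac{1}{2}\,C_D\,\rho_a S V^2 + M g \sin\theta,
\qquad
\frac{dM}{dt} = -\frac{C_H\,\rho_a S V^3}{2 Q^{*}},
$$

where $C_D$ and $C_H$ are the drag and heat-transfer coefficients, $\rho_a$ is the atmospheric density, $S$ the midsection area, $\theta$ the inclination of the trajectory to the horizon, and $Q^{*}$ the effective heat of ablation.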