-
Numerical solution to a two-dimensional nonlinear heat equation using radial basis functions
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 9-22
The paper presents a numerical solution to the heat wave motion problem for a degenerate second-order nonlinear parabolic equation with a source term. The nonlinearity is conditioned by the power-law dependence of the heat conduction coefficient on temperature. The problem is considered for the case of two spatial variables, with the boundary condition specifying the heat wave motion law. A new solution algorithm based on an expansion in radial basis functions and the boundary element method is proposed. The solution is constructed stepwise in time with finite-difference time approximation. At each time step, a boundary value problem for the Poisson equation corresponding to the original equation at a fixed time is solved. The solution to this problem is constructed iteratively as the sum of a particular solution to the nonhomogeneous equation and a solution to the corresponding homogeneous equation satisfying the boundary conditions. The homogeneous equation is solved by the boundary element method. The particular solution is sought by the collocation method, using an expansion of the inhomogeneity in radial basis functions. The calculation algorithm is optimized by parallelizing the computations. The algorithm is implemented as a program written in C++. The parallel computations are organized using the OpenCL standard, which allows the same parallel code to run either on multi-core CPUs or on graphics processors (GPUs). Test cases are solved to evaluate the effectiveness of the proposed solution method and the correctness of the developed computational technique. The calculation results are compared with known exact solutions, as well as with the results we obtained earlier. The accuracy of the solutions and the calculation time are estimated. The effectiveness of various systems of radial basis functions for solving the problems under study is analyzed, and the most suitable system of functions is selected. A comprehensive computational experiment shows that the proposed algorithm is more accurate than the previously developed one.
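As an illustration of the collocation step, here is a minimal sketch (our own, not the authors' code) of expanding the inhomogeneity in radial basis functions: the weights solve a dense collocation system built from pairwise distances between centers. Multiquadric RBFs and the shape parameter `c` are assumptions made for the example.

```python
import numpy as np

def multiquadric(r, c=1.0):
    # Multiquadric RBF: phi(r) = sqrt(r^2 + c^2); c is an assumed shape parameter.
    return np.sqrt(r**2 + c**2)

def rbf_collocation_weights(centers, f_values, c=1.0):
    # Collocation: solve A w = f with A_ij = phi(|x_i - x_j|), so that
    # sum_j w_j * phi(|x - x_j|) interpolates the inhomogeneity f
    # at the collocation points (here, the centers themselves).
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = multiquadric(r, c)
    return np.linalg.solve(A, f_values)
```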
-
Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 213-223
In this paper, we consider conditional gradient methods for optimizing strongly convex functions. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem
\[ \text{Argmin}_{x\in X}{\langle p,\,x \rangle}. \]
There are a variety of conditional gradient methods that have a linear convergence rate in the strongly convex case. However, in all these methods, the dimension of the problem enters the convergence rate, and in modern applications it can be very large. In this paper, we prove that in the strongly convex case, the convergence rate of conditional gradient methods depends, in the best case, on the dimension of the problem $n$ as $\widetilde{\Omega}\left(\!\sqrt{n}\right)$. Thus, conditional gradient methods may turn out to be ineffective for solving strongly convex optimization problems of large dimension.
The application of conditional gradient methods to minimization of a quadratic form is also considered. The effectiveness of the Frank–Wolfe method for solving the quadratic optimization problem in the convex case on a simplex (PageRank) has already been proved. This work shows that using conditional gradient methods to minimize a quadratic form in the strongly convex case is ineffective due to the presence of the dimension in the convergence rate of these methods. Therefore, the Shrinking Conditional Gradient method is considered. It differs from conditional gradient methods in that it uses a modified linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem \[ \text{Argmin}\{\langle p, \,x \rangle\colon x\in X, \;\|x-x_0^{}\| \leqslant R \}. \] The convergence rate of such an algorithm does not depend on the dimension. Using the Shrinking Conditional Gradient method, the complexity (the total number of arithmetic operations) of minimizing a quadratic form on an $\ell_\infty$-ball is obtained. The resulting estimate is comparable to the complexity of the gradient method.
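For reference, a minimal sketch of the classical conditional gradient (Frank–Wolfe) scheme with the linear minimization oracle described above; the simplex oracle and the step rule $2/(k+2)$ are standard textbook choices, not taken from the paper.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=1000):
    # Conditional gradient: each step calls the linear minimization oracle
    # lmo(p) = argmin_{x in X} <p, x> and moves toward its answer.
    x = x0.copy()
    for k in range(n_iters):
        s = lmo(grad(x))
        x += 2.0 / (k + 2) * (s - x)  # standard open-loop step size
    return x

# Toy usage: minimize 0.5 * <x, Ax> over the standard simplex (PageRank-type
# setup); the simplex LMO returns the vertex with the smallest gradient entry.
n = 5
A = np.diag(np.arange(1.0, n + 1))
x_star = frank_wolfe(lambda x: A @ x, lambda p: np.eye(n)[np.argmin(p)],
                     np.ones(n) / n)
```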
Keywords: Frank–Wolfe method, Shrinking Conditional Gradient.
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238
Multi-label classification models arise in various areas of life, which is explained by the growing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at the first stage, a ranking function is built for each class, ordering all objects in some way; at the second stage, optimal thresholds are selected, so that objects on one side of a threshold are assigned to the current class and objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article are devoted to the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In extreme multi-label classification problems, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to finding a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins; in particular, the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity in the behavior of both algorithms was found for problems whose $\boldsymbol V$ domain contains long linear boundary segments: when the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease with an increase in the number of classes. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of $\boldsymbol V$ domain analysis determines its polar radius.
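To see why the $F$-measure of average precision and recall couples the thresholds, consider this toy computation (ours, not the paper's $\boldsymbol V$-based algorithms): changing any single class threshold moves the averaged $P$ and $R$, and hence the common value of $F$.

```python
import numpy as np

def average_pr(scores, labels, thresholds):
    # Average precision P and recall R over classes for a threshold vector;
    # scores and labels have shape (n_objects, n_classes), labels are 0/1.
    preds = scores >= thresholds
    tp = (preds & (labels == 1)).sum(axis=0)
    P = np.mean(tp / np.maximum(preds.sum(axis=0), 1))
    R = np.mean(tp / np.maximum(labels.sum(axis=0), 1))
    return P, R

def f_measure(P, R, beta=1.0):
    # F-measure of the *averaged* P and R: not separable over classes,
    # so per-class thresholds cannot be optimized independently.
    return (1 + beta**2) * P * R / (beta**2 * P + R + 1e-12)
```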
-
Synthesis of the structure of organised systems as central problem of evolutionary cybernetics
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124
The article discusses approaches to evolutionary modelling of the synthesis of organised systems and analyses methodological problems of evolutionary computations of this kind. Based on an analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for machine synthesis of problem-solving algorithms. The modelling results lead us to conclude that a number of conditions fundamentally limit the applicability of genetic programming methods to the synthesis of functional structures. The main limitations are the need for the fitness function to track the step-by-step approach to the solution of the problem and the inapplicability of this approach to the synthesis of hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling over the whole time of its existence confirm the conclusion that the possibilities of genetic programming for synthesising the structure of organised systems are fundamentally limited. As sources of fundamental difficulties for machine synthesis of system structures, the article points out the absence of a gradient-descent direction in structural synthesis and the absence of regularity in the random appearance of new organised structures. The problems considered are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of practically possible ways of synthesising the structure of organised systems. As a theoretical interpretation of the problem discussed, we propose to consider the system-evolutionary concept of P. K. Anokhin. The synthesis of functional structures in this context is an adaptive response of organisms to external conditions, based on their capacity for integrative synthesis of memory, needs and information about current conditions. The results of recent studies support this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.
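The first limitation can be seen in the skeleton of any genetic-programming-style search (a generic sketch, not the authors' virtual computing environment): selection has nothing to act on unless the fitness function rewards every intermediate step toward a working structure.

```python
import random

def evolutionary_search(fitness, mutate, init, pop_size=50, generations=1000):
    # Bare-bones evolutionary loop: keep the fitter half, refill by mutation.
    # If fitness stays flat until a complete structure appears, selection
    # degenerates into blind random search.
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)
```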
-
The iterations’ number estimation for strongly polynomial linear programming algorithms
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 249-285
A direct algorithm for solving a linear programming (LP) problem given in canonical form is considered. The algorithm consists of two successive stages, in each of which an LP problem is solved by a direct method: a non-degenerate auxiliary problem at the first stage, and a problem equivalent to the original one at the second. The construction of the auxiliary problem is based on a multiplicative version of Gaussian elimination, whose very structure provides the means to: detect inconsistency and linear dependence of constraints; identify variables whose optimal values are obviously zero; and actually eliminate primal variables, reducing the dimension of the space in which the solution of the original problem is determined. In the process of eliminating variables, the algorithm generates a sequence of multipliers whose main rows form the constraint matrix of the auxiliary problem, and the possibility of minimizing the fill-in of the main rows of the multipliers is inherent in the very structure of direct methods. At the same time, there is no need to transfer information (basis, plan, and optimal value of the objective function) to the second stage of the algorithm, nor to apply anti-cycling rules to guarantee finite convergence.
Two variants of the algorithm for solving the auxiliary problem in conjugate canonical form are presented. The first is based on solving it by a direct algorithm in terms of the simplex method, and the second on solving the dual problem by the simplex method. It is shown that, for the same inputs, both variants generate the same sequence of points: the basic solution and the current dual solution (the vector of row estimates). Hence, it is concluded that the direct algorithm is an algorithm of the simplex-method type. A comparison of the numerical schemes also shows that the direct algorithm reduces, by a cubic law, the number of arithmetic operations necessary to solve the auxiliary problem compared with the simplex method. An estimate of the number of iterations is given.
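For readers who want a reference solution to compare against, a canonical-form LP can be solved with an off-the-shelf simplex/interior-point implementation; this snippet (with made-up data) is only a baseline, not the paper's direct two-stage algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Canonical form: min c^T x  subject to  A x = b, x >= 0.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([1.0, 0.2])
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)  # optimal plan and objective value
```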
-
Modeling time series trajectories using the Liouville equation
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 585-598
This paper presents an algorithm for modeling a set of trajectories of a non-stationary time series, based on a numerical scheme for approximating the sample density of the distribution function in a problem with fixed ends, in which the initial distribution is transformed into a given final distribution over a fixed number of steps, so that at each step the semigroup property of the solution of the Liouville equation is satisfied. The model makes it possible to numerically construct the evolving densities of the distribution functions during random switching of the states of the system generating the original time series.
The main difficulty is that a numerical implementation of the left (backward) difference approximation of the time derivative makes the solution unstable, yet it is this approach that corresponds to modeling evolution. Implicit stable schemes that "look into the future" could be used instead, but this violates the semigroup property at each step. If, on the other hand, some real process is being modeled in which goal-setting presumably takes place, then it is desirable to use schemes that generate a model of the transition process. Such a model can then be used to build a predictor of the disorder (change point), which makes it possible to determine exactly which state the process under study is moving into before the process has actually entered it. The model described in the article can be used as a tool for modeling real non-stationary time series.
The steps of the modeling scheme are as follows. Fragments corresponding to certain states are selected from a given time series, for example, trends with specified slope angles and variances. Reference distributions of the states are compiled from these fragments. Then the empirical distributions of the duration of the system's stay in the specified states and of the duration of transitions from state to state are determined. In accordance with these empirical distributions, a probabilistic model of the disorder is constructed and the corresponding trajectories of the time series are simulated.
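A toy version of the last step might look as follows (all slopes, noise levels, and geometric durations are invented for illustration; the paper drives the switching by empirical distributions and the Liouville-equation densities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state trend model: per-state slope, noise level, and
# mean state duration (geometric durations stand in for the empirical
# duration distributions).
slopes, sigmas, mean_dur = [0.5, -0.3], [0.2, 0.1], [50, 30]

def simulate(n_steps):
    x, state, series = 0.0, 0, []
    t_left = rng.geometric(1.0 / mean_dur[state])
    for _ in range(n_steps):
        if t_left <= 0:  # disorder: switch to the other state
            state = 1 - state
            t_left = rng.geometric(1.0 / mean_dur[state])
        x += slopes[state] + sigmas[state] * rng.normal()
        series.append(x)
        t_left -= 1
    return np.array(series)

trajectory = simulate(1000)
```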
-
Computational treatment of natural language text for intent detection
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554
Intent detection, the process of algorithmically determining user intent from a given statement, plays a crucial role in task-oriented conversational systems. To understand the user's goal, the system relies on its intent detector to classify the user's utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent text is typically characterized by short, general sentences and colloquial expressions. The goal of this study is to develop an intent detection model that accurately classifies and detects user intent. The model computes similarity scores for the three approaches used in order to compare them. The proposed model uses Contextual Semantic Search (CSS) capabilities for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and the combination of LDA and BERT for text classification and detection. The dataset is the Broad Twitter Corpus (BTC) and comprises various metadata. A pre-processing step was applied to prepare the data for analysis. A sample of 1432 instances was selected out of the 5000 available because manual annotation is required and could be time-consuming. To compare the performance of the model with existing models, the similarity scores, precision, recall, F1 score, and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT an accuracy of 93.84%, and LDA an accuracy of 92.23%. This shows that LDA-BERT performs better than the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied with the model.
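One plausible way to realize the LDA + BERT combination is to concatenate topic proportions with sentence embeddings and train a classifier on top; the embedding model, classifier, and toy data below are our assumptions, not necessarily the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

texts = ["book a flight to lagos", "what is the weather today",
         "cancel my reservation", "will it rain tomorrow"]
labels = [0, 1, 0, 1]  # toy intent ids

# LDA topic proportions from a bag-of-words representation.
bow = CountVectorizer().fit_transform(texts)
lda_feats = LatentDirichletAllocation(n_components=2,
                                      random_state=0).fit_transform(bow)

# BERT-style sentence embeddings (model choice is an assumption).
bert_feats = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

# LDA-BERT: concatenate the two feature views, then train a classifier.
features = np.hstack([lda_feats, bert_feats])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```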
-
A surrogate neural network method for restoring the flow field from a homogeneous field by iterations in calculations of steady turbulent flows
Computer Research and Modeling, 2025, v. 17, no. 2, pp. 179-197
In recent years, the use of neural network models for solving aerodynamics problems has become widespread. These models, trained on a set of previously obtained solutions, predict solutions to new problems; they are, in essence, interpolation algorithms. An alternative approach is to construct a neural network operator: a neural network that reproduces a numerical method used to solve a problem and allows one to find the solution iteratively. The paper considers the construction of such an operator using the UNet neural network with a spatial attention mechanism. It solves flow problems on a rectangular uniform grid that is common to the streamlined body and the flow field. A correction mechanism is proposed to refine the obtained solution. The stability of such an algorithm for solving a stationary problem is analyzed, and a comparison is made with other variants of its construction, including the pushforward trick and positional encoding. The question of selecting a set of iterations to form the training dataset is considered, and the behavior of the solution under repeated application of the neural network operator is assessed.
The method is demonstrated on the turbulent flow around a rounded plate, with various rounding options, for fixed parameters of the incoming flow: Reynolds number $\text{Re} = 10^5$ and Mach number $M = 0.15$. Since flows with these parameters can be considered incompressible, only the velocity components are studied directly. The neural network model used to construct the operator has a common decoder for both velocity components. The flow fields and velocity profiles along the normal and the outline of the body, obtained with the neural network operator and with numerical methods, are compared; the analysis is performed both on the plate and on the rounding. The simulation results confirm that the neural network operator finds the solution with high accuracy and stability.
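The fixed-point use of such an operator is simple to state: starting from the uniform free-stream field, the trained network is applied repeatedly until the update stagnates. A schematic loop (with `model` standing in for the trained UNet operator) might be:

```python
import torch

@torch.no_grad()
def iterate_operator(model, u0, n_iters=200, tol=1e-5):
    # Fixed-point iteration with a neural network operator: u_{k+1} = N(u_k).
    # u0 is the uniform free-stream field, e.g. of shape (1, 2, H, W)
    # for the two velocity components on the grid.
    u = u0
    for _ in range(n_iters):
        u_next = model(u)
        if torch.norm(u_next - u) / torch.norm(u) < tol:
            return u_next  # converged: the update stagnated
        u = u_next
    return u
```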
-
The adaptive Gaussian receptive fields for spiking encoding of numeric variables
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 389-400
The conversion of numeric data to spiking form, and the information losses in this process, are serious problems limiting the use of spiking neural networks in applied information systems. While physical values are represented by numbers, the internal representation of information inside spiking neural networks is based on spikes, elementary objects emitted and processed by neurons. The problem is especially hard in reinforcement learning applications, where an agent should learn to behave in the dynamic real world: besides the accuracy of the encoding method, its dynamic characteristics must be considered as well. An encoding algorithm based on Gaussian receptive fields (GRF) is frequently used. In this method, one numeric variable fed to the network is represented by spike streams emitted by a certain set of network input nodes. The spike frequency in each stream is determined by the proximity of the current variable value to the center of the receptive field corresponding to the given input node. In the standard GRF algorithm, the receptive field centers are placed equidistantly; however, this is inefficient when the distribution of the encoded variable is very uneven. In the present paper, an improved version of this method is proposed, based on adaptive selection of the Gaussian centers and spike stream frequencies. This improved GRF algorithm is compared with its standard version in terms of the amount of information lost in the coding process and of the accuracy of classification models built on spike-encoded data. The fraction of information retained by the standard and adaptive GRF encodings is estimated using direct and reverse encoding procedures applied to a large sample from a triangular probability distribution and counting coinciding bits in the original and restored samples. The comparison based on classification was performed on a task of evaluating the current state in reinforcement learning. For this purpose, classification models were created by machine learning algorithms of very different nature: the nearest neighbors algorithm, random forest, and the multi-layer perceptron. The superiority of our approach is demonstrated in all these tests.
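A compact sketch of the two encoder variants (our reading of the idea; the paper's adaptive rule for stream frequencies may differ): the standard GRF places centers equidistantly, while the adaptive one places them at quantiles of a data sample, so densely populated regions of the distribution get more receptive fields.

```python
import numpy as np

def grf_rates(x, centers, sigma, max_rate=100.0):
    # Firing rate of each input node: a Gaussian of the distance between
    # the encoded value x and the node's receptive-field center.
    return max_rate * np.exp(-((x - centers) ** 2) / (2 * sigma**2))

n_fields = 10
# Standard GRF: equidistant centers over the variable's range.
uniform_centers = np.linspace(0.0, 1.0, n_fields)

# Adaptive GRF: centers at quantiles of a sample from the (here triangular)
# distribution of the encoded variable.
sample = np.random.default_rng(0).triangular(0.0, 0.2, 1.0, size=10_000)
adaptive_centers = np.quantile(sample, np.linspace(0.05, 0.95, n_fields))

rates = grf_rates(0.3, adaptive_centers, sigma=0.05)
```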
-
Denoising fluorescent imaging data with two-step truncated HOSVD
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 529-542
Fluorescent imaging data are currently widely used in neuroscience and other fields. Genetically encoded sensors based on fluorescent proteins provide a wide inventory enabling scientists to image virtually any process in a living cell and in the extracellular environment. However, especially due to the need for fast scanning, miniaturization, etc., the imaging data can be severely corrupted with multiplicative heteroscedastic noise, reflecting the stochastic nature of photon emission and photomultiplier detectors. Deep learning architectures demonstrate outstanding performance in image segmentation and denoising; however, they can require large clean datasets for training, and the actual data transformation is not evident from the network architecture and weight composition. On the other hand, some classical data transforms can provide similar performance combined with a clearer insight into why and how they work. Here we propose an algorithm for denoising fluorescent dynamical imaging data, based on the multilinear higher-order singular value decomposition (HOSVD) with optional truncation of the rank along each axis and thresholding of the tensor of decomposition coefficients. In parallel, we propose a convenient paradigm for validating the algorithm's performance, based on simulated fluorescent data resulting from biophysical modeling of calcium dynamics in spatially resolved, realistic 3D astrocyte templates. This paradigm is convenient in that it allows one to vary the noise level and its resemblance to Gaussian noise, and it provides the ground-truth fluorescent signal against which denoising algorithms can be validated. The proposed denoising method employs truncated HOSVD twice: first, narrow 3D patches spanning the whole recording are processed (the local 3D-HOSVD stage); second, 4D groups of 3D patches are processed collaboratively (the non-local 4D-HOSVD stage). The effect of the first pass is twofold: a significant part of the noise is removed, and the noise distribution becomes more Gaussian-like due to the linear combination of multiple samples in the singular vectors. The effect of the second stage is to further improve the SNR. We tune the parameters of the second stage to find the optimal parameter combination for denoising.
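The core transform can be written in a few lines of NumPy; this is a generic truncated HOSVD (Tucker) sketch for one tensor, without the paper's patch grouping, coefficient thresholding, or two-pass pipeline.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers of T become columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each unfolding;
    # core tensor: T contracted with the transposed factors.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def reconstruct(core, factors):
    # Multiply the core back by the factors to get the denoised tensor.
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)
    return T

# Usage: denoised = reconstruct(*truncated_hosvd(movie, ranks=(20, 20, 20)))
```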




