-
On some stochastic mirror descent methods for constrained online optimization problems
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217
The problem of online convex optimization naturally arises when statistical information is updated over time. The mirror descent method is well known for non-smooth optimization problems: it extends the subgradient method for solving non-smooth convex optimization problems to the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed mirror descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that the value of the functional constraint can still be used, but instead of the (sub)gradients of the objective functional and of the constraint functional, their stochastic (sub)gradients are used. More precisely, assume that $N$ convex Lipschitz non-smooth functionals are given on a closed subset of an $n$-dimensional vector space. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant neither for the objective functional nor for the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constants of both). Note that the stochastic (sub)gradient of each functional is allowed to be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. Results of numerical experiments are presented that compare the adaptive and non-adaptive methods on several examples. It is shown that the adaptive method can significantly increase the number of solutions found.
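To make the productive/non-productive step logic concrete, here is a minimal sketch of an adaptive scheme of this kind (an illustration under simplifying assumptions, not the authors' implementation): the Euclidean proximal setup is assumed, exact constraint values are available, and the step size is chosen adaptively from the current stochastic (sub)gradient, so no Lipschitz constants are needed.

```python
import numpy as np

def adaptive_md(sgrad_f, sgrad_g, g, x0, eps, n_steps, project):
    """Sketch of adaptive mirror descent with a functional constraint g(x) <= 0.

    sgrad_f, sgrad_g -- stochastic (sub)gradient oracles of the objective and
    the constraint; g -- exact constraint value oracle; project -- Euclidean
    projection onto the base set (the paper allows an arbitrary proximal
    structure; the Euclidean one is assumed here for brevity).
    """
    x = x0.copy()
    productive = []                          # iterates from productive steps
    for _ in range(n_steps):
        if g(x) <= eps:                      # productive step: use the objective
            v = sgrad_f(x)
            productive.append(x.copy())
        else:                                # non-productive step: use the constraint
            v = sgrad_g(x)
        h = eps / max(float(v @ v), 1e-12)   # adaptive step, no Lipschitz constants
        x = project(x - h * v)
    return np.mean(productive, axis=0) if productive else x
```

The answer is formed as the average of the productive iterates; with a non-Euclidean proximal structure, the projection step would be replaced by the corresponding mirror (prox) step.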
-
Experimental study of the dynamics of single and lattice-coupled complex-valued mappings: architecture and interface of the author's modeling software
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1101-1124
The paper describes free software for research in the field of holomorphic dynamics based on the computational capabilities of the MATLAB environment. The software allows constructing not only single complex-valued mappings, but also collectives of them, linearly coupled on a square or hexagonal lattice. In the first case, analogs of the Julia set (in the form of escaping points with color indication of the escape velocity), of the Fatou set (with chaotic dynamics highlighted), and of the Mandelbrot set generated by one of two free parameters are constructed. In the second case, the dynamics of a cellular automaton with complex-valued cell states and complex-valued coefficients in the local transition function is considered. The abstract nature of object-oriented programming makes it possible to combine both types of calculations within a single program that describes the iterated dynamics of one object.
The presented software provides a set of options for the field shape, initial conditions, neighborhood template, and neighborhood features of boundary cells. The mapping to be iterated can be specified by a regular expression for the MATLAB interpreter. The paper provides some UML diagrams, a short introduction to the user interface, and several examples.
The following cases are considered as illustrative examples containing new scientific results:
1) a rational mapping of the form $Az^{n} + B/z^{n}$, for which the cases $n = 2$, $n = 4$, and $n > 1$ are known. In the portrait of the Fatou set, attention is drawn to the figures of <> characteristic of the classical quadratic mapping, showing short-period regimes, components of conditionally chaotic dynamics in the sea;
2) for the Mandelbrot set with a non-standard position of the parameter, in the exponent, $z(t+1) \Leftarrow z(t)^{\mu}$, sketch calculations reveal jagged structures and point clouds resembling Cantor dust, which are not the Cantor bouquets characteristic of the exponential mapping. Further detailing of these objects with complex topology is required.
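As a rough illustration of how escape-time portraits of this kind are computed (outside the described MATLAB software), here is a minimal Python sketch for the mapping $z \mapsto Az^{n} + B/z^{n}$; grid size, escape radius, and parameter values are arbitrary placeholders:

```python
import numpy as np

def escape_portrait(A, B, n, grid=600, extent=2.0, r_escape=1e3, max_iter=100):
    """Escape-time portrait for z -> A*z**n + B/z**n (illustrative parameters)."""
    xs = np.linspace(-extent, extent, grid)
    z = xs[None, :] + 1j * xs[:, None]       # grid of initial points
    counts = np.full(z.shape, max_iter)      # max_iter marks "did not escape"
    alive = np.abs(z) > 0                    # exclude z = 0, a pole of the map
    for k in range(max_iter):
        z[alive] = A * z[alive]**n + B / z[alive]**n
        escaped = alive & (np.abs(z) > r_escape)
        counts[escaped] = k                  # color index = escape velocity
        alive &= ~escaped
    return counts                            # e.g. matplotlib.pyplot.imshow(counts)
```

Points that never exceed the escape radius (the analog of the filled Julia set) keep the maximal color index.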
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238
Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at its first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage optimal thresholds are selected, so that objects on one side of the threshold are assigned to the class in question and objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach: the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In extreme multi-label classification problems the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of finding a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity of both algorithms was found for problems in which the domain of $\boldsymbol V$ contains large linear boundaries. When the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of $\boldsymbol V$ domain analysis accurately determines the polar radius.
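To convey the linearization idea schematically (a sketch of the general principle, not the authors' exact algorithm): at the current averaged point $(P, R)$ the $F$-measure is replaced by its tangent plane $aP + bR$, which makes the per-class threshold choices independent; the procedure is then iterated toward a fixed point.

```python
import numpy as np

def macro_PR(scores, labels, thr):
    """Macro-averaged precision and recall for a threshold vector thr."""
    pred = scores >= thr
    tp = (pred & (labels == 1)).sum(axis=0)
    P = tp / np.maximum(pred.sum(axis=0), 1)
    R = tp / np.maximum((labels == 1).sum(axis=0), 1)
    return P.mean(), R.mean()

def class_objective(s, y, t, a, b):
    """Linearized contribution a*P_k + b*R_k of one class at threshold t."""
    pred = s >= t
    tp = (pred & (y == 1)).sum()
    P_k = tp / max(pred.sum(), 1)
    R_k = tp / max((y == 1).sum(), 1)
    return a * P_k + b * R_k

def linearized_thresholds(scores, labels, beta=1.0, n_iter=10):
    """Fixed-point iteration: linearize F at the current (P, R), then pick
    each class threshold independently by a 1D search over candidate scores."""
    n, K = scores.shape
    thr = np.full(K, 0.5)
    for _ in range(n_iter):
        P, R = macro_PR(scores, labels, thr)
        denom = (beta**2 * P + R)**2 + 1e-12
        a = (1 + beta**2) * R**2 / denom            # dF/dP at (P, R)
        b = (1 + beta**2) * beta**2 * P**2 / denom  # dF/dR at (P, R)
        for k in range(K):
            cand = np.unique(scores[:, k])
            thr[k] = max(cand, key=lambda t, k=k: class_objective(
                scores[:, k], labels[:, k], t, a, b))
    return thr
```

The coupling between classes enters only through the scalars $a$ and $b$, which is what makes the per-class searches independent within each iteration.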
-
Optimization of geometric analysis strategy in CAD-systems
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 825-840
Computer-aided assembly planning for complex products is an important engineering and scientific problem. The assembly sequence and the content of assembly operations largely depend on the mechanical structure and geometric properties of a product. An overview of the geometric modeling methods used in modern computer-aided design systems is provided. Modeling geometric obstacles in assembly using collision detection, motion planning, and virtual reality is very computationally intensive, while combinatorial methods provide only weak necessary conditions for geometric reasoning. The important problem of minimizing the number of geometric tests during the synthesis of assembly operations and processes is considered. The formalization of this problem is based on a hypergraph model of the mechanical structure of the product, which provides a correct mathematical description of coherent and sequential assembly operations. The key concept of a geometric situation is introduced: a configuration of product parts that requires analysis for freedom from obstacles and for which this analysis gives interpretable results. A mathematical description of geometric heredity during the assembly of complex products is proposed. Two axioms of heredity allow us to extend the result of testing one geometric situation to many other situations. The problem of minimizing the number of geometric tests is posed as a non-antagonistic game between the decision maker and nature, in which the vertices of an ordered set must be colored in two colors. The vertices represent geometric situations, and the color is a metaphor for the result of a collision-free test. The decision maker's move is to select an uncolored vertex; nature's answer is its color. The game requires the decision maker to color the ordered set in a minimum number of moves. The project situation in which the decision maker acts under risk is discussed. A method for calculating the probabilities of colorings of the vertices of an ordered set is proposed. The basic pure strategies of rational behavior in this game are described. An original synthetic criterion for making rational decisions under risk is developed. Two heuristics are proposed that can be used to color ordered sets of high cardinality and complex structure.
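A minimal sketch of how heredity can propagate a single test result over the ordered set of geometric situations (the direction conventions here are assumptions for illustration; the paper's axioms are not reproduced verbatim): a collision-free verdict is inherited by smaller situations, an obstructed verdict by larger ones.

```python
def propagate(successors, colored, v, verdict):
    """Extend one geometric test over the ordered set of situations.

    successors: dict mapping each situation to its immediate successors
    (v < w). Assumed convention: a 'free' verdict is inherited downward
    (by all smaller situations), a 'blocked' verdict upward.
    """
    verts = set(successors) | {w for ws in successors.values() for w in ws}
    predecessors = {u: [] for u in verts}
    for u, ws in successors.items():
        for w in ws:
            predecessors[w].append(u)
    colored[v] = verdict
    stack = [v]
    while stack:
        u = stack.pop()
        nbrs = predecessors[u] if verdict == 'free' else successors.get(u, [])
        for w in nbrs:
            if w not in colored:            # color implied by heredity, no test needed
                colored[w] = verdict
                stack.append(w)
    return colored
```

A greedy strategy for the decision maker would then pick the uncolored vertex whose expected number of newly implied colors is largest.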
-
Computational treatment of natural language text for intent detection
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554
Intent detection, the process of algorithmically determining user intent from a given statement, plays a crucial role in task-oriented conversational systems. To understand the user's goal, the system relies on its intent detector to classify the user's utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent text is typically characterized by short, general sentences and colloquial expressions. The goal of this study is to develop an intent detection model that accurately classifies and detects user intent. The model computes similarity scores for the three constituent models in order to compare them. The proposed model uses Contextual Semantic Search (CSS) capabilities for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and the combination of LDA and BERT for text classification and detection. The dataset is the Broad Twitter Corpus (BTC) and comprises various metadata. A pre-processing step was applied to prepare the data for analysis. A sample of 1432 instances was selected out of the 5000 available because manual annotation is required and could be time-consuming. To compare the performance of the model with existing models, similarity scores, precision, recall, F1 score, and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT an accuracy of 93.84%, and LDA an accuracy of 92.23%, showing that LDA-BERT performs better than the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied with the model.
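A minimal sketch of the LDA-plus-encoder fusion described above (the exact fusion and BERT variant used by the authors are not specified here, so simple concatenation and a stand-in sentence encoder are assumed; 'all-MiniLM-L6-v2' is a placeholder model):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

def lda_bert_features(texts, n_topics=20):
    """Concatenate LDA topic proportions with transformer sentence embeddings."""
    counts = CountVectorizer(stop_words='english').fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_vecs = lda.fit_transform(counts)                      # (n, n_topics)
    emb = SentenceTransformer('all-MiniLM-L6-v2').encode(texts)  # (n, emb_dim)
    return np.hstack([topic_vecs, emb])                         # fused representation

# Illustrative use: train a simple intent classifier on the fused features.
# clf = LogisticRegression(max_iter=1000).fit(lda_bert_features(train_texts), train_intents)
```

The intuition behind the fusion is that LDA contributes coarse topical structure while the encoder contributes contextual semantics of short, colloquial utterances.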
-
On the identification of the tip vortex core
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 9-27
An overview is given of identification criteria for tip vortices trailing from the lifting surfaces of aircraft. The $Q$-distribution is used as the main vortex identification method in this work. According to the definition of the $Q$-criterion, the vortex core is bounded by a surface on which the norm of the vorticity tensor is equal to the norm of the strain-rate tensor. Moreover, the following conditions are satisfied inside the vortex core: (i) net (non-zero) vorticity tensor; (ii) the geometry of the identified vortex core should be Galilean invariant. Based on the existing analytical vortex models, the vortex center of a two-dimensional vortex is defined as a point where the $Q$-distribution reaches a maximum value that is much greater than the norm of the strain-rate tensor (for an axisymmetric 2D vortex, the norm of the strain-rate tensor tends to zero at the vortex center). Since the existence of the vortex axis is discussed by various authors and seems to be a fairly natural requirement in the analysis of vortices, the above-mentioned conditions (i), (ii) can be supplemented with a third condition (iii): the vortex core in a three-dimensional flow must contain a vortex axis. Flows having axisymmetric or non-axisymmetric (in particular, elliptic) vortex cores in 2D cross-sections are analyzed. It is shown that in such cases the $Q$-distribution can be used not only to obtain the boundary of the vortex core, but also to determine the axis of the vortex. These concepts are illustrated using numerical simulation results for the flow field of a finite-span wing, obtained using the Reynolds-averaged Navier – Stokes (RANS) equations with a $k$-$\omega$ turbulence model.
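For reference, the $Q$ field can be computed directly from the velocity gradient as $Q = \frac{1}{2}\left(\|\Omega\|^{2} - \|S\|^{2}\right)$, where $S$ and $\Omega$ are the symmetric and antisymmetric parts of $\nabla u$. A minimal finite-difference sketch on a uniform grid (illustrative only, unrelated to the paper's CFD solver):

```python
import numpy as np

def q_criterion(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """Q-criterion field from velocity components on a uniform 3D grid.

    Q = 0.5*(||Omega||^2 - ||S||^2); Q > 0 marks the vortex core.
    """
    grads = [np.gradient(f, dx, dy, dz) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S = 0.5 * (grads[i][j] + grads[j][i])   # strain-rate tensor component
            O = 0.5 * (grads[i][j] - grads[j][i])   # vorticity tensor component
            Q += 0.5 * (O**2 - S**2)
    return Q
```

The zero isosurface of this field is exactly the boundary described above, on which the two tensor norms are equal.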
-
Denoising fluorescent imaging data with two-step truncated HOSVD
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 529-542
Fluorescent imaging data are currently widely used in neuroscience and other fields. Genetically encoded sensors based on fluorescent proteins provide a wide inventory enabling scientists to image virtually any process in a living cell and the extracellular environment. However, especially due to the need for fast scanning, miniaturization, etc., the imaging data can be severely corrupted by multiplicative heteroscedastic noise, reflecting the stochastic nature of photon emission and photomultiplier detectors. Deep learning architectures demonstrate outstanding performance in image segmentation and denoising; however, they can require large clean datasets for training, and the actual data transformation is not evident from the network architecture and weight composition. On the other hand, some classical data transforms can provide similar performance combined with a clearer insight into why and how they work. Here we propose an algorithm for denoising fluorescent dynamical imaging data, based on multilinear higher-order singular value decomposition (HOSVD) with optional truncation in rank along each axis and thresholding of the tensor of decomposition coefficients. In parallel, we propose a convenient paradigm for validating the algorithm's performance, based on simulated fluorescent data resulting from biophysical modeling of calcium dynamics in spatially resolved, realistic 3D astrocyte templates. This paradigm is convenient in that it allows one to vary the noise level and its resemblance to Gaussian noise, and it provides a ground-truth fluorescent signal against which denoising algorithms can be validated. The proposed denoising method employs truncated HOSVD twice: first, narrow 3D patches spanning the whole recording are processed (local 3D-HOSVD stage); second, 4D groups of 3D patches are processed collaboratively (non-local 4D-HOSVD stage). The effect of the first pass is twofold: a significant part of the noise is removed at this stage, and the noise distribution becomes more Gaussian-like due to the linear combination of multiple samples in the singular vectors. The effect of the second stage is to further improve the SNR. We tune the parameters of the second stage to find the optimal parameter combination for denoising.
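A minimal sketch of the truncated-HOSVD building block that each of the two stages applies to its (3D or 4D) patch tensors; patch extraction and grouping are omitted, and rank and threshold values are illustrative:

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd_denoise(T, ranks, thresh=0.0):
    """One pass of truncated HOSVD: project onto leading singular vectors
    along every axis, hard-threshold the core coefficients, reconstruct."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])             # truncated mode-`mode` basis
    core = T.copy()
    for mode, U in enumerate(factors):       # project onto the bases
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    core[np.abs(core) < thresh] = 0.0        # threshold the coefficient tensor
    out = core
    for mode, U in enumerate(factors):       # reconstruct the denoised tensor
        out = np.moveaxis(
            np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out
```

In the two-step scheme this routine would be applied first to 3D patches and then to 4D groups of similar patches, with separate rank/threshold settings per stage.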
-
Two-stage single ROW methods with complex coefficients for autonomous systems of ODEs
Computer Research and Modeling, 2010, v. 2, no. 1, pp. 19-32
The basic subset of two-stage Rosenbrock schemes with complex coefficients for the numerical solution of autonomous systems of ordinary differential equations (ODEs) is considered. Numerical realization of such schemes requires one LU decomposition, two evaluations of the right-hand side function, and one evaluation of the Jacobian matrix of the system per step. A full theoretical investigation of the accuracy and stability of such schemes has been carried out. New A-stable methods of the third order of accuracy with different properties have been constructed. These include high-order L-damped schemes as well as schemes with a simple estimate of the leading term of the truncation error, which is necessary for automatic time-step selection. Testing of the new methods has been performed.
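For orientation, a generic two-stage ROW scheme of the kind described has the following form (coefficient names are illustrative, not the paper's notation); with a complex $a$, both stages reuse one LU decomposition of the complex matrix $E - ahJ$, consistent with the operation count above:

```latex
\begin{align*}
  (E - a h J)\, k_1 &= h f(y_n), & J &= f'(y_n),\\
  (E - a h J)\, k_2 &= h f(y_n + \beta_{21} k_1), \\
  y_{n+1} &= y_n + \operatorname{Re}\bigl(p_1 k_1 + p_2 k_2\bigr).
\end{align*}
```

Taking the real part of the complex stage combination is what allows complex coefficients to yield a real update for a real system.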
-
The paper presents a fractal system of thin plates connected by hinges. The system can be studied using the methods of mechanics of solids with internal degrees of freedom. The structure is deployable: initially it is close to a one-dimensional manifold of small diameter, and after deployment it occupies a significant volume. The geometry of the solids is studied using the method of the moving trihedron. The relations defining the geometry of the introduced manifolds are derived from the Cartan structure equations. The proof makes substantial use of the fact that the fractal consists of thin plates that are short compared to the overall size of the system. The mechanics is described for solids with rigid-plastic hinges between the plates, where the hinges are made of a shape-memory material. Based on the ultimate load theorems, estimates are given for the internal pressure required to deploy the package into a three-dimensional structure and for the heat input needed to return the system to its initial state.
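For reference, the Cartan structure equations invoked above take the following standard flat-space (moving-frame) form; the notation is generic and not taken from the paper:

```latex
\begin{align*}
  d\theta^{i} &= -\,\omega^{i}{}_{j} \wedge \theta^{j}, &
  d\omega^{i}{}_{j} &= -\,\omega^{i}{}_{k} \wedge \omega^{k}{}_{j},
\end{align*}
```

where $\theta^{i}$ are the frame 1-forms and $\omega^{i}{}_{j}$ the connection 1-forms; the vanishing of torsion and curvature expresses that the frame moves in Euclidean space.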
-
Analytical solution and computer simulation of the task of calculating the Rician distribution's parameters in the limiting cases of large and small values of the signal-to-noise ratio
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 227-242
The paper provides a solution to the task of calculating the parameters of a Rician-distributed signal on the basis of the maximum likelihood principle in the limiting cases of large and small values of the signal-to-noise ratio. Analytical formulas are obtained for the solution of the system of maximum likelihood equations for the required signal and noise parameters, both for the one-parameter approximation, when only one parameter is calculated on the assumption that the second one is known a priori, and for the two-parameter task, when both parameters are a priori unknown. Direct calculation of the required signal and noise parameters by these formulas avoids the time- and resource-consuming numerical solution of the system of nonlinear equations and thus optimizes the duration of computer processing of signals and images. Results of computer simulation confirming the theoretical conclusions are presented. The task is meaningful for Rician data processing, in particular in magnetic resonance imaging.
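For context, the Rice density underlying the estimation task has the standard form below, where $\nu$ is the signal amplitude, $\sigma^{2}$ the noise variance, and $I_{0}$ the modified Bessel function of the first kind; in the limits treated by the paper it approaches a near-Gaussian density for $\nu/\sigma \gg 1$ and reduces to the Rayleigh density for $\nu \to 0$, which is what makes closed-form maximum likelihood estimates attainable:

```latex
\[
  p(x \mid \nu, \sigma) \;=\; \frac{x}{\sigma^{2}}
    \exp\!\left(-\frac{x^{2} + \nu^{2}}{2\sigma^{2}}\right)
    I_{0}\!\left(\frac{x\,\nu}{\sigma^{2}}\right),
  \qquad x \ge 0 .
\]
```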