All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Calculation of transverse wave speed in preloaded fibres under an impact
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 887-897
The paper considers the problem of transverse impact on a thin preloaded fiber. The commonly accepted theory of transverse impact on a thin fiber is based on the classical works of Rakhmatulin and Smith. The simple relations obtained from the Rakhmatulin – Smith theory are widely used in engineering practice. However, there is considerable evidence that experimental results may differ significantly from estimates based on these relations. This article gives a brief overview of the factors that cause these differences.
This paper focuses on the shear wave velocity, as it is the only quantity that can be directly observed and measured using high-speed cameras or similar methods. The influence of fiber preload on the wave speed is considered. This factor is important, since it inevitably arises in experiments: reliable fastening and precise positioning of the fiber require preloading. This work shows that the preload significantly affects the shear wave velocity in the impacted fiber.
Numerical calculations were performed for Kevlar 29 and Spectra 1000 yarns. Shear wave velocities were obtained for different levels of initial tension. A direct comparison of numerical results and analytical estimates with experimental data is presented. For the setup parameters considered, the transverse wave speeds in free and preloaded fibers differed by a factor of two. This demonstrates that measurements based on high-speed imaging and analysis of the observed shear waves should take the preload of the fibers into account.
This paper proposes a formula for a quick estimate of the shear wave velocity in preloaded fibers. The formula is obtained from the basic relations of the Rakhmatulin – Smith theory under the assumption of a large initial deformation of the fiber. It can give significantly better results than the classical approximation; this is demonstrated using the data for preloaded Kevlar 29 and Spectra 1000. The paper also shows that direct numerical calculation agrees better with the experimental data than any of the considered analytical estimates.
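The classical approximation referred to above can be sketched numerically. The sketch below assumes Smith's well-known relation for the lab-frame transverse wave speed, U = c(sqrt(eps(1+eps)) - eps) with c = sqrt(E/rho); the material values are illustrative order-of-magnitude numbers, not the paper's data, and the paper's preload-corrected formula is not reproduced here.

```python
import math

def transverse_wave_speed(E, rho, eps):
    """Classical Smith estimate of the lab-frame transverse wave speed.

    E   - Young's modulus of the yarn, Pa
    rho - density of the yarn material, kg/m^3
    eps - tensile strain behind the longitudinal wave (dimensionless)
    """
    c = math.sqrt(E / rho)  # longitudinal wave speed in the fiber
    return c * (math.sqrt(eps * (1.0 + eps)) - eps)

# Illustrative numbers of the right order for Kevlar 29:
# E ~ 70 GPa, rho ~ 1440 kg/m^3, strain 1%.
u = transverse_wave_speed(70e9, 1440.0, 0.01)
```

Note that the estimate grows with strain, which is why a preloaded (pre-strained) fiber carries the transverse wave faster than a free one.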
-
Identification of the author of the text by segmentation method
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1199-1210
The paper describes a method for recognizing the authors of literary texts by the proximity of fragments, into which a text is divided, to the author's standard. The standard is the empirical frequency distribution of letter combinations, built on a training sample of expertly selected, reliably attributed works of the author. A set of standards of different authors forms a library, within which the problem of identifying the author of an unknown text is solved. Proximity between texts is measured by the L1 norm of the difference of the frequency vectors of letter combinations, which are constructed for each fragment and for the text as a whole. An unknown text is attributed to the author whose standard is most often chosen as the closest for the set of fragments into which the text is divided. The fragment length is optimized based on the principle of the maximum difference in distances from fragments to standards in the «friend–foe» recognition problem. The method was tested on a corpus of Russian and foreign (translated) authors: 1783 texts of 100 authors with a total volume of about 700 million characters were collected. To exclude bias in the selection of authors, authors whose surnames begin with the same letter were considered. In particular, for the letter L the identification error was 12%. Along with fairly high accuracy, the method has another important property: it allows one to estimate the probability that the standard of the author of the text in question is missing from the library. This probability can be estimated from the statistics of the nearest standards for small fragments of the text. The paper also examines statistical digital portraits of writers: joint empirical distributions of the probability that a certain proportion of the text is identified at a given level of trust.
The practical importance of these statistics is that the supports of the corresponding distributions barely overlap for one's own and other authors' standards, which makes it possible to recognize the reference distribution of letter combinations at a high level of confidence.
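The attribution scheme described above (letter-combination frequency standards, L1 proximity, majority vote over fragments) can be sketched as follows; the toy standards and author names are hypothetical illustrations, not the paper's corpus.

```python
from collections import Counter

def trigram_freq(text):
    """Empirical frequency vector of letter trigrams in a text."""
    grams = [text[i:i + 3] for i in range(len(text) - 2)]
    counts = Counter(grams)
    total = len(grams)
    return {g: n / total for g, n in counts.items()}

def l1_distance(p, q):
    """L1 norm between two sparse frequency vectors."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def attribute(fragments, library):
    """Attribute a text to the author whose standard is most often
    the nearest one over the set of fragments."""
    votes = Counter()
    for frag in fragments:
        f = trigram_freq(frag)
        nearest = min(library, key=lambda a: l1_distance(f, library[a]))
        votes[nearest] += 1
    return votes.most_common(1)[0][0]

# Toy library of two "authors" with very different letter statistics.
library = {"A": trigram_freq("the cat sat on the mat " * 50),
           "B": trigram_freq("zxqj vwk pfft grrl " * 50)}
author = attribute(["the cat ran to the mat and sat"], library)
```

In the paper the fragment length is itself optimized; here a single fragment stands in for the fragment set.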
-
Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210
Data hiding in digital images is a promising direction of cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The information embedding efficiency depends on the embedding imperceptibility, capacity, and robustness. These quality criteria are mutually inverse, and the improvement of one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close-to-optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as an embedding operation. We select the change matrix adaptively for each block using metaheuristic optimization algorithms. In this study, we compare the performance of three metaheuristics, the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of embedded information. At the same time, the change matrices for each block do not need to be stored for further data extraction. This improves the user experience and reduces the chance of an attacker discovering the steganographic attachment.
Compared with the previous algorithm for embedding information into the coefficients of the discrete cosine transform using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics improved the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm, 26.01% and 19.39% for particle swarm optimization, and 27.30% and 28.73% for differential evolution.
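The per-block search for a change matrix can be illustrated with a toy sketch. Here plain random search stands in for the GA/PSO/DE metaheuristics, and the parity-based extraction rule is a hypothetical stand-in for the paper's QIM-based embedding; only the optimization pattern (maximize PSNR subject to error-free extraction) matches the text.

```python
import numpy as np

def psnr(orig, mod):
    """Peak signal-to-noise ratio between two 8-bit blocks, dB."""
    mse = np.mean((orig.astype(float) - mod.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def extract_bit(block):
    """Toy extraction rule (hypothetical): parity of the block sum."""
    return int(block.sum()) % 2

def best_change_matrix(block, bit, n_candidates=200, seed=0):
    """Random search over change matrices with entries in {-1, 0, 1}:
    keep the candidate with the highest PSNR whose modified block
    still extracts `bit` without error."""
    rng = np.random.default_rng(seed)
    best, best_psnr = None, -1.0
    for _ in range(n_candidates):
        delta = rng.integers(-1, 2, size=block.shape)
        cand = np.clip(block + delta, 0, 255)
        if extract_bit(cand) != bit:
            continue  # extraction would fail: candidate rejected
        p = psnr(block, cand)
        if p > best_psnr:
            best, best_psnr = cand, p
    return best

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8))
out = best_change_matrix(block, bit=1)
```

Because the extraction rule depends only on the modified block, the chosen change matrix does not need to be stored, mirroring the blind-extraction property described above.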
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.
We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy. This policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
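The random two-point gradient approximation mentioned above can be written in a few lines. This is a minimal sketch with the sphere-sampling and scaling conventions standard for randomized smoothing; the exact constants and the saddle-point extension in the paper may differ.

```python
import numpy as np

def two_point_grad(f, x, tau, rng):
    """Random two-point gradient approximation of f at x.

    In expectation this is the gradient of the smoothed function
    f_tau(x) = E_u[f(x + tau * u)], u uniform in the unit ball,
    so it only requires two zero-order oracle calls.
    """
    d = x.size
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)  # uniform random direction on the unit sphere
    return (d / (2.0 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e

# Sanity check on a smooth function: f(x) = ||x||^2 has gradient 2x,
# so averaging many estimates should recover approximately 2x.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])
g = np.mean([two_point_grad(lambda z: z @ z, x, 1e-4, rng)
             for _ in range(20000)], axis=0)
```

A first-order method run on these estimates is exactly the pattern the paper applies to the smoothed saddle-point objective.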
Keywords: convex optimization, distributed optimization.
-
Analysis of Brownian and molecular dynamics trajectories to reveal the mechanisms of protein-protein interactions
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 723-738
The paper proposes a set of fairly simple analysis algorithms that can be used to study a wide range of protein-protein interactions. In this work, we jointly use Brownian and molecular dynamics methods to describe the formation of a complex of the plastocyanin and cytochrome f proteins in higher plants. In the diffusion-collision complex, two clusters of structures were revealed; the transition between them is possible with preservation of the position of the centers of mass of the molecules and is accompanied only by a rotation of plastocyanin by 134 degrees. The clusters differ in that in the first cluster only the “lower” region of plastocyanin contacts the positively charged region near the small domain of cytochrome f, while in the second cluster both negatively charged regions do. In the first cluster, the “upper” negatively charged region of plastocyanin is in contact with the lysine residue K122. When the final complex is formed, the plastocyanin molecule rotates by 69 degrees around an axis passing through both areas of electrostatic contact. During this rotation, water is displaced from the regions located near the cofactors of the molecules and formed by hydrophobic amino acid residues. This leads to the appearance of hydrophobic contacts, a decrease in the distance between the cofactors to less than 1.5 nm, and further stabilization of the complex in a position suitable for electron transfer. Characteristics such as contact matrices, rotation axes for transitions between states, and plots of the number of contacts over the course of the simulation make it possible to determine the key amino acid residues involved in the formation of the complex and to reveal the physicochemical mechanisms underlying this process.
-
Molecular dynamics of tubulin protofilaments and the effect of taxol on their bending deformation
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 503-512
Despite the widespread use of cancer chemotherapy drugs, the molecular mechanisms of action of many of them remain unclear. Some of these drugs, such as taxol, are known to affect the dynamics of microtubule assembly and to stop cell division in prophase-prometaphase. Recently, new spatial structures of microtubules and individual tubulin oligomers bound to various regulatory proteins and cancer chemotherapy drugs have become available. However, knowledge of the spatial structure by itself does not provide information about the mechanism of drug action.
In this work, we applied the molecular dynamics method to study the behavior of taxol-bound tubulin oligomers and used our previously developed method for analyzing the conformation of tubulin protofilaments, based on the calculation of modified Euler angles. Recent structures of microtubule fragments have demonstrated that tubulin protofilaments bend not in the radial direction, as many researchers assume, but at an angle of approximately 45° from the radial direction. In the presence of taxol, however, the bending direction shifts closer to the radial one. There was no significant difference between the mean bending and torsion angles of the studied tubulin structures when bound to the two natural regulatory ligands, guanosine triphosphate and guanosine diphosphate. The intra-dimer bending angle was found to be greater than the inter-dimer bending angle in all analyzed trajectories. This indicates that the bulk of the deformation energy is stored within the dimeric tubulin subunits and not between them. Analysis of the latest generation of tubulin structures indicated that the presence of taxol in the pocket of the tubulin beta subunit allosterically reduces the torsional rigidity of the tubulin oligomer, which could explain the underlying mechanism of taxol's effect on microtubule dynamics. Indeed, a decrease in torsional rigidity makes it possible to maintain lateral connections between protofilaments and should therefore lead to the stabilization of microtubules, which is what is observed in experiments. The results of this work shed light on the phenomenon of dynamic instability of microtubules and bring us closer to understanding the molecular mechanisms of cell division.
-
On some mirror descent methods for strongly convex programming problems with Lipschitz functional constraints
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1727-1746
The paper is devoted to an approach to constructing subgradient methods for strongly convex programming problems with several functional constraints. More precisely, a strongly convex minimization problem with several strongly convex (inequality-type) constraints is considered, and first-order optimization methods for this class of problems are proposed. A special feature of the proposed methods is the possibility of using the strong convexity parameters of the violated functional constraints at nonproductive iterations in theoretical estimates of the quality of the solution produced by the methods. The main task is to propose a subgradient method with adaptive step-size selection and a stopping rule. The key idea of the methods proposed in this paper is to combine two approaches: a scheme with switching between productive and nonproductive steps, and recently proposed modifications of mirror descent for convex programming problems that allow ignoring some of the functional constraints on nonproductive steps of the algorithms. The paper describes a subgradient method with switching between productive and nonproductive steps for strongly convex programming problems in the case where the objective function and functional constraints satisfy the Lipschitz condition. An analog of the proposed subgradient method, a mirror descent scheme for problems with relatively Lipschitz and relatively strongly convex objective functions and constraints, is also considered. For the proposed methods, theoretical estimates of the quality of the solution are obtained; they indicate the optimality of these methods from the point of view of lower oracle bounds.
In addition, since in many problems the operation of finding the exact subgradient vector is quite expensive, analogs of the methods mentioned above, with the usual subgradient of the objective function or functional constraints replaced by a $\delta$-subgradient, were investigated for the class of problems under consideration. This approach can save computational costs by not requiring the exact value of the subgradient at the current point. It is shown that the quality estimates of the solution change by $O(\delta)$. The results of numerical experiments illustrating the advantages of the proposed methods in comparison with some previously known ones are also presented.
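The switching scheme at the heart of these methods can be illustrated in its simplest form. The sketch below is a non-adaptive convex-case simplification: the toy problem, the fixed step-size rule, and the plain averaging of productive points are illustrative assumptions, not the paper's adaptive strongly convex algorithms.

```python
import numpy as np

def switching_subgradient(f, g, subf, subg, x0, eps, n_iter=5000):
    """Switching subgradient scheme (simplified, convex case):
    take a productive step along the objective subgradient when the
    constraint is nearly satisfied (g(x) <= eps), and a nonproductive
    step along the constraint subgradient otherwise; return the
    average of the productive points."""
    x = x0.astype(float)
    productive = []
    for _ in range(n_iter):
        if g(x) <= eps:            # productive step
            d = subf(x)
            productive.append(x.copy())
        else:                      # nonproductive step
            d = subg(x)
        x = x - (eps / (d @ d)) * d
    return np.mean(productive, axis=0)

# Toy problem (illustrative, not from the paper):
# minimize ||x - a||_1 subject to ||x||_inf <= 1, with a = (2, 0);
# the optimum is x* = (1, 0).
a = np.array([2.0, 0.0])
f = lambda x: np.abs(x - a).sum()
g = lambda x: np.abs(x).max() - 1.0
subf = lambda x: np.sign(x - a)
def subg(x):
    j = int(np.argmax(np.abs(x)))
    d = np.zeros_like(x)
    d[j] = np.sign(x[j])
    return d

x_hat = switching_subgradient(f, g, subf, subg, np.zeros(2), eps=0.05)
```

The paper's contribution is, in effect, a strongly convex, adaptive refinement of this pattern in which nonproductive steps may exploit the strong convexity of the violated constraints.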
-
Calculation of magnetic properties of nanostructured films by means of parallel Monte Carlo simulation
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 693-703
Images of the surface topography of ultrathin magnetic films were used for Monte Carlo simulations in the framework of the ferromagnetic Ising model to study the hysteresis and thermal properties of nanomaterials. For high-performance calculations, a highly scalable parallel algorithm was used to find the equilibrium configuration. The change in the distribution of spins on the surface during magnetization reversal and the dynamics of the nanodomain structure of thin magnetic films under a changing external magnetic field were investigated.
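The Metropolis dynamics underlying such ferromagnetic Ising simulations can be sketched as follows. This is a serial toy version on a uniform lattice; the paper's scalable parallel algorithm and film-topography input are not reproduced.

```python
import numpy as np

def metropolis_sweep(spins, beta, h, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1) in an external
    field h, with periodic boundaries; spins is an array of +/-1."""
    n, m = spins.shape
    for _ in range(n * m):
        i, j = rng.integers(n), rng.integers(m)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % m] + spins[i, (j - 1) % m])
        dE = 2.0 * spins[i, j] * (nb + h)  # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]
    return spins

rng = np.random.default_rng(0)
spins = np.where(rng.random((32, 32)) < 0.5, 1, -1)  # random initial state
for _ in range(200):
    metropolis_sweep(spins, beta=1.0, h=0.5, rng=rng)
m = spins.mean()  # at low temperature the magnetization aligns with h
```

Sweeping h back and forth, instead of holding it fixed, yields the hysteresis loops studied in the paper.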
-
Comparative analysis of statistical methods for the classification of scientific publications in medicine
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933
In this paper, various methods of machine classification of scientific texts by thematic section are compared using publications in specialized medical journals published by Springer as an example. The corpus of texts was studied in five sections: pharmacology/toxicology, cardiology, immunology, neurology and oncology. We considered both classification methods based on the analysis of annotations and keywords, and classification methods based on the processing of the full texts. Bayesian classification, support vector machines, and reference letter combinations were applied. It is shown that the most accurate classification method is based on creating a library of standards of letter trigrams corresponding to texts on a certain subject. For this corpus, the Bayesian method gives an error of about 20%, the support vector machine an error of about 10%, and the proximity of the three-letter distribution of a text to the thematic standard an error of about 5%, which makes it possible to rank these methods for applying artificial intelligence to the task of classifying texts by specialty. Importantly, the support vector method provides the same accuracy when analyzing annotations as when analyzing full texts, which helps reduce the number of operations for large text corpora.
-
Assessing the validity of clustering of panel data by Monte Carlo methods (using as example the data of the Russian regional economy)
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513
The paper considers a method for studying panel data based on agglomerative hierarchical clustering: objects are grouped, by the similarities and differences in their features, into a hierarchy of nested clusters. We used two alternative ways of calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance computed using the data for all the years considered. Three alternative methods for calculating the distances between clusters were compared: in the first, the distance between two clusters is the distance between their nearest elements; in the second, the average over pairs of elements; in the third, the distance between the most distant elements. The efficiency of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The statistical reliability of the cluster structure was assessed by comparing the quality of clustering on the real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects and features and the same lengths of time series. Generation was made from a fixed probability distribution; simulation methods imitating Gaussian white noise and random walks were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression, but also by “spurious clustering”. Clustering was considered reliable for a given number of clusters if the index value on the real sample exceeded the 95% quantile for the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the real data sample.
For these data, only the Silhouette index shows reliable clustering at the level p < 0.05. The calculations also showed that the index values for the real data are generally closer to the values for random walks than for white noise, but differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually. Visually, one can distinguish groups of points located close to each other that are also identified as clusters by the applied hierarchical clustering algorithm.
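The Monte Carlo reliability test described above can be sketched as follows. The silhouette computation, the crude median-split "clustering" rule, and the sample sizes are illustrative assumptions, not the paper's hierarchical-clustering setup; only the quantile-comparison logic matches the text.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette index for a labelled sample (O(n^2), small n)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

def median_split(X):
    """Crude two-cluster rule: split objects at the median of row means."""
    m = X.mean(axis=1)
    return (m > np.median(m)).astype(int)

def surrogate_quantile(shape, n_sim=200, q=0.95, random_walk=False, seed=0):
    """q-quantile of the silhouette index over artificial panels (white
    noise or random walks) of the same dimensions; clustering of the real
    sample is considered reliable if its index exceeds this quantile."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_sim):
        Z = rng.normal(size=shape)
        if random_walk:
            Z = Z.cumsum(axis=1)  # turn white noise into random walks
        vals.append(silhouette(Z, median_split(Z)))
    return float(np.quantile(vals, q))

# "Real" sample: two well-separated groups of objects, 5 periods each.
rng = np.random.default_rng(1)
real = np.vstack([rng.normal(0.0, 1.0, (10, 5)),
                  rng.normal(5.0, 1.0, (10, 5))])
s_real = silhouette(real, median_split(real))
threshold = surrogate_quantile(real.shape)          # white-noise quantile
# random-walk surrogates typically give a higher quantile,
# illustrating the "spurious clustering" effect noted above
walk_threshold = surrogate_quantile(real.shape, random_walk=True, seed=2)
reliable = s_real > threshold
```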
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"