-
Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today, there are a number of methods for predicting whether a particular MHC allele binds a given peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds, as an extra input, an estimate from the Potts model of statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice. Within the framework of the proposed method, the model is used to better represent the physical nature of the interaction of the proteins forming the complex. To assess the interaction within the MHC + peptide complex, we use a Potts model with 20 states (corresponding to the 20 standard amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to evaluate a new MHC + peptide pair, supplying this value as an additional input to the neural network. This approach, combined with the ensemble construction technique, improves prediction accuracy, in terms of the positive predictive value (PPV) metric, compared to the baseline model.
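As an illustration, the Potts score fed to the network as an extra input can be computed as follows. This is a minimal sketch assuming the fields h and couplings J have already been fitted by solving the inverse problem; the function name, random parameters and sample sequence are hypothetical, not the authors' code.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids = 20 Potts states
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def potts_energy(sequence, h, J):
    """Energy of a sequence under a Potts model with fields h and couplings J.

    h: (L, 20) array of single-site fields
    J: (L, L, 20, 20) array of pairwise couplings (only i < j is used)
    """
    s = np.array([AA_INDEX[aa] for aa in sequence])
    L = len(s)
    energy = -h[np.arange(L), s].sum()          # single-site contributions
    for i in range(L):
        for j in range(i + 1, L):
            energy -= J[i, j, s[i], s[j]]       # pairwise contributions
    return energy

# Hypothetical usage: score a 9-mer peptide and pass the value to the network
L = 9
rng = np.random.default_rng(0)
h = rng.normal(size=(L, 20))
J = rng.normal(scale=0.1, size=(L, L, 20, 20))
print(potts_energy("ALDKFNWMS", h, J))
```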
-
Approach to Estimating the Dynamics of the Industry Consolidation Level
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 129-140

In this article we propose a new approach to analyzing econometric industry parameters in order to estimate the industry consolidation level. The research is based on a simple automatic control model of the industry. The state of the industry is measured by econometric parameters obtained quarterly from each of the industry's companies by the tax regulator. We propose an approach to industry analysis that does not track the economy of each company but instead studies the parameters of the set of all companies as a whole. The quarterly parameters obtained from each company are income, number of employees, taxes, and income from software licenses. The ABC analysis method was extended to ABCD analysis (D: companies with negligible impact on industry metrics) and used to make the results obtained for different indicators comparable. Pareto charts were formed for the set of econometric indicators.
To estimate industry monopolization, the Herfindahl-Hirschman index (HHI) was calculated for the most sensitive company metrics. Using the HHI approach, it was shown that COVID-19 did not lead to changes in the monopolization of the Russian IT industry.
As the most visually intuitive approach to industry visualization, scatter diagrams combined with the Pareto chart colors are proposed. The effect of the accreditation procedure is clearly observed on a scatter diagram with red and black dots marking accredited and non-accredited companies, respectively.
The last reported result is the proposal to use end-to-end product identification of licenses as a market structure control instrument. It provides a basis for avoiding multiple accounting of license reselling within the software distribution chain.
The results of this research can serve as a basis for future IT industry analysis and agent-based simulation.
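For reference, a minimal sketch of the HHI computation used for the monopolization estimates; the function name and data are hypothetical, and shares are taken in percent so the index ranges up to 10000.

```python
def herfindahl_hirschman_index(values):
    """HHI from per-company values of a metric (e.g., quarterly income).

    Shares are expressed in percent, so the index ranges from near 0
    (perfect competition) to 10000 (pure monopoly).
    """
    total = sum(values)
    shares = [100.0 * v / total for v in values]
    return sum(s * s for s in shares)

# Hypothetical usage: incomes of five companies in one quarter
print(herfindahl_hirschman_index([50, 25, 10, 10, 5]))  # -> 3350.0
```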
-
High-throughput identification of hydride phase-change kinetics models
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving the inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive ones, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them when necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adaptable to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large number of experimental data sets. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related processes, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the next time layer from the previous layer or the entire history, calculating the observed value and the independent variable from the task variables, and comparing a curve with a reference. Special algorithms can be used for solving quite general parabolic boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations. This is the middle level of abstraction. At the high level, it is enough to choose a ready, tested model for a particular material and modify it to match the experimental conditions.
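To illustrate the low-level "time layer from the previous layer" pattern for a diffusive model, here is a minimal sketch of one explicit finite-difference step for 1D hydrogen diffusion; the boundary conditions, parameter values and function name are illustrative assumptions, not HIMICOS code.

```python
import numpy as np

def next_layer(c, dt, dx, D, c_surface):
    """One explicit finite-difference step for 1D diffusion c_t = D * c_xx.

    c: hydrogen concentration on the current time layer
    c_surface: Dirichlet value at the gas-metal interface (x = 0);
    a zero-flux condition is imposed at the inner boundary.
    """
    c_new = c.copy()
    r = D * dt / dx**2          # must satisfy r <= 0.5 for stability
    c_new[1:-1] = c[1:-1] + r * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0] = c_surface        # boundary condition at the surface
    c_new[-1] = c_new[-2]       # zero flux at the symmetry plane
    return c_new

# Hypothetical usage: saturate an initially hydrogen-free slab
c = np.zeros(101)
for _ in range(1000):
    c = next_layer(c, dt=1e-3, dx=1e-2, D=4e-2, c_surface=1.0)
```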
-
Modeling of rheological characteristics of aqueous suspensions based on nanoscale silicon dioxide particles
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1217-1252

The rheological behavior of aqueous suspensions based on nanoscale silicon dioxide particles strongly depends on the dynamic viscosity, which directly affects the use of nanofluids. The purpose of this work is to develop and validate models for predicting the dynamic viscosity from independent input parameters: the silicon dioxide (SiO2) concentration, pH, and shear rate $\gamma$. The influence of the suspension composition on its dynamic viscosity is analyzed. Groups of suspensions with statistically homogeneous composition have been identified, within which the compositions are interchangeable. It is shown that at low shear rates the rheological properties of the suspensions differ significantly from those obtained at higher rates. Significant positive correlations of the dynamic viscosity with the SiO2 concentration and pH were established, as well as a negative correlation with the shear rate $\gamma$. Regression models with regularization were constructed for the dependence of the dynamic viscosity $\eta$ on the concentrations of SiO2, NaOH, H3PO4, surfactant, and EDA (ethylenediamine), and on the shear rate $\gamma$. For more accurate prediction of the dynamic viscosity, models were trained using neural network and machine learning algorithms: a multilayer perceptron (MLP), a radial basis function (RBF) network, support vector machines (SVM), and random forests (RF). The effectiveness of the constructed models was evaluated using various statistical metrics, including the mean absolute error (MAE), the mean squared error (MSE), the coefficient of determination $R^2$, and the average absolute relative deviation (AARD%). The RF model proved to be the best on both the training and test samples. The contribution of each component to the constructed model is determined. It is shown that the SiO2 concentration has the greatest influence on the dynamic viscosity, followed by pH and the shear rate $\gamma$. The accuracy of the proposed models is compared with that of previously published models. The results confirm that the developed models can be considered a practical tool for studying the behavior of nanofluids based on aqueous suspensions of nanoscale silicon dioxide particles.
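As a sketch of the best-performing approach, the following trains a random forest regressor and reports MAE, $R^2$ and feature importances. The synthetic data-generating formula merely stands in for the (SiO2, pH, $\gamma$) to $\eta$ measurements and is not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: columns are SiO2 concentration, pH and
# shear rate gamma; the target is the measured dynamic viscosity eta.
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 2.0, 1.0], [30.0, 11.0, 300.0], size=(500, 3))
eta = 0.05 * X[:, 0] + 0.02 * X[:, 1] - 1e-4 * X[:, 2] + rng.normal(0, 0.01, 500)

X_train, X_test, y_train, y_test = train_test_split(X, eta, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
print("feature importances:", model.feature_importances_)
```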
-
An efficient algorithm for comparing ${\mathrm{\LaTeX}}$ documents
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 329-345

We consider the problem of constructing the differences that arise when ${\mathrm{\LaTeX}}$ documents are edited. Each document is represented as a parse tree whose nodes are called tokens. The smallest possible text representation of the document that does not change the syntax tree is constructed. All of the text is split into fragments whose boundaries correspond to tokens. A map of the initial text fragment sequence to the similar sequence of the edited document, corresponding to the minimum edit distance, is built with the Hirschberg algorithm. A map of text characters corresponding to the map of text fragment sequences is then constructed. Tokens whose characters are all deleted, all inserted, or all unchanged are selected in the parse trees. The map for the trees formed by the remaining tokens is built using the Zhang-Shasha algorithm.
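The fragment-mapping step can be illustrated as follows. This sketch substitutes Python's difflib (a Ratcliff-Obershelp matcher) for the linear-space Hirschberg algorithm used in the paper, and the fragment lists are hypothetical.

```python
from difflib import SequenceMatcher

def map_fragments(old_fragments, new_fragments):
    """Map text fragments of the original document to fragments of the
    edited one; unmatched fragments are reported as deleted or inserted."""
    sm = SequenceMatcher(a=old_fragments, b=new_fragments, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        yield tag, old_fragments[i1:i2], new_fragments[j1:j2]

# Hypothetical token-boundary fragments of two document versions
old = ["\\section", "{", "Intro", "}", "Some text."]
new = ["\\section", "{", "Introduction", "}", "Some text."]
for tag, a, b in map_fragments(old, new):
    print(tag, a, "->", b)
```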
Keywords: automation, edit distance, text analysis, lexeme, machine learning, metric, parse tree, syntax tree, token, ${\mathrm{\LaTeX}}$.
-
Microtubule protofilament bending characterization
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 435-443

This work is devoted to the analysis of conformational changes in tubulin dimers and tetramers, in particular, the assessment of the bending of microtubule protofilaments. Three recently used approaches for estimating the bend of tubulin protofilaments are reviewed: (1) measuring the angle between the vector passing through the H7 helices in the $\alpha$- and $\beta$-tubulin monomers in the straight structure and the same vector in the curved structure of tubulin; (2) measuring the angle between the vector connecting the centers of mass of the subunit and the associated GTP nucleotide and the vector connecting the centers of mass of the same nucleotide and the adjacent tubulin subunit; (3) measuring the three rotation angles of the bent tubulin subunit relative to the straight subunit. Quantitative estimates of the angles at the intra- and inter-dimer interfaces of tubulin in published crystal structures, calculated in accordance with the three metrics, are presented. Intra-dimer angles measured by method (3) were more consistent, both within a single structure and across different structures, which indicates a lower sensitivity of the method to local changes in tubulin conformation and characterizes it as more robust. Measuring the angle of curvature between H7 helices (method 1) produces somewhat underestimated values of the curvature per dimer. Method (2), while at first glance generating bending angle values consistent with the estimates of curved protofilaments from cryo-electron microscopy, significantly overestimates the angles in the straight structures. For the structures of tubulin tetramers in complex with the stathmin protein, the bending angles calculated with all three metrics varied quite significantly between the first and second dimers (up to 20% or more), which indicates the sensitivity of all metrics to slight variations in the conformation of tubulin dimers within these complexes. A detailed description of the procedures for measuring the bending of tubulin protofilaments, together with the identified advantages and disadvantages of the various metrics, will increase the reproducibility and clarity of future analyses of tubulin structures and will hopefully make it easier to compare the results obtained by different scientific groups.
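Metric (1) reduces to the angle between two vectors. A minimal sketch, with hypothetical H7-axis vectors standing in for those extracted from the crystal structures:

```python
import numpy as np

def bending_angle(h7_vec_straight, h7_vec_curved):
    """Angle (degrees) between the H7-helix axis vector in the straight
    tubulin structure and the same vector in the curved structure."""
    u = h7_vec_straight / np.linalg.norm(h7_vec_straight)
    v = h7_vec_curved / np.linalg.norm(h7_vec_curved)
    cos_theta = np.clip(np.dot(u, v), -1.0, 1.0)  # guard against round-off
    return np.degrees(np.arccos(cos_theta))

# Hypothetical vectors through the H7 helices of alpha- and beta-tubulin
print(bending_angle(np.array([1.0, 0.0, 0.0]), np.array([0.97, 0.22, 0.05])))
```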
-
Stochastic optimization in digital pre-distortion of the signal
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 399-416

In this paper, we test the performance of some modern stochastic optimization methods and practices on the digital pre-distortion (DPD) problem, a valuable part of signal processing on base stations providing wireless communication. In the first part of our study, we focus on the search for the best performing method and its proper modifications. In the second part, we propose a new, quasi-online testing framework that allows us to fit our modeling results to the behavior of a real-life DPD prototype, retest a selection of the practices considered in the first part, and confirm the advantages of the method that proves best under real-life conditions. For the model used, the maximum achieved improvement in depth is 7% in the standard regime and 5% in the online regime (the metric itself is on a logarithmic scale). We also achieve a halving of the working time while preserving a 3% and 6% improvement in depth for the standard and online regimes, respectively. All comparisons are made against the Adam method, which was highlighted as the best stochastic method for the DPD problem in [Pasechnyuk et al., 2021], and against the Adamax method, which is the best in the proposed online regime.
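For illustration, a toy comparison of Adam and Adamax on a stochastic least-squares problem standing in for the DPD fitting task; the model, data and hyperparameters are illustrative assumptions and not those of the paper.

```python
import torch

# Toy surrogate for the DPD fitting problem: least-squares fit of a
# linear model with mini-batch stochastic gradients.
torch.manual_seed(0)
X = torch.randn(4096, 8)
w_true = torch.randn(8, 1)
y = X @ w_true + 0.01 * torch.randn(4096, 1)

def run(opt_name):
    w = torch.zeros(8, 1, requires_grad=True)
    opt_cls = {"adam": torch.optim.Adam, "adamax": torch.optim.Adamax}[opt_name]
    opt = opt_cls([w], lr=1e-2)
    for _ in range(500):
        idx = torch.randint(0, 4096, (256,))       # stochastic mini-batch
        loss = ((X[idx] @ w - y[idx]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return ((X @ w - y) ** 2).mean().item()        # final full-batch loss

for name in ("adam", "adamax"):
    print(name, run(name))
```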
-
Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183

Social media posts play an important role in reflecting the state of financial markets, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on the movement of financial markets. The most authoritative influencers are selected, and their Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the raw text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered, and the difference in influence between a single tweet and a whole batch of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics are introduced that capture the significance of a particular word in identifying the relationship between price changes and Twitter posts. Frequency analysis involves studying the occurrence distributions of various words and bigrams in texts accompanying positive, negative or general trends. To build the markup, changes in the market are converted into a binary vector using various parameters, thus posing a binary classification task. The parameters of Binance candlesticks are tuned to better describe the movement of the cryptocurrency market, and their variability is also explored in this article. Sentiment is studied using Stanford CoreNLP. The results of the statistical analysis are relevant to feature selection for further binary or multiclass classification tasks. The presented methods of text analysis help increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
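The binary markup step can be sketched as follows, assuming hypothetical candlestick close prices and a simple relative-change threshold; the actual markup parameters used in the paper differ.

```python
import numpy as np

def binary_labels(close_prices, horizon=1, threshold=0.0):
    """Binary markup of price movement: 1 if the relative change of the
    close price over `horizon` candlesticks exceeds `threshold`, else 0."""
    close = np.asarray(close_prices, dtype=float)
    change = (close[horizon:] - close[:-horizon]) / close[:-horizon]
    return (change > threshold).astype(int)

# Hypothetical closes of hourly Binance candlesticks
print(binary_labels([100.0, 100.5, 100.2, 101.0, 100.9], horizon=1))
# -> [1 0 1 0]
```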
-
Classification of pest-damaged coniferous trees in unmanned aerial vehicles images using convolutional neural network models
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1271-1294

This article considers the task of multiclass classification of coniferous trees with varying degrees of damage by insect pests in images obtained using unmanned aerial vehicles (UAVs). We propose the use of convolutional neural networks (CNNs) for the classification of fir trees (Abies sibirica) and Siberian pine trees (Pinus sibirica) in UAV imagery. In our approach, we develop three CNN models based on the classical U-Net architecture, designed for pixel-wise classification of images (semantic segmentation). The first model, Mo-U-Net, incorporates several changes to the classical U-Net model. The second and third models, MSC-U-Net and MSC-Res-U-Net, respectively, form ensembles of three Mo-U-Net models, each varying in depth and input image size; the MSC-Res-U-Net model additionally integrates residual blocks. To validate our approach, we created two datasets of UAV images depicting Abies sibirica and Pinus sibirica trees affected by pests and trained the three proposed CNN models using mIoULoss and Focal Loss as loss functions. We then evaluated the effectiveness of each trained model in classifying damaged trees. The results indicate that with mIoULoss as the loss function, the proposed models fall short of practical applicability in the forestry industry, failing to achieve a classification accuracy above the threshold value of 0.5 for individual classes of both tree species according to the IoU metric. Under Focal Loss, however, the MSC-Res-U-Net and Mo-U-Net models, in contrast to the third proposed model, MSC-U-Net, exhibited high classification accuracy (surpassing the threshold value of 0.5) for all classes of Abies sibirica and Pinus sibirica trees. These results underscore the practical value of the MSC-Res-U-Net and Mo-U-Net models for forestry professionals, enabling accurate classification and early detection of pest outbreaks in coniferous trees.
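For reference, a minimal sketch of a multiclass focal loss for pixel-wise classification; the $\gamma$ value and the absence of per-class weights are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multiclass focal loss for semantic segmentation.

    logits: (N, C, H, W) raw network outputs; targets: (N, H, W) class ids.
    Down-weights well-classified pixels by the (1 - p_t)^gamma factor.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per pixel
    p_t = torch.exp(-ce)
    return ((1.0 - p_t) ** gamma * ce).mean()

# Hypothetical usage on a 4-class segmentation batch
logits = torch.randn(2, 4, 8, 8, requires_grad=True)
targets = torch.randint(0, 4, (2, 8, 8))
loss = focal_loss(logits, targets)
loss.backward()
```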
-
Semantic structuring of text documents based on patterns of natural language entities
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

The technology of creating patterns from natural language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space in the description of documents and to search for semantically related words within a topic. Dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.
A given set of documents (objects) is considered to be divided into two non-overlapping classes, whose analysis requires a common dictionary. Which words belong to the common dictionary is initially unknown. The objects of the two classes are treated as oppositions to each other. Quantitative parameters of this oppositionality are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.
To calculate the stability, the feature values are divided into non-intersecting intervals, whose optimal boundaries are determined by a special criterion. Maximum stability is achieved when each interval contains values of only one of the two classes.
The composition of features in the sets (patterns of words) is formed from a sequence ordered by stability values. Patterns, and the latent features based on them, are formed according to the rules of hierarchical agglomerative grouping.
The set of latent features is used for cluster analysis of the documents with metric grouping algorithms. The analysis applies a coefficient of content authenticity based on the known class membership of the documents. The coefficient is a numerical characteristic of the dominance of class representatives in groups.
To divide the documents into topics, it is proposed to merge groups based on their centers. As patterns for each topic, a sequence of words from the common dictionary ordered by frequency of occurrence is considered.
The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the common dictionary were formed for four topics.
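The grouping step can be sketched with off-the-shelf hierarchical agglomerative clustering on a hypothetical document-by-latent-feature matrix; this illustrates the general technique in a metric space, not the authors' specific algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical document-by-latent-feature matrix produced by the pattern
# construction step; rows are documents, columns are latent features.
rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])

# Hierarchical agglomerative grouping with Euclidean distances,
# then cutting the dendrogram into two groups
Z = linkage(docs, method="average", metric="euclidean")
groups = fcluster(Z, t=2, criterion="maxclust")
print(groups)
```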