All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method applies the ensemble construction technique and adds, as an extra input, an estimate from the Potts model of statistical mechanics, which is a generalization of the Ising model. In general, the Potts model describes the interaction of spins in a crystal lattice; within the proposed method it is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction in the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the standard amino acids). Solving the inverse problem on data about experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to score a new MHC + peptide pair, feeding this score to the neural network as an additional input. This approach, combined with the ensemble construction technique, improves prediction accuracy in terms of the positive predictive value (PPV) metric compared to the baseline model.
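As an illustration of how a Potts score for an MHC + peptide pair could be computed, here is a minimal sketch. The fields `h` and couplings `J` are assumed to have been obtained from the inverse problem described above; here they are random placeholders, and the 20-letter alphabet, sequence layout and energy convention are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids = 20 Potts states
Q = len(AMINO_ACIDS)

def potts_energy(sequence, h, J):
    """Potts energy E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j).

    sequence -- concatenated MHC contact residues + peptide, as a string;
    h        -- (L, Q) field parameters;
    J        -- (L, L, Q, Q) coupling parameters;
    both h and J are assumed fitted beforehand on confirmed binding pairs.
    """
    s = np.array([AMINO_ACIDS.index(a) for a in sequence])
    L = len(s)
    energy = -h[np.arange(L), s].sum()
    for i in range(L):
        for j in range(i + 1, L):
            energy -= J[i, j, s[i], s[j]]
    return energy

# Toy usage with random placeholder parameters (stand-ins for fitted values).
rng = np.random.default_rng(0)
L = 12  # hypothetical combined length of contact residues + peptide
h = rng.normal(size=(L, Q))
J = rng.normal(scale=0.1, size=(L, L, Q, Q))
score = potts_energy("ACDEFGHIKLMN", h, J)  # lower energy ~ stronger binding
print(score)  # this scalar would be appended to the network's input features
```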
- Methodological approach to modeling and forecasting the impact of the spatial heterogeneity of the COVID-19 spread on the economic development of Russian regions
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 629-648

The article develops a methodological approach to forecasting and modeling the socio-economic consequences of viral epidemics under heterogeneous economic development of territorial systems. The relevance of the research stems from the need for rapid mechanisms of public management and stabilization of an adverse epidemiological situation that take into account the spatial heterogeneity of the spread of COVID-19, which is accompanied by a concentration of infection in large metropolitan areas and territories with high economic activity. The aim of the work is to substantiate a methodology for assessing the spatial heterogeneity of the spread of coronavirus infection, finding the poles of its growth and the emerging spatial clusters and their zones of influence with an assessment of inter-territorial relationships, and simulating the effects of a worsening epidemiological situation on the dynamics of economic development of regional systems. The peculiarity of the developed approach is the spatial clustering of regional systems by the level of COVID-19 incidence, conducted using global and local spatial autocorrelation indices, various spatial weight matrices, and the L. Anselin mutual influence matrix, based on statistical information of the Russian Federal State Statistics Service. The study revealed a spatial cluster characterized by a high level of COVID-19 infection, with a strong zone of influence and stable interregional relationships with the surrounding regions, as well as formed growth poles that are potential poles of further spread of the infection. Regression analysis on panel data not only confirmed the impact of COVID-19 incidence on the average number of employees at enterprises and on the level of average monthly nominal wages, but also yielded a model for scenario prediction of the consequences of the spread of coronavirus infection. The results of this study can be used to design mechanisms for containing the coronavirus infection, stabilizing socio-economic systems at the macroeconomic and regional levels, and restoring the economy of territorial systems, depending on the depth of the spread of infection and the level of economic damage caused.
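A minimal sketch of the global spatial autocorrelation index mentioned above (Moran's I), assuming a row-standardized spatial weight matrix; the incidence vector and the contiguity matrix here are synthetic placeholders for the Rosstat data used in the article.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: I = (n / S0) * (z^T W z) / (z^T z),
    where z = x - mean(x) and S0 is the sum of all weights."""
    z = x - x.mean()
    S0 = W.sum()
    n = len(x)
    return (n / S0) * (z @ W @ z) / (z @ z)

# Synthetic example: 5 regions with contiguity-style weights (placeholder data).
x = np.array([120.0, 95.0, 310.0, 280.0, 60.0])  # incidence per 100k, hypothetical
W = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
W /= W.sum(axis=1, keepdims=True)  # row standardization
print(morans_i(x, W))  # > 0 suggests clustering of similar incidence levels
```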
- Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. A stationary point does not always coincide with the optimal point; however, for a linear optimization problem with linear constraints, the gradient norm minimization algorithm becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.
For min-max problems we propose two algorithms. Since we consider a strongly convex, strongly concave problem, the first algorithm takes the existing tensor method for general convex-concave saddle point problems and accelerates it with the restart technique; such an algorithm has a linear convergence rate. If we additionally assume that the objective has Lipschitz-continuous first and second derivatives, we can improve the performance even more by switching, inside its region of quadratic convergence, to another existing algorithm. Thus we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems with tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework the assumption of strong convexity is not actually needed, because a convex objective can be regularized in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and use it together with the aforementioned algorithm that has a global linear and a local quadratic convergence rate.
Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to solving strongly monotone variational inequality problems.
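To illustrate the restart technique in isolation, here is a minimal sketch on a toy strongly convex, strongly concave quadratic problem. The paper's base method is a tensor method; for brevity this sketch substitutes a first-order extragradient step as the inner solver, so the inner loop, the step size and the restart budget are assumptions of the illustration, not the paper's algorithm.

```python
import numpy as np

# Toy saddle objective f(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
# strongly convex in x and strongly concave in y; saddle point at (0, 0).
mu, n = 1.0, 4
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))

def operator(z):
    """Monotone operator G(z) = (grad_x f, -grad_y f) at z = (x, y)."""
    x, y = z[:n], z[n:]
    return np.concatenate([mu * x + A @ y, -(A.T @ x - mu * y)])

def extragradient(z0, step, iters):
    """Base method (a stand-in for the tensor method of the paper)."""
    z = z0.copy()
    for _ in range(iters):
        z_half = z - step * operator(z)
        z = z - step * operator(z_half)
    return z

def restarted(z0, step, inner_iters, restarts):
    """Restart technique: rerun the base method from its own output.
    On strongly monotone problems each restart contracts the distance
    to the saddle point, which yields a linear (geometric) rate overall."""
    z = z0
    for _ in range(restarts):
        z = extragradient(z, step, inner_iters)
        print(np.linalg.norm(z))  # distance to the saddle point (0, 0)
    return z

restarted(np.ones(2 * n), step=0.1, inner_iters=50, restarts=5)
```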
- Methodology of aircraft icing calculation in a wide range of climate and speed parameters. Applicability within the NLG-25 airworthiness standards
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 957-978

Certification of a transport airplane for flights under icing conditions in Russia used to be carried out within the framework of the requirements of Annex C to the AP-25 Aviation Rules. The new Russian certification document "Airworthiness Standards" (NLG-25), in force since 2023 as a replacement for AP-25, introduces Appendix O. A feature of Appendix O is the need to carry out calculations under conditions of high liquid water content and with large water drops (500 microns or more). With such parameters of the dispersed flow, physical processes such as the disruption and splashing of a water film hit by large drops become decisive, and the flow of the dispersed medium is essentially polydisperse. This paper describes the modifications of the IceVision technique, implemented on the basis of the FlowVision software package, for ice accretion calculations within the framework of Appendix O.
The main difference between the IceVision technique and the known approaches is the use of the Volume of Fluid (VOF) technology for tracking changes in the ice shape. The external flow around the aircraft is calculated simultaneously with the growth of ice and its heating. Ice is explicitly present in the computational domain, and the heat transfer equation is solved inside it. Unlike the Lagrangian approaches, the Eulerian computational grid is not completely rebuilt in the IceVision technique: only the cells containing the contact surface are changed.
The IceVision 2.0 version accounts for film stripping, as well as for the bouncing and splashing of impinging drops on the surfaces of the aircraft and the ice. The diameter of secondary droplets is calculated using known empirical correlations. The velocity of the water film flowing over the surface is determined taking into account the action of aerodynamic forces, gravity, the hydrostatic pressure gradient and the surface tension force. Accounting for surface tension produces the effect of film contraction, which leads to the formation of water flows in the form of rivulets and of ice deposits in the form of comb-like growths. An energy balance relation is fulfilled on the ice surface that takes into account the energy of impinging drops, heat exchange between ice and air, and the heat of crystallization, evaporation, sublimation and condensation. The paper presents the results of solving benchmark and model problems, demonstrating the effectiveness of the IceVision technique and the reliability of the obtained results.
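A minimal sketch of a Messinger-type surface energy balance of the kind described above, assuming steady state on a unit control area; the term list, signs and coefficient values follow common icing-model conventions rather than the actual IceVision 2.0 formulation, and all numbers are hypothetical.

```python
# Messinger-type energy balance on an icing surface (illustrative only).
# Positive terms heat the surface, negative terms cool it; in a steady
# state their sum is zero, which fixes the freezing mass rate.

L_FUSION = 3.34e5  # J/kg, latent heat of crystallization of water
C_WATER = 4186.0   # J/(kg*K), specific heat of liquid water

def energy_balance_residual(m_imp, m_freeze, h_conv, T_surf, T_air,
                            T_drop, v_drop, q_evap):
    """Residual of the surface energy balance, W per unit area.

    m_imp    -- impinging water mass flux, kg/(m^2*s)
    m_freeze -- freezing mass flux, kg/(m^2*s)
    h_conv   -- convective heat transfer coefficient, W/(m^2*K)
    q_evap   -- evaporation/sublimation heat flux, W/m^2 (given here;
                in a real model it comes from a mass-transfer analogy)
    """
    q_kinetic = 0.5 * m_imp * v_drop**2               # kinetic heating by drops
    q_freeze = L_FUSION * m_freeze                    # release of heat of fusion
    q_sensible = m_imp * C_WATER * (T_drop - T_surf)  # warming/cooling by drops
    q_conv = h_conv * (T_surf - T_air)                # convective loss to the air
    return q_kinetic + q_freeze + q_sensible - q_conv - q_evap

# Example: find the freezing flux that closes the balance at T_surf = 0 C.
known = dict(m_imp=0.02, h_conv=200.0, T_surf=273.15, T_air=268.15,
             T_drop=268.15, v_drop=80.0, q_evap=800.0)
m_freeze = (known["h_conv"] * (known["T_surf"] - known["T_air"])
            + known["q_evap"]
            - 0.5 * known["m_imp"] * known["v_drop"]**2
            - known["m_imp"] * C_WATER * (known["T_drop"] - known["T_surf"])
            ) / L_FUSION
print(m_freeze)  # kg/(m^2*s) that must freeze for the balance to close
```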
- Biomathematical system of the nucleic acids description
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434

The article is devoted to the application of various methods of mathematical analysis, the search for patterns, and the study of the nucleotide composition of DNA sequences at the genomic level. New methods of mathematical biology that make it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research builds on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of the new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the selection of the parameters to be taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of "focusing" and allows one to explore genetic structures at various scales. Visualization includes the choice of the axes of the coordinate system and of the method of visual display. The algorithms presented in this work are put forward as a new toolkit for the development of research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms in order to identify interspecific relationships. The new concept allows one to visually and numerically assess the variability of the physicochemical parameters of nucleotide sequences. It also makes it possible to relate the parameters of DNA and RNA molecules to fractal geometric mosaics, and it reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: "genometry" as a methodology of computational strategies and "genometrica" as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
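As a toy illustration of the parameterization and scaling stages, the sketch below maps each nucleotide to a simple physicochemical parameter (the number of hydrogen bonds in its Watson-Crick pair) and plots the cumulative walk of that parameter along the sequence at several scales; the choice of parameter and the walk-style display are assumptions of this sketch, not the article's actual algorithms.

```python
import numpy as np
import matplotlib.pyplot as plt

# Parameterization: hydrogen-bond count of the Watson-Crick pair
# (A-T: 2 bonds, G-C: 3 bonds), centered so the walk has zero drift
# on a sequence with equal AT/GC content.
H_BONDS = {"A": 2, "T": 2, "U": 2, "G": 3, "C": 3}

def parameter_walk(seq, scale=1):
    """Cumulative sum of the centered parameter, optionally coarse-grained.
    `scale` averages over non-overlapping windows ('focusing' on larger
    structures at the cost of local detail)."""
    values = np.array([H_BONDS[c] - 2.5 for c in seq.upper() if c in H_BONDS])
    if scale > 1:
        n = len(values) // scale * scale
        values = values[:n].reshape(-1, scale).mean(axis=1)
    return np.cumsum(values)

# Hypothetical sequence; real input would be a chromosome-scale FASTA record.
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ATGC"), size=5000))
for scale in (1, 10, 100):  # scaling stage: three levels of 'focus'
    plt.plot(parameter_walk(seq, scale), label=f"scale={scale}")
plt.xlabel("position (in windows)")
plt.ylabel("cumulative GC-bond excess")
plt.legend()
plt.show()
```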
- Comparing the effectiveness of computer mass appraisal methods
Computer Research and Modeling, 2015, v. 7, no. 1, pp. 185-196

Location-based models are one of the directions in building CAMA (computer-assisted mass appraisal) systems. When the location of an object is taken into account using spatial autoregressive models, the structure of the model (the type of spatial autocorrelation, the choice of the "nearest neighbors") cannot always be determined before the model is built. Moreover, in practice there are situations where methods that take into account different price rates depending on the object type and its location are more efficient. In this regard, two questions about spatial methods are important:
– the domains in which the methods are effective;
– the sensitivity of the methods to the choice of the type of spatial model and to the selected number of nearest neighbors.
This article presents a methodology for assessing the effectiveness of computer-assisted valuation of real estate objects, together with the results of testing it on methods that rely on information about object locations.
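A minimal sketch of the kind of "nearest neighbors" sensitivity check discussed above: a toy kNN appraiser whose leave-one-out error is compared across several values of k. The distance metric, the error measure and the synthetic data are assumptions of the sketch.

```python
import numpy as np

def knn_appraise(coords, prices, query, k):
    """Estimate the price of `query` as the inverse-distance-weighted
    average of the prices of its k nearest comparables."""
    d = np.linalg.norm(coords - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)
    return float(np.sum(w * prices[nearest]) / np.sum(w))

# Synthetic market: price declines with distance from a city center at (0, 0).
rng = np.random.default_rng(0)
coords = rng.uniform(-10, 10, size=(500, 2))
prices = 1000.0 - 40.0 * np.linalg.norm(coords, axis=1) + rng.normal(0, 30, 500)

# Sensitivity to the number of nearest neighbors: leave-one-out error per k.
for k in (3, 5, 10, 25, 100):
    errors = []
    for i in range(len(coords)):
        mask = np.arange(len(coords)) != i
        est = knn_appraise(coords[mask], prices[mask], coords[i], k)
        errors.append(abs(est - prices[i]))
    print(k, round(float(np.mean(errors)), 1))  # error should grow for large k
```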
- A novel method of stylometry based on the statistic of numerals
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 837-850

A new method of statistical analysis of texts is suggested. The frequency distribution of the first significant digits in the numerals of English-language texts is considered. We have taken into account cardinal as well as ordinal numerals, expressed both in figures and verbally. To isolate the author's own use of numerals, we previously deleted from the text all idiomatic expressions and set phrases accidentally containing numerals, as well as itemizations, page numbers, etc. Benford's law is found to hold approximately for the frequencies of the various first significant digits in compound literary texts by different authors; a marked predominance of the digit 1 is observed. In coherent authorial texts, characteristic deviations from Benford's law arise; these are statistically stable, significant author peculiarities that, under certain conditions, make it possible to address the problem of authorship and to distinguish between texts by different authors. The text should be large enough (at least about 200 kB). Toward the end of the digit row $\{1, 2, \ldots, 9\}$, the frequency distribution is subject to strong fluctuations and is thus unrepresentative for our purpose. A theoretical explanation of the observed empirical regularity is not attempted here, which, however, does not preclude the applicability of the proposed methodology to text attribution. The suggested approach and the conclusions are backed by examples of the computer analysis of works by W. M. Thackeray, M. Twain, R. L. Stevenson, J. Joyce, the Brontë sisters, and J. Austen. On the basis of the suggested technique, we examined the authorship of a text earlier ascribed to L. F. Baum (the result agrees with that obtained by different means). We have shown that the authorship of Harper Lee's "To Kill a Mockingbird" pertains to her, whereas the primary draft, "Go Set a Watchman", seems to have been written in collaboration with Truman Capote. All results are confirmed on the basis of the parametric Pearson's chi-squared test as well as the non-parametric Mann-Whitney U test and Kruskal-Wallis test.
Keywords: text attribution, first significant digit of numerals.
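A minimal sketch of the first-significant-digit statistic underlying the method, restricted for brevity to numerals written in figures (the article also counts verbal and ordinal numerals, and filters out idioms and page numbers, which this sketch does not do).

```python
import re
from collections import Counter

from scipy.stats import chisquare  # Pearson's chi-squared goodness-of-fit

BENFORD = [0.30103, 0.17609, 0.12494, 0.09691, 0.07918,
           0.06695, 0.05799, 0.05115, 0.04576]  # P(d) = log10(1 + 1/d)

def first_digit_counts(text):
    """Count first significant digits of all numerals written in figures."""
    counts = Counter()
    for num in re.findall(r"\d+", text):
        first = num.lstrip("0")[:1]  # drop leading zeros, keep first digit
        if first:
            counts[int(first)] += 1
    return [counts.get(d, 0) for d in range(1, 10)]

sample = "Chapter 3 began in 1847; 12 men, 105 horses and 1200 acres..."
obs = first_digit_counts(sample)
total = sum(obs)
expected = [p * total for p in BENFORD]
# With a real corpus (>= ~200 kB of text) the test below becomes meaningful.
stat, pvalue = chisquare(obs, f_exp=expected)
print(obs, round(stat, 2), round(pvalue, 3))
```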
- Methods of evaluating the effectiveness of systems for computing resources monitoring
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 661-668

This article discusses the contribution of a computing resources monitoring system to the work of a distributed computing system. A method for evaluating this contribution and the performance of the monitoring system, based on measures of certainty about the state of the controlled system, is proposed. The application of this methodology to the design and development of the local monitoring system of the Central Information and Computing Complex of the Joint Institute for Nuclear Research is described.
- Schools on mathematical biology 1973–1992
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 411-422

This is a brief review of the subjects, and an impression of some talks, given at the Schools on modelling complex biological systems. Those Schools reflected a logical progress in this way of thinking in our country and provided a place for collective "brain-storming" inspired by prominent scientists of the last century, such as A. A. Lyapunov, N. V. Timofeeff-Ressovsky, and A. M. Molchanov. At the Schools, general issues of the methodology of mathematical modeling in biology and ecology were raised in the form of heated debates, and the fundamental principles of how the structure of matter is organized and how complex biological systems function and evolve were discussed. The Schools served as an important example of interdisciplinary interaction between scientists of distinct perceptions of the World, of distinct approaches and modes of reaching the boundaries of the Unknown, rather than merely of different specializations. What brought together the mathematicians and biologists attending the Schools was the common understanding that the alliance should be fruitful. Reported in the issues of the School proceedings, the presentations, discussions, and reflections have not lost their relevance so far and may serve as guidance for the new generation of scientists.
- Changepoint detection on financial data using deep learning approach
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed change point detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.
To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. In order to improve the quality of training and obtain more accurate results, a feature generation methodology was developed to form the features that serve as input data for the neural network. These features, in turn, were derived from an analysis of the mathematical expectations and standard deviations of the time series data over specific intervals. The potential for combining these features to achieve more stable results is also investigated.
The results of the model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that are widely used in practice. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data of the S&P 500 index to assess its effectiveness in a real financial context.
Alongside the description of the principles of the model's operation, possibilities for its further improvement are considered, including modernization of the proposed model's structure, optimization of training data generation, and feature formation. Additionally, the authors aim to advance existing concepts for real-time changepoint detection.
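A minimal sketch of the kind of feature generation described above: per-window means and standard deviations computed over several interval lengths and stacked into a feature vector per time step. The window sizes, the log-return transform and the synthetic change point are assumptions of the sketch, not the authors' proprietary method.

```python
import numpy as np

def rolling_stats(x, window):
    """Trailing mean and standard deviation of `x` over `window` steps."""
    means = np.full(len(x), np.nan)
    stds = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        chunk = x[t - window + 1 : t + 1]
        means[t], stds[t] = chunk.mean(), chunk.std()
    return means, stds

def make_features(prices, windows=(5, 20, 60)):
    """Stack per-window statistics into one feature vector per time step.
    Returns an array of shape (len(prices), 2 * len(windows))."""
    returns = np.diff(np.log(prices), prepend=np.log(prices[0]))
    cols = []
    for w in windows:
        m, s = rolling_stats(returns, w)
        cols.extend([m, s])
    return np.column_stack(cols)

# Synthetic series with a change point at t = 500: volatility doubles there.
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0, 0.01, 500), rng.normal(0, 0.02, 500)])
prices = 100 * np.exp(np.cumsum(r))
X = make_features(prices)            # rows with NaNs (warm-up period) are
X = X[~np.isnan(X).any(axis=1)]      # dropped before feeding the network
print(X.shape)
```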
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"