Search results for 'identifiability analysis':
Articles found: 42
  1. Varshavskiy A.E.
    A model for analyzing income inequality based on a finite functional sequence (adequacy and application problems)
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 675-689

    The paper examines the adequacy of a model developed earlier by the author for analyzing income inequality. The model is based on an empirically confirmed hypothesis: the incomes of the 20% population groups, taken relative to the income of the richest group, can be represented as a finite functional sequence whose members each depend on a single parameter, a specially defined inequality indicator. It is shown that, in addition to existing methods of inequality analysis, the model provides analytical expressions for estimating the income shares of 20%, 10% and smaller population groups at different levels of inequality, for tracing how those shares change as inequality grows, and for estimating the level of inequality from known ratios between the incomes of different population groups.

    The paper provides a more detailed confirmation of the proposed model's adequacy than the previously obtained results of the statistical analysis of empirical data on the income distribution between the 20% and 10% population groups. The confirmation is based on the analysis of specific ratios between quintile and decile values implied by the model. These ratios were verified against a data set covering a large number of countries, and the resulting estimates confirm the sufficiently high accuracy of the model.

    Data are presented confirming that the model can be used to analyze how the income distribution across population groups depends on the level of inequality, and to estimate the inequality indicator from income ratios between different groups. The cases considered include the richest 20% earning as much as the poorest 60%, as much as the 40% middle class, or as much as the remaining 80% of the population; the richest 10% earning as much as the poorest 40%, 50% or 60%, or as much as various middle-class groups; and the cases when the income distribution obeys harmonic proportions and when the quintiles and deciles corresponding to the middle class reach their maximum. It is shown that the income shares of the richest middle-class groups are relatively stable and reach a maximum at certain levels of inequality.
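
    The abstract does not reproduce the model's functional form, so the following Python sketch is purely illustrative: it assumes a geometric sequence whose single parameter k stands in for the inequality indicator, and shows how quintile income shares and inter-group ratios could be computed analytically from one parameter.

# Illustrative sketch only: the functional form below (a geometric sequence with a
# single parameter k) is an assumption standing in for the model described above.
def quintile_shares(k: float) -> list[float]:
    """Income shares of the five 20% groups, from poorest to richest.

    Relative incomes (with respect to the richest group) are taken as k**i,
    i = 4, ..., 0, where 0 < k <= 1 and k = 1 corresponds to full equality.
    """
    relative = [k ** i for i in range(4, -1, -1)]   # poorest ... richest
    total = sum(relative)
    return [r / total for r in relative]


if __name__ == "__main__":
    for k in (1.0, 0.7, 0.4):
        s = quintile_shares(k)
        ratio = s[4] / sum(s[:3])                   # richest 20% vs. poorest 60%
        print(f"k = {k}: shares = {[round(v, 3) for v in s]}, top20/bottom60 = {ratio:.2f}")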

    The results obtained with the help of the model can be used to determine the standards for a policy of gradually increasing the level of progressive taxation in order to move to the level of inequality typical of countries with a socially oriented economy.

  2. Stepanyan I.V.
    Biomathematical system of the nucleic acids description
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434

    The article is devoted to the application of various methods of mathematical analysis to searching for patterns and studying the composition of nucleotides in DNA sequences at the genomic level. New methods of mathematical biology that make it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research builds on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of the new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the selection of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of “focusing” and allows genetic structures to be explored at various scales. Visualization includes the choice of the coordinate axes and of the method of visual display. The algorithms presented in this work are put forward as a new toolkit for developing research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms and identifying interspecific relationships. The new concept allows the variability of the physicochemical parameters of nucleotide sequences to be assessed visually and numerically. It also makes it possible to relate the parameters of DNA and RNA molecules to fractal geometric mosaics, and it reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: “genometry” as a methodology of computational strategies and “genometrica” as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
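
    As a loose illustration of the parameterization and scaling stages (not of Petukhov's algebraic constructions themselves), the sketch below maps each nucleotide to two standard binary physicochemical attributes and averages them over windows, producing coordinates that could then be visualized; the mapping itself is an assumption made for illustration.

# Illustrative sketch of the parameterization/scaling stages described above.
# The binary attributes (purine vs. pyrimidine, 3 vs. 2 hydrogen bonds) are standard
# facts, but this simplified mapping is a stand-in, not the algebraic-biology method.
PURINE = {"A": 1, "G": 1, "C": 0, "T": 0}      # purine = 1, pyrimidine = 0
STRONG = {"G": 1, "C": 1, "A": 0, "T": 0}      # 3 H-bonds = 1, 2 H-bonds = 0

def parameterize(seq: str) -> list[tuple[int, int]]:
    """Map each nucleotide to a pair of binary physicochemical parameters."""
    return [(PURINE[n], STRONG[n]) for n in seq.upper() if n in PURINE]

def scale(points, window: int):
    """'Focusing': average the parameters over non-overlapping windows."""
    out = []
    for i in range(0, len(points) - window + 1, window):
        chunk = points[i:i + window]
        out.append((sum(p[0] for p in chunk) / window,
                    sum(p[1] for p in chunk) / window))
    return out

if __name__ == "__main__":
    demo = "ATGCGGCTATTAGCCGGATACGT"
    print(scale(parameterize(demo), window=4))   # coordinates ready for plotting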

  3. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations whose parameters are the reaction rate constants. The mathematical modeling of the process is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff systems of ordinary differential equations are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. When the parameters are estimated (the inverse problem), the question arises of the non-uniqueness of the parameter set that satisfies the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by methods of differential and linear algebra, which shows how strongly the unknown parameters of the model depend on the given measurements. The analysis of sensitivity and identifiability showed that the parameters of the model are stably determined from the given set of experimental data. The article presents a list of the model parameters ordered from most to least identifiable. Taking the identifiability analysis of the mathematical model into account, restrictions were introduced on the search for the less identifiable parameters when solving the inverse problem.
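
    A minimal sketch of an orthogonal, sensitivity-based identifiability ranking is given below; it assumes the sensitivity matrix S (rows: measurements, columns: parameters) has already been computed, and the exact procedure used in the paper may differ in its details.

import numpy as np

def rank_identifiability(S: np.ndarray, tol: float = 1e-8) -> list[int]:
    """Rank parameters from most to least identifiable by orthogonalizing the
    columns of the sensitivity matrix S (rows: measurements, columns: parameters).
    A sketch of the orthogonal method; the paper's exact procedure may differ."""
    n_params = S.shape[1]
    remaining = list(range(n_params))
    selected: list[int] = []
    basis = np.zeros((S.shape[0], 0))
    while remaining:
        # Residual of each remaining column after projection onto the chosen columns.
        residual_norms = []
        for j in remaining:
            col = S[:, j]
            if basis.shape[1] > 0:
                coeffs, *_ = np.linalg.lstsq(basis, col, rcond=None)
                col = col - basis @ coeffs
            residual_norms.append(np.linalg.norm(col))
        best = int(np.argmax(residual_norms))
        if residual_norms[best] < tol:    # the rest are practically non-identifiable
            selected.extend(remaining)
            break
        j = remaining.pop(best)
        selected.append(j)
        basis = np.column_stack([basis, S[:, j]])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.normal(size=(30, 6))
    S[:, 5] = 0.5 * S[:, 0] + 0.5 * S[:, 1]   # a linearly dependent (weakly identifiable) parameter
    print(rank_identifiability(S))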

    The inverse problem of estimating the parameters was solved using a genetic algorithm. The article presents the optimal values found for the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane and of the main and by-products of the reaction on temperature is presented for different flow rates of the mixture. The conclusion about the adequacy of the constructed mathematical model is made on the basis of the correspondence of the results obtained to physicochemical laws and to the experimental data.
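
    The sketch below illustrates the inverse-problem setup on a toy two-step kinetic scheme (not the 60-parameter propane model) and uses SciPy's differential evolution, an evolutionary optimizer related to, but not identical with, the genetic algorithm applied in the paper.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Toy first-order chain A -> B -> C standing in for the real 60-parameter scheme.
def kinetics(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def simulate(k, t_eval):
    sol = solve_ivp(kinetics, (t_eval[0], t_eval[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_eval, args=tuple(k), method="LSODA")
    return sol.y

t_obs = np.linspace(0.0, 5.0, 20)
k_true = (1.2, 0.4)
observed = simulate(k_true, t_obs) + np.random.default_rng(1).normal(0, 0.01, (3, t_obs.size))

def misfit(k):
    """Sum of squared deviations between simulated and 'experimental' concentrations."""
    return float(np.sum((simulate(k, t_obs) - observed) ** 2))

# Evolutionary search over bounded rate constants.
result = differential_evolution(misfit, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=0)
print(result.x, result.fun)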

  4. Borisova L.R., Kuznetsova A.V., Sergeeva N.V., Sen'ko O.V.
    Comparison of Arctic zone RF companies with different Polar Index ratings by economic criteria with the help of machine learning tools
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 201-215

    The paper presents a comparative analysis of the enterprises of the Arctic Zone of the Russian Federation (AZ RF) by economic indicators in accordance with the Polar Index rating. The study includes numerical data on 193 enterprises located in the AZ RF. Machine learning methods are applied, both standard open-source ones and the authors' original methods: the method of Optimally Reliable Partitions (ORP) and the method of Statistically Weighted Syndromes (SWS). Partitions were selected by the maximum value of a quality functional; the study used the simplest family of one-dimensional partitions with a single boundary point, as well as a family of two-dimensional partitions with one boundary point on each of the two combined variables. Permutation tests make it possible not only to evaluate the reliability of the revealed regularities, but also to exclude partitions of excessive complexity from the set of revealed regularities. Patterns connecting the class label and economic indicators are revealed using the SDT method on one-dimensional indicators. The regularities revealed within the simplest one-dimensional model with one boundary point and with significance no worse than p < 0.001 are also presented in the study. The sliding control (cross-validation) method was used for a reliable evaluation of the diagnostic ability. As a result, a set of methods with sufficient effectiveness was identified. The collective method based on the results of several machine learning methods showed the high importance of the economic indicators for dividing the enterprises in accordance with the Polar Index rating. The study showed that the companies included in the top of the Polar Index rating are, on the whole, distinguishable by financial indicators among all companies in the Arctic Zone. However, it would be useful to supplement the list of indicators with ecological and social criteria.
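
    The sketch below shows the general idea of a one-dimensional partition with a single boundary point whose quality is checked by a permutation test; it is a generic illustration, not the ORP or SWS implementation used in the paper.

import numpy as np

def best_split_quality(x: np.ndarray, y: np.ndarray) -> float:
    """Quality of the best one-dimensional partition with a single boundary point:
    the largest absolute difference in class-1 frequency between the two halves."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = 0.0
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        best = max(best, abs(left.mean() - right.mean()))
    return best

def permutation_pvalue(x, y, n_perm: int = 2000, seed: int = 0) -> float:
    """Permutation test: how often a random relabeling matches the observed quality."""
    rng = np.random.default_rng(seed)
    observed = best_split_quality(x, y)
    hits = sum(best_split_quality(x, rng.permutation(y)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    y = rng.integers(0, 2, 120)                  # class labels (e.g. rating groups)
    x = rng.normal(size=120) + 0.8 * y           # one economic indicator
    print("p-value:", permutation_pvalue(x, y))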

  5. Kovalenko I.B., Dreval V.D., Fedorov V.A., Kholina E.G., Gudimchuk N.B.
    Microtubule protofilament bending characterization
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 435-443

    This work is devoted to the analysis of conformational changes in tubulin dimers and tetramers, in particular to the assessment of the bending of microtubule protofilaments. Three recently used approaches for estimating the bending of tubulin protofilaments are reviewed: (1) measurement of the angle between the vector passing through the H7 helices in the $\alpha$- and $\beta$-tubulin monomers in the straight structure and the same vector in the curved structure of tubulin; (2) measurement of the angle between the vector connecting the centers of mass of a subunit and its associated GTP nucleotide and the vector connecting the centers of mass of the same nucleotide and the adjacent tubulin subunit; (3) measurement of the three rotation angles of the bent tubulin subunit relative to the straight subunit. Quantitative estimates of the angles at the intra- and inter-dimer interfaces of tubulin in published crystal structures, calculated according to the three metrics, are presented. The intra-dimer angles measured by method (3), both within one structure and across different structures, were more similar to each other, which indicates a lower sensitivity of this method to local changes in tubulin conformation and characterizes it as more robust. Measuring the angle of curvature between the H7 helices (method 1) produces somewhat underestimated values of the curvature per dimer. Method (2), while at first glance yielding bending angle values consistent with estimates of curved protofilaments from cryo-electron microscopy, significantly overestimates the angles in the straight structures. For the structures of tubulin tetramers in complex with the stathmin protein, the bending angles calculated with all three metrics varied quite significantly between the first and second dimers (by up to 20% or more), which indicates the sensitivity of all the metrics to slight variations in the conformation of tubulin dimers within these complexes. A detailed description of the procedures for measuring the bending of tubulin protofilaments, together with an account of the advantages and disadvantages of the various metrics, will increase the reproducibility and clarity of the analysis of tubulin structures in the future and will hopefully make it easier to compare the results obtained by different scientific groups.
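
    Metric (1) reduces to the angle between two reference vectors; a minimal NumPy sketch is given below, with hypothetical coordinates in place of points extracted from real PDB structures.

import numpy as np

def bending_angle(v_straight: np.ndarray, v_curved: np.ndarray) -> float:
    """Angle in degrees between the reference vector of the straight structure
    and the corresponding vector of the curved structure (metric 1)."""
    cos = np.dot(v_straight, v_curved) / (np.linalg.norm(v_straight) * np.linalg.norm(v_curved))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical endpoint coordinates of the H7-helix axis (not taken from a real structure).
straight = np.array([0.0, 0.0, 40.0]) - np.array([0.0, 0.0, 0.0])
curved = np.array([0.0, 8.0, 39.2]) - np.array([0.0, 0.0, 0.0])
print(f"bending angle: {bending_angle(straight, curved):.1f} deg")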

  6. Gesture recognition is an urgent challenge in developing human-machine interface systems. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals in order to identify the most effective one. Methods such as the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree) were considered. Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the field of view of a camera and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device for recording the EMG signal was developed, which includes three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, flexing the index finger, and waving the hand from right to left. Accuracy, precision, recall and execution time were used to evaluate the effectiveness of the classifiers. These parameters were calculated for three options for the placement of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble over the three electrode positions was 81.55%. The electrode position at which the machine learning methods achieve maximum accuracy was also determined. In this position, one of the differential electrodes is located at the intersection of the flexor digitorum profundus and the flexor pollicis longus, and the second one above the flexor digitorum superficialis.
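
    A minimal sketch of such a classifier comparison is given below; the feature matrix is a random placeholder for feature vectors extracted from EMG windows, and scikit-learn is assumed as the implementation.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholder data: rows are feature vectors from EMG windows, labels are 5 gestures.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 5, 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "NBC": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(name,
          f"acc={accuracy_score(y_test, y_pred):.3f}",
          f"prec={precision_score(y_test, y_pred, average='macro', zero_division=0):.3f}",
          f"rec={recall_score(y_test, y_pred, average='macro', zero_division=0):.3f}")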

  7. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Gorbachev R.A.
    Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183

    Social media posts play an important role in reflecting the state of the financial market, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on the movement of the financial market. The top authoritative influencers are selected, and their Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the raw text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered. The difference between the influence of a single tweet and that of a whole package of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics defined by the significance of a particular word in identifying the relationship between price changes and Twitter posts are introduced. Frequency analysis involves studying the distributions of occurrence of various words and bigrams in the text for positive, negative or general trends. To build the markup, changes in the market are converted into a binary vector using various parameters, thus setting up a binary classification task. The parameters of Binance candlesticks are tuned to better describe the movement of the cryptocurrency market; their variability is also explored in this article. Sentiment is studied using Stanford Core NLP. The results of the statistical analysis are relevant to feature selection for subsequent binary or multiclass classification tasks. The presented methods of text analysis help increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
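
    The sketch below illustrates the regex-based cleanup and the word/bigram frequency tables split by a binary market label; the Stanza and Stanford Core NLP steps are omitted, and the sample tweets are invented.

import re
from collections import Counter

def clean_tweet(text: str) -> list[str]:
    """Regex-based cleanup of a raw tweet: strip URLs, mentions and punctuation."""
    text = re.sub(r"https?://\S+|@\w+", " ", text.lower())
    return re.findall(r"[a-z$#]+", text)

def frequency_tables(tweets: list[str], labels: list[int]):
    """Word and bigram counts for the positive (1) and negative (0) market labels."""
    words = {0: Counter(), 1: Counter()}
    bigrams = {0: Counter(), 1: Counter()}
    for text, label in zip(tweets, labels):
        tokens = clean_tweet(text)
        words[label].update(tokens)
        bigrams[label].update(zip(tokens, tokens[1:]))
    return words, bigrams

if __name__ == "__main__":
    tweets = ["BTC to the moon! https://t.co/x @trader", "bearish on btc today, selling"]
    labels = [1, 0]
    words, bigrams = frequency_tables(tweets, labels)
    print(words[1].most_common(3), bigrams[0].most_common(2))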

  8. Zenkov A.V.
    A novel method of stylometry based on the statistic of numerals
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 837-850

    A new method of statistical analysis of texts is suggested. The frequency distribution of the first significant digits in the numerals of English-language texts is considered. We have taken into account cardinal as well as ordinal numerals, expressed both in figures and verbally. To isolate the author’s own use of numerals, we previously deleted from the text all idiomatic expressions and set phrases that happen to contain numerals, as well as itemizations, page numbers, etc. Benford’s law is found to hold approximately for the frequencies of the various first significant digits in composite literary texts by different authors; a marked predominance of the digit 1 is observed. In coherent authorial texts, characteristic deviations from Benford’s law arise; these are statistically stable and significant author peculiarities that, under certain conditions, allow the problem of authorship to be considered and texts by different authors to be distinguished. The text should be large enough (at least about 200 kB). Towards the end of the digit row $\{1, 2, \ldots, 9\}$ the frequency distribution is subject to strong fluctuations and is thus unrepresentative for our purpose. A theoretical explanation of the observed empirical regularity is not attempted, which, however, does not preclude the applicability of the proposed methodology to text attribution. The suggested approach and the conclusions are backed by examples of the computer analysis of works by W.M. Thackeray, M. Twain, R.L. Stevenson, J. Joyce, the Brontë sisters, and J. Austen. On the basis of the suggested technique, we examined the authorship of a text earlier ascribed to L.F. Baum (the result agrees with that obtained by different means). We have shown that the authorship of Harper Lee’s “To Kill a Mockingbird” pertains to her, whereas the primary draft, “Go Set a Watchman”, seems to have been written in collaboration with Truman Capote. All results are confirmed by the parametric Pearson chi-squared test as well as by the non-parametric Mann–Whitney U test and Kruskal–Wallis test.
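
    The core of the method can be sketched as follows: count the first significant digits of numerals written in figures and compare their frequencies with Benford’s law using Pearson’s chi-squared test (the handling of verbally expressed numerals and the filtering of idioms described above are omitted).

import math
import re
from collections import Counter
from scipy.stats import chisquare

def first_digit_counts(text: str) -> Counter:
    """Count first significant digits of numerals written in figures."""
    counts = Counter()
    for num in re.findall(r"\d+", text):
        first = num.lstrip("0")[:1]
        if first:
            counts[int(first)] += 1
    return counts

def benford_chi2(counts: Counter):
    """Pearson's chi-squared test of observed digit frequencies against Benford's law."""
    total = sum(counts.values())
    observed = [counts.get(d, 0) for d in range(1, 10)]
    expected = [total * math.log10(1 + 1 / d) for d in range(1, 10)]
    return chisquare(observed, f_exp=expected)

if __name__ == "__main__":
    sample = "In 1862 he earned 120 pounds, then 14, then 1900, 21, 35 and 108."
    print(benford_chi2(first_digit_counts(sample)))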

  9. Mitin N.A., Orlov Y.N.
    Statistical analysis of bigrams of specialized texts
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254

    The method of stochastic matrix spectral analysis is used to build an indicator that makes it possible to determine the subject of scientific texts without the use of keywords. The matrix in question is the matrix of conditional probabilities of bigrams, built from the statistics of alphabet characters in the text with spaces, digits and punctuation marks removed. Scientific texts are classified according to the mutual arrangement of invariant subspaces of the matrix of conditional probabilities of pairs of letter combinations. The separation indicator is the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter, which is the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of given radius in the complex plane. The divergence of this integral indicates that the integration contour approaches an eigenvalue of the matrix. The paper presents the typical distribution of the indicator for identifying specialties. For the statistical analysis, dissertations on the 19 main specialties were analyzed, 20 texts per specialty, without taking the classification within a specialty into account. It was found that the empirical distributions of the cosine of the angle for the mathematical and humanities specialties do not have a common domain, so they can be formally separated by the value of this indicator without errors. Although the corpus of texts was not particularly large, an identification error at the level of 2% for an arbitrary selection of dissertations seems to be a very good result compared to methods based on semantic analysis. It was also found that it is possible to build a text pattern for each of the specialties in the form of a reference bigram matrix, in whose vicinity (in the norm of summable functions) the topic of a written scientific work can be accurately identified without using keywords. The proposed method can be used as a comparative indicator of the greater or lesser rigor of a scientific text or as an indicator of the text’s compliance with a certain scientific level.
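
    A simplified sketch of the construction is given below: the conditional-probability matrix of letter bigrams is built from a text and the cosine between a right and a left eigenvector is computed; the resolvent-based dichotomy parameter and the exact eigenvector choice made in the paper are not reproduced.

import numpy as np
from string import ascii_lowercase

def bigram_matrix(text: str) -> np.ndarray:
    """Conditional probabilities P(next letter | current letter) over a-z,
    ignoring spaces, digits and punctuation."""
    letters = [c for c in text.lower() if c in ascii_lowercase]
    idx = {c: i for i, c in enumerate(ascii_lowercase)}
    counts = np.zeros((26, 26))
    for a, b in zip(letters, letters[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return counts / row_sums

def eigvec_cosine(P: np.ndarray) -> float:
    """Cosine between the right eigenvector of the maximum-modulus eigenvalue and the
    left eigenvector of the minimum-modulus eigenvalue (real parts taken for
    simplicity; a stand-in for the paper's separation indicator)."""
    vals_r, right = np.linalg.eig(P)
    vals_l, left = np.linalg.eig(P.T)        # columns of `left` are left eigenvectors of P
    v = np.real(right[:, np.argmax(np.abs(vals_r))])
    u = np.real(left[:, np.argmin(np.abs(vals_l))])
    return float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))

if __name__ == "__main__":
    sample = "the spectral portrait of a text can be built from its letter bigram statistics"
    print(eigvec_cosine(bigram_matrix(sample)))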

  10. Malkov S.Yu., Davydova O.I.
    Modernization as a global process: the experience of mathematical modeling
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859-873

    The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world from the beginning of the 19th century to the present. The population and GDP of a number of countries over the period 1500–2016 were selected as indicators characterizing this long-term demographic and economic dynamics. The countries were chosen so as to include representatives with different levels of development (developed and developing countries) and from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The model is an autonomous system of differential equations that describes the processes of socio-economic modernization, including the transition from an agrarian society to an industrial and post-industrial one. It embodies the idea that modernization begins with the emergence, in a traditional society, of an innovation sector developing on the basis of new technologies. The population gradually moves from the traditional sector to the innovation sector, and modernization is complete when most of the population has moved to the innovation sector.

    Statistical methods of data processing and Big Data methods, including hierarchical clustering, were used. Using the developed algorithm based on the random descent method, the parameters of the model were identified and verified against the empirical series, and the model was tested using statistical data reflecting the changes observed in developed and developing countries during the modernization of the past centuries. Testing demonstrated the high quality of the model: the deviations of the calculated curves from the statistical data are usually small and occur mainly during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to identify general patterns and to formalize them in the form of a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
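
    The sketch below shows the kind of autonomous dynamics the abstract describes, with the population share of the innovation sector growing logistically until modernization is complete; the equation and its parameters are illustrative assumptions, not the authors' model, and fitting them to empirical series (e.g., by random descent) is left out.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only: a logistic transition of the population share s from the
# traditional to the innovation sector; the authors' actual system of equations and
# its parameters are not given in the abstract.
def transition(t, y, r):
    s = y[0]                      # share of the population in the innovation sector
    return [r * s * (1.0 - s)]    # modernization completes as s approaches 1

t_span = (0.0, 200.0)             # years since the start of modernization
sol = solve_ivp(transition, t_span, [0.02], args=(0.06,),
                t_eval=np.linspace(*t_span, 9))
for t, s in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f}  innovation-sector share = {s:.2f}")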
