- Statistical analysis of bigrams of specialized texts
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254

Spectral analysis of a stochastic matrix is used to build an indicator that identifies the subject of a scientific text without the use of keywords. The matrix contains the conditional probabilities of bigrams computed from the statistics of alphabet characters in the text with spaces, digits, and punctuation marks removed. Scientific texts are classified according to the mutual arrangement of invariant subspaces of this matrix of conditional probabilities of letter pairs. The separating indicator is the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter: the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of given radius in the complex plane. Divergence of the integral indicates that the integration contour approaches an eigenvalue of the matrix. The paper presents the typical distribution of the specialty-identification indicator. For the statistical analysis, dissertations in 19 main specialties were examined, 20 texts per specialty, without taking the classification within a specialty into account. It was found that the empirical distributions of the cosine for the mathematical and the humanities specialties have no common domain, so they can be formally separated by this indicator without errors. Although the corpus of texts was not particularly large, an identification error of about 2 % for an arbitrary selection of dissertations appears to be a very good result compared to methods based on semantic analysis. It was also found that a text pattern can be constructed for each specialty in the form of a reference bigram matrix, in whose vicinity (in the norm of summable functions) the theme of a scientific work can be identified accurately without keywords. The proposed method can be used as a comparative indicator of the rigor of a scientific text, or as an indicator of a text's compliance with a certain scientific level.
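To make the construction concrete, here is a minimal sketch that builds the row-stochastic bigram matrix for a Latin-alphabet text and computes the cosine indicator. The pairing of eigenvectors (right eigenvector of the maximum-modulus eigenvalue, left eigenvector of the minimum-modulus one) is one possible reading of the description above, and all names are illustrative; the paper itself works with Cyrillic dissertation texts.

```python
# A sketch of the bigram-matrix indicator, assuming a 26-letter Latin alphabet;
# the eigenvector pairing is one possible reading of the abstract, not a
# reproduction of the paper's exact procedure.
import numpy as np
from scipy.linalg import eig

def bigram_matrix(text: str) -> np.ndarray:
    """Row-stochastic matrix P[i, j] = P(next letter j | current letter i),
    built from the text stripped of everything except letters."""
    letters = [c for c in text.lower() if "a" <= c <= "z"]
    counts = np.zeros((26, 26))
    for a, b in zip(letters, letters[1:]):
        counts[ord(a) - 97, ord(b) - 97] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for unused letters
    return counts / row_sums

def specialty_indicator(P: np.ndarray) -> float:
    """Cosine of the angle between the right eigenvector of the maximum-modulus
    eigenvalue and the left eigenvector of the minimum-modulus eigenvalue."""
    w, vl, vr = eig(P, left=True, right=True)
    r = vr[:, np.argmax(np.abs(w))]
    l = vl[:, np.argmin(np.abs(w))]
    return float(abs(np.vdot(l, r)) / (np.linalg.norm(l) * np.linalg.norm(r)))
```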
- Comparative analysis of statistical methods of scientific publications classification in medicine
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933

This paper compares various methods of machine classification of scientific texts by thematic sections, using publications in specialized medical journals published by Springer as an example. The corpus of texts was studied in five sections: pharmacology/toxicology, cardiology, immunology, neurology, and oncology. Both classification methods based on the analysis of abstracts and keywords and methods based on processing the full texts were considered. Bayesian classification, support vector machines, and reference letter combinations were applied. The most accurate classification turned out to be the method based on a library of reference letter trigrams corresponding to texts of a given subject. For this corpus, the Bayesian method gives an error of about 20 %, the support vector machine an error of about 10 %, and the proximity of the trigram distribution of a text to the subject reference an error of about 5 %, which ranks these methods for use in artificial-intelligence tasks of classifying texts by subject specialty. It is important that the support vector method provides the same accuracy when analyzing abstracts as when analyzing full texts, which matters for reducing the number of operations on large text corpora.
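A minimal sketch of the trigram-reference method is given below, assuming each class reference is the average letter-trigram frequency distribution of its training texts and that proximity is measured in the L1 norm; both assumptions and all names are illustrative rather than taken from the paper.

```python
# Classification by proximity of a text's letter-trigram distribution to
# per-class reference distributions; the L1 norm and the averaging scheme
# are assumptions made for illustration.
from collections import Counter

def trigram_freqs(text: str) -> Counter:
    """Relative frequencies of letter trigrams in the text (letters only)."""
    s = "".join(c for c in text.lower() if c.isalpha())
    counts = Counter(s[i:i + 3] for i in range(len(s) - 2))
    total = sum(counts.values()) or 1
    return Counter({t: n / total for t, n in counts.items()})

def make_reference(texts: list[str]) -> Counter:
    """Class reference: average trigram distribution over training texts."""
    ref = Counter()
    for text in texts:
        for t, f in trigram_freqs(text).items():
            ref[t] += f / len(texts)
    return ref

def classify(text: str, references: dict[str, Counter]) -> str:
    """Assign the class whose reference is nearest in the L1 norm."""
    freqs = trigram_freqs(text)
    def l1(ref: Counter) -> float:
        return sum(abs(freqs[k] - ref[k]) for k in set(freqs) | set(ref))
    return min(references, key=lambda c: l1(references[c]))
```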
- The use of syntax trees in order to automate the correction of LaTeX documents
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 871-883. Citations: 5 (RSCI).

The problem is to automate the correction of LaTeX documents. Each document is represented as a parse tree. A modified Zhang-Shasha algorithm is used to construct a mapping of the tree vertices of the original document to the tree vertices of the edited document that corresponds to the minimum edit distance. The vertex-to-vertex mappings form the training set, which is used to generate rules for automatic correction. For each rule, statistics on its applicability to edited documents are collected and used to assess and improve the quality of the rules.
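For a rough feel of the tree-edit-distance machinery this method builds on, the sketch below uses the open-source zss package, a Python implementation of the Zhang-Shasha algorithm; the toy trees stand in for LaTeX parse trees, and nothing here reproduces the paper's modified algorithm or its rule generation.

```python
# Toy illustration of Zhang-Shasha tree edit distance with the `zss` package
# (pip install zss); the trees stand in for LaTeX parse trees.
from zss import Node, simple_distance

# Original document fragment: \section{A} followed by a paragraph.
original = (Node("document")
            .addkid(Node("section").addkid(Node("A")))
            .addkid(Node("par")))

# Edited fragment: the section title was changed and an itemize block added.
edited = (Node("document")
          .addkid(Node("section").addkid(Node("B")))
          .addkid(Node("par"))
          .addkid(Node("itemize")))

# Minimum number of node insertions, deletions, and relabelings.
print(simple_distance(original, edited))  # -> 2
```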
- Additive regularization of topic models with fast text vectorization
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words, also called a “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed, having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on the fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over the document's words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
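The sketch below shows one way such a one-pass embedding could look, under the assumption that the document-topic vector is aggregated from per-word topic posteriors derived from the word-topic matrix; the paper's actual linear constraint may differ, so this is an illustration, not the ARTM implementation.

```python
# One-pass topical embedding sketch: theta_d is computed from the word-topic
# matrix Phi in a single pass over the document's words. The aggregation rule
# p(t|d) ∝ Σ_w n_wd · p(t|w) is an assumption for illustration only.
import numpy as np

def one_pass_embedding(word_counts: dict[int, float],
                       phi: np.ndarray,          # shape (|W|, |T|): p(w | t)
                       topic_prior: np.ndarray   # shape (|T|,): p(t)
                       ) -> np.ndarray:
    # Per-word topic posterior p(t | w) via Bayes' rule.
    p_tw = phi * topic_prior                               # (|W|, |T|)
    p_tw /= p_tw.sum(axis=1, keepdims=True) + 1e-12
    # Single pass over the document's words.
    theta = np.zeros(phi.shape[1])
    for w, n in word_counts.items():
        theta += n * p_tw[w]
    return theta / theta.sum()                             # normalized embedding
```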
- Rank analysis of the criminal codes of the Russian Federation, the Federal Republic of Germany and the People’s Republic of China
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 969-981

When making decisions in various fields of human activity, text documents often have to be created. Traditionally, the study of texts is the business of linguistics, which in a broad sense can be understood as a part of semiotics, the science of signs and sign systems, whose objects are of various types. The method of rank distributions is widely used for the quantitative study of sign systems. A rank distribution is a set of item names sorted in descending order of occurrence frequency. For frequency-rank distributions, researchers often use the term “power-law distributions”.

In this paper, the rank distribution method is used to analyze the criminal codes of various countries. The general idea of the approach is to consider a code as a text document in which the sign is the measure of punishment for particular crimes. The document is represented as a list of occurrences of a specific word (sign) and its derivatives (word forms). Together these signs form a punishment dictionary, for which the frequency of occurrence of each punishment in the code text is calculated. This makes it possible to transform the constructed dictionary into a frequency dictionary of punishments and to study it further using the approach of V. P. Maslov, originally proposed for problems of linguistics. This approach introduces the concept of the virtual frequency of crime occurrence, a measure of the real harm to society and of the consequences of the committed crime in various spheres of human life. Along these lines, the paper proposes a parametrization of the rank distribution for analyzing the punishment dictionary of the Special Part of the Criminal Code of the Russian Federation concerning punishments for economic crimes. Various versions of the code are considered, and the constructed model is shown to objectively reflect the changes for the better made by legislators over time. For the criminal codes in force in the Federal Republic of Germany and the People’s Republic of China, texts covering similar offenses, analogous to the Russian section of the Special Part, were studied. The rank distributions obtained for the corresponding frequency dictionaries of the codes agree with V. P. Maslov’s law, which essentially refines Zipf’s law. This allows us to conclude both that the texts are well organized and that the punishments chosen for the crimes are adequate.
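As a simple illustration of the frequency-rank machinery (not of Maslov's parametrization, which the paper develops), the sketch below builds a rank distribution for a toy punishment dictionary and fits a power law in log-log coordinates; all counts are invented for the example.

```python
# Frequency-rank distribution and a Zipf-style power-law fit for a toy
# "punishment dictionary"; counts are invented for illustration only.
import numpy as np

punishment_counts = {"fine": 120, "imprisonment": 95, "compulsory labor": 40,
                     "restriction of liberty": 22, "arrest": 9}

# Rank distribution: frequencies sorted in descending order, ranks 1..n.
freqs = np.array(sorted(punishment_counts.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Fit f(r) ≈ C · r^(-alpha) by least squares in log-log coordinates.
slope, log_c = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"power-law exponent: {-slope:.2f}, C = {np.exp(log_c):.1f}")
```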