- Quadratic Padé Approximation: Numerical Aspects and Applications
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1017-1031
Padé approximation is a useful tool for extracting singularity information from a power series. A linear Padé approximant is a rational function and can provide estimates of pole and zero locations in the complex plane. A quadratic Padé approximant has square root singularities and can, therefore, provide additional information such as estimates of branch point locations. In this paper, we discuss numerical aspects of computing quadratic Padé approximants as well as some applications. Two algorithms for computing the coefficients in the approximant are discussed: a direct method involving the solution of a linear system (well-known in the mathematics community) and a recursive method (well-known in the physics community). We compare the accuracy of these two methods when implemented in floating-point arithmetic and discuss their pros and cons. In addition, we extend Luke’s perturbation analysis of linear Padé approximation to the quadratic case and identify the problem of spurious branch points in the quadratic approximant, which can cause a significant loss of accuracy. A possible remedy for this problem is suggested by noting that these troublesome points can be identified by the recursive method mentioned above. Another complication with the quadratic approximant arises in choosing the appropriate branch. One possibility, which is to base this choice on the linear approximant, is discussed in connection with an example due to Stahl. It is also known that the quadratic method is capable of providing reasonable approximations on secondary sheets of the Riemann surface, a fact we illustrate here by means of an example. Two concluding applications show the superiority of the quadratic approximant over its linear counterpart: one involving a special function (the Lambert $W$-function) and the other a nonlinear PDE (the continuation of a solution of the inviscid Burgers equation into the complex plane).
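As a concrete illustration of the direct method mentioned in the abstract, the sketch below (not taken from the paper) assembles the single homogeneous linear system that determines polynomials p, q, r with p(z)f(z)² + q(z)f(z) + r(z) = O(z^(L+M+N+2)); here the null vector is extracted with an SVD instead of fixing a normalization, and the function name quadratic_pade is our own.

```python
import numpy as np
from math import comb

def quadratic_pade(c, L, M, N):
    """Coefficients (p, q, r), deg p = L, deg q = M, deg r = N, such that
    p(z) f(z)^2 + q(z) f(z) + r(z) = O(z^(L+M+N+2)) for f(z) = sum_k c[k] z^k."""
    K = L + M + N + 2                        # number of Taylor coefficients matched
    c = np.asarray(c, dtype=float)[:K]
    f2 = np.convolve(c, c)[:K]               # Taylor coefficients of f(z)^2
    one = np.r_[1.0, np.zeros(K - 1)]        # Taylor coefficients of 1

    def shifted_cols(series, deg):
        # column j holds the first K Taylor coefficients of z**j * series(z)
        A = np.zeros((K, deg + 1))
        for j in range(deg + 1):
            A[j:, j] = series[:K - j]
        return A

    # K equations in K + 1 unknowns: a nontrivial null vector always exists
    A = np.hstack([shifted_cols(f2, L), shifted_cols(c, M), shifted_cols(one, N)])
    v = np.linalg.svd(A)[2][-1]              # right singular vector of the smallest singular value
    return v[:L + 1], v[L + 1:L + 2 + M], v[L + 2 + M:]

# Toy check with f(z) = sqrt(1 + z): the exact relation is y^2 - (1 + z) = 0,
# so (p, q, r) should be proportional to (1,), (0,), (-1, -1).
coeffs = [comb(2 * k, k) * (-1) ** (k + 1) / (4 ** k * (2 * k - 1)) for k in range(8)]
p, q, r = quadratic_pade(coeffs, 1, 0, 1)
print(p / p[0], q / p[0], r / p[0])
```

Evaluating the approximant then amounts to y(z) = (-q(z) ± sqrt(q(z)² - 4p(z)r(z))) / (2p(z)), where the sign is exactly the branch choice discussed in the abstract.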
- Automated citation graph building from a corpora of scientific documents
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 707-719
In this paper the problem of automatically building a citation graph from a collection of scientific documents is considered as a sequence of machine learning tasks. The overall data processing technology is described, which consists of six stages: preprocessing, metainformation extraction, bibliography list extraction, splitting bibliography lists into separate bibliography records, standardization of each bibliography record, and record linkage. The goal of this paper is to provide a survey of approaches and algorithms suitable for each stage, motivate the choice of the best combination of algorithms, and adapt some of them for processing multilingual bibliographies. For some of the tasks new algorithms and heuristics are proposed and evaluated on a mixed corpus of English and Russian documents.
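The six-stage technology described above can be pictured as a short processing chain. The sketch below is a toy illustration under naive assumptions (a "References" heading marks the bibliography, the first line of a document is its title, and record linkage is exact title matching); all helper names are hypothetical and not the authors' implementation.

```python
import re

def extract_bibliography(text):
    # stage 3: treat everything after a "References" heading as the bibliography
    m = re.search(r"(?im)^\s*references\s*$", text)
    return text[m.end():] if m else ""

def split_records(bib):
    # stage 4: split on leading record markers such as "1." or "[1]"
    parts = re.split(r"(?m)^\s*(?:\[\d+\]|\d+\.)\s+", bib)
    return [p.strip() for p in parts if p.strip()]

def standardize(record):
    # stage 5: a crude "Authors. Title. Venue, year." parse
    year = re.search(r"(19|20)\d{2}", record)
    fields = [f.strip() for f in record.split(".") if f.strip()]
    return {"authors": fields[0] if fields else "",
            "title": fields[1] if len(fields) > 1 else "",
            "year": year.group(0) if year else None}

def link(record, known_titles):
    # stage 6: record linkage by normalized title match
    return known_titles.get(record["title"].lower())

def build_citation_graph(corpus):
    # corpus: {doc_id: full_text}; returns citation edges (citing_id, cited_id)
    titles = {(text.splitlines() or [""])[0].strip().lower(): doc_id
              for doc_id, text in corpus.items()}
    edges = []
    for doc_id, text in corpus.items():
        for raw in split_records(extract_bibliography(text)):
            target = link(standardize(raw), titles)
            if target is not None and target != doc_id:
                edges.append((doc_id, target))
    return edges
```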
- Simulation of the gas condensate reservoir depletion
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1081-1095
One of the problems in developing gas condensate fields is that the condensed hydrocarbons in the gas-bearing layer can get stuck in the pores of the formation and hence cannot be extracted. In this regard, research is underway to increase the recoverability of hydrocarbons in such fields, including a wide range of studies on mathematical simulation of the passage of gas condensate mixtures through a porous medium under various conditions.
In the present work, within the classical approach based on the Darcy law and the law of continuity of flows, we formulate an initial-boundary value problem for a system of nonlinear differential equations that describes the depletion of a multicomponent gas condensate mixture in a porous reservoir. A computational scheme is developed on the basis of the finite-difference approximation and the fourth-order Runge–Kutta method. The scheme can be used for simulations both in the spatially one-dimensional case, corresponding to the conditions of the laboratory experiment, and in the two-dimensional case, when it comes to modeling a flat gas-bearing formation with circular symmetry.
The computer implementation is based on the combination of C++ and Maple tools, using the MPI parallel programming technique to speed up the calculations. The calculations were performed on the HybriLIT cluster of the Multifunctional Information and Computing Complex of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research.
Numerical results are compared with the experimental data on the pressure dependence of the output of a nine-component hydrocarbon mixture obtained at a laboratory facility (VNIIGAZ, Ukhta). The calculations were performed for two types of porous filler in the laboratory model of the formation: a terrigenous filler at 25 °C and a carbonate one at 60 °C. It is shown that the approach developed ensures agreement of the numerical results with the experimental data. By fitting the numerical results to the experimental data on the depletion of the laboratory reservoir, we obtained the values of the parameters that determine the interphase transition coefficient for the simulated system. Using the same parameters, a computer simulation of the depletion of a thin gas-bearing layer in the circular symmetry approximation was carried out.
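To make the time-stepping idea concrete: the classical fourth-order Runge–Kutta method advances a finite-difference semi-discretization p' = F(p) one step at a time. The sketch below applies it to a toy one-dimensional nonlinear pressure-diffusion equation p_t = (p p_x)_x, which stands in for, but does not reproduce, the multicomponent model of the paper; all grid and time-step values are illustrative.

```python
import numpy as np

def rhs(p, dx):
    # conservative finite-difference approximation of (p * p_x)_x with zero-flux ends
    flux = np.zeros(p.size + 1)
    flux[1:-1] = 0.5 * (p[1:] + p[:-1]) * (p[1:] - p[:-1]) / dx
    return (flux[1:] - flux[:-1]) / dx

def rk4_step(p, dt, dx):
    # classical fourth-order Runge-Kutta step for p' = rhs(p)
    k1 = rhs(p, dx)
    k2 = rhs(p + 0.5 * dt * k1, dx)
    k3 = rhs(p + 0.5 * dt * k2, dx)
    k4 = rhs(p + dt * k3, dx)
    return p + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# toy "depletion": a uniform pressure profile drained through a low-pressure boundary
nx, dx, dt = 100, 1.0, 0.01
p = np.full(nx, 10.0)
for _ in range(10000):
    p = rk4_step(p, dt, dx)
    p[0] = 1.0                 # fixed pressure at the producing end
```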
- A framework for medical image segmentation based on measuring diversity of pixel’s intensity utilizing interval approach
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1059-1066
Segmentation is one of the most challenging tasks in medical image analysis. It separates organ or lesion pixels from the background of medical images such as MRI or CT scans, providing critical information about the volumes and shapes of human organs. In the field of scientific imaging, medical imaging is considered one of the most important topics due to the rapid and continuing progress in computerized medical image visualization, advances in analysis approaches, and computer-aided diagnosis. Digital image processing is becoming more important in healthcare because of the growing use of direct digital imaging systems for medical diagnostics, and image processing approaches are now widely applicable in medicine. Generally, various transformations are needed to extract image data. Moreover, a digital image can be considered an approximation of a real situation that includes some uncertainty derived from the constraints of the imaging process, and information on the level of uncertainty will influence an expert’s judgment. To address this challenge, we propose a novel framework based on the interval concept, which is a good tool for dealing with uncertainty. In the proposed approach, the medical images are transformed into an interval-valued representation and entropies are defined for the image object and background. We then determine a threshold for the lower-bound image and for the upper-bound image, and calculate the mean value for the final output. To demonstrate the effectiveness of the proposed framework, we evaluate it using a synthetic image and its ground truth. Experimental results show how the performance of entropy-threshold segmentation can be enhanced using the proposed approach to overcome ambiguity.
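A minimal sketch of the interval-based thresholding idea follows: build lower- and upper-bound images, compute an entropy threshold for each, and average the two. The local min/max construction of the interval image and the use of Kapur's maximum-entropy criterion are our illustrative assumptions, not necessarily the authors' exact definitions.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def kapur_threshold(img, nbins=256):
    # maximum-entropy (Kapur) threshold for an 8-bit grayscale image
    hist, _ = np.histogram(img, bins=nbins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, nbins - 1):
        w0, w1 = P[t], 1.0 - P[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # object entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # background entropy
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

def interval_segment(img, window=3):
    lower = minimum_filter(img, size=window)             # lower-bound image
    upper = maximum_filter(img, size=window)              # upper-bound image
    t = 0.5 * (kapur_threshold(lower) + kapur_threshold(upper))  # mean threshold
    return img >= t
```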
- Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications that are dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise during natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in NL symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
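For readers who want to see the preparation-to-features chain in code, here is a small illustrative sketch (not from the survey) that cleans raw text, builds TF-IDF features with scikit-learn, and feeds them to one representative TM task, document clustering; the sample documents are invented.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def clean(text):
    # text preparation: lowercase, keep Latin/Cyrillic word forms and digits
    text = text.lower()
    text = re.sub(r"[^a-zа-яё0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

docs = ["Stock prices rallied after the earnings report.",
        "Analysts recommend holding the shares.",
        "Котировки акций выросли после отчёта."]

X = TfidfVectorizer().fit_transform([clean(d) for d in docs])      # feature selection
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # a TM task
print(labels)
```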
- Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210
Data hiding in digital images is a promising direction in cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The efficiency of information embedding depends on the embedding imperceptibility, capacity, and robustness. These quality criteria are mutually inverse, and improving one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close to optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as the embedding operation. We select the change matrix adaptively for each block using metaheuristic optimization algorithms. We compare the performance of three metaheuristics, namely the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of the embedded information. At the same time, storage of the change matrices for each block is not required for data extraction, which improves the user experience and reduces the chance of an attacker discovering the steganographic attachment. Compared with a previous algorithm that embeds information into the discrete cosine transform coefficients using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm, by 26.01% and 19.39% for particle swarm optimization, and by 27.30% and 28.73% for differential evolution.
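To illustrate the per-block selection of a change matrix by a metaheuristic, the sketch below runs a simplified, hand-rolled differential evolution over candidate change matrices for one 8×8 block. The extraction rule (the bit is the parity of the block's pixel sum), the fitness function, and all parameter values are toy assumptions and do not reproduce the paper's hybrid spatial-frequency scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_bit(block):
    # toy extraction rule: hidden bit = parity of the block's pixel sum
    return int(round(float(block.sum()))) % 2

def fitness(delta, flat_block, bit):
    changed = np.clip(flat_block + np.rint(delta), 0, 255)
    penalty = 0.0 if extract_bit(changed) == bit else 1.0e6      # must decode correctly
    return float(np.sum((changed - flat_block) ** 2)) + penalty  # keep distortion small

def de_change_matrix(block, bit, pop_size=30, gens=60, F=0.7, CR=0.9):
    # simplified differential evolution (mutation, binomial crossover, greedy selection)
    flat, dim = block.ravel().astype(float), block.size
    pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
    cost = np.array([fitness(x, flat, bit) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
            trial_cost = fitness(trial, flat, bit)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    return np.rint(pop[int(np.argmin(cost))]).reshape(block.shape)

block = rng.integers(0, 256, size=(8, 8))
delta = de_change_matrix(block, bit=1)        # the selected change matrix
stego = np.clip(block + delta, 0, 255)
assert extract_bit(stego) == 1                # the bit is recovered without error
```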