Search results for 'data analysis':
Articles found: 137
  1. Vavilova D.D., Ketova K.V., Zerari R.
    Computer modeling of the gross regional product dynamics: a comparative analysis of neural network models
    Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1219-1236

    Analysis of regional economic indicators plays a crucial role in management and development planning, with Gross Regional Product (GRP) serving as one of the key indicators of economic activity. The application of artificial intelligence, including neural network technologies, enables significant improvements in the accuracy and reliability of forecasts of economic processes. This study compares three neural network models for predicting the GRP of a typical region of the Russian Federation — the Udmurt Republic — based on time series data from 2000 to 2023. The selected models are an LSTM network optimized with the Bat Algorithm (BA-LSTM), a backpropagation neural network optimized with a Genetic Algorithm (GA-BPNN), and an Elman neural network optimized with Particle Swarm Optimization (PSO-Elman). The research involved the standard stages of neural network modeling: data preprocessing, model training, and comparative analysis based on accuracy and forecast quality metrics. This approach allows for evaluating the advantages and limitations of each model in the context of GRP forecasting, as well as identifying the most promising directions for further research. The use of modern neural network methods opens new opportunities for automating regional economic analysis and improving the quality of forecast assessments, which is especially relevant when data are limited and rapid decision-making is required. The study uses the amount of production capital, the average annual number of labor resources, the share of high-tech and knowledge-intensive industries in GRP, and an inflation indicator as input factors for predicting GRP. The high prediction accuracy achieved by including these factors in the neural network models confirms their strong correlation with GRP. The results demonstrate the high accuracy of the BA-LSTM model on validation data: the coefficient of determination was 0.82, and the mean absolute percentage error was 4.19%. The high performance and reliability of this model confirm its capacity to effectively predict the dynamics of the GRP. Over the forecast period up to 2030, the Udmurt Republic is expected to experience an annual increase in GRP of +4.6% in current prices, or +2.5% in comparable 2023 prices. By 2030, the GRP is projected to reach 1264.5 billion rubles.
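
    The abstract reports two validation metrics, the coefficient of determination and the mean absolute percentage error. As a hedged illustration of how such metrics are computed (the paper's own data and code are not shown here; the GRP values below are invented placeholders):

```python
# Hedged illustration: computing R^2 and MAPE for a forecast on validation
# data. The GRP values below are invented placeholders, not the paper's data.
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute percentage error, in percent.
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([740.2, 781.5, 825.9, 870.3])   # placeholder GRP, bn rubles
y_pred = np.array([735.0, 790.1, 810.4, 882.7])   # placeholder model output
print(f"R^2  = {r2_score(y_true, y_pred):.2f}")
print(f"MAPE = {mape(y_true, y_pred):.2f}%")
```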

  2. Bogdanov A.V., P. Sone K. Ko, Zaya K.
    Performance of the OpenMP and MPI implementations on UltraSPARC system
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 485-491

    This paper targets programmers and developers interested in utilizing parallel programming techniques to enhance application performance. The Oracle Solaris Studio software provides state-of-the-art optimizing and parallelizing compilers for C, C++ and Fortran, an advanced debugger, and optimized mathematical and performance libraries. Also included are a powerful performance analysis tool for profiling serial and parallel applications, a thread analysis tool to detect data races and deadlocks in shared-memory parallel programs, and an Integrated Development Environment (IDE). The Oracle Message Passing Toolkit software provides the high-performance MPI libraries and associated run-time environment needed for message-passing applications that can run on a single system or across multiple compute systems connected with high-performance networking, including Gigabit Ethernet, 10 Gigabit Ethernet, InfiniBand and Myrinet. Examples of OpenMP and MPI are provided throughout the paper, including their usage via the Oracle Solaris Studio and Oracle Message Passing Toolkit products for the development and deployment of both serial and parallel applications on SPARC and x86/x64 based systems. Throughout the paper it is demonstrated how to develop and deploy an application parallelized with OpenMP and/or MPI.
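
    The paper's examples use C/C++/Fortran with Oracle's toolchain; as a language-neutral illustration of the same message-passing pattern, here is a minimal sketch using the mpi4py Python bindings (assuming mpi4py and an MPI runtime are installed; this is not the paper's code):

```python
# A minimal MPI message-passing sketch using the mpi4py bindings (assumed
# installed together with an MPI runtime); an illustration of the pattern
# the paper discusses, not the paper's own code.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Root distributes a chunk of work to every other rank, then gathers.
    for dest in range(1, size):
        comm.send(f"chunk-{dest}", dest=dest, tag=0)
    results = [comm.recv(source=src, tag=1) for src in range(1, size)]
    print("gathered:", results)
else:
    chunk = comm.recv(source=0, tag=0)
    comm.send(chunk.upper(), dest=0, tag=1)   # stand-in "computation"

# Run with, e.g.: mpiexec -n 4 python mpi_sketch.py
```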

  3. Orel V.R., Tambovtseva R.V., Firsova E.A.
    Effects of the heart contractility and its vascular load on the heart rate in athletes
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 323-329

    Heart rate (HR) is the easiest cardiovascular indicator to measure. To control the individual response to physical exercises of different load types, heart rate is measured while athletes perform different types of muscular work (strength machines, various training and competitive exercises). The magnitude of heart rate and its dynamics during muscular work and recovery make it possible to judge objectively the functional status of an athlete's cardiovascular system, the level of individual physical performance, and the adaptive response to a particular exercise. However, heart rate is not an independent determinant of an athlete's physical condition. The HR value is formed by the interaction of the basic physiological mechanisms underlying the hemodynamic ejection mode of the heart. Heart rate depends, on the one hand, on the contractility of the heart, the venous return, and the volumes of the atria and ventricles, and, on the other hand, on the vascular load of the heart, the main components of which are the elastic and peripheral resistances of the arterial system. The values of the arterial vascular resistances depend on the power of muscular work and its duration. The sensitivity of HR to changes in vascular load and cardiac contractility was determined in athletes by pair regression analysis of simultaneously recorded heart rate data, peripheral $(R)$ and elastic $(E_a)$ resistance (the vascular load of the heart), and the power $(W)$ of heartbeats (cardiac contractility). The coefficients of sensitivity and pair correlation between heart rate and the vascular load and contractility of the left ventricle were determined in athletes at rest and during muscular work on a cycle ergometer. It is shown that an increase in both ergometer power load and heart rate is accompanied by an increase in the correlation coefficients and the coefficients of heart rate sensitivity to $R$, $E_a$ and $W$.
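
    A hedged sketch of the kind of pair regression analysis described above: estimating the sensitivity coefficient (slope) and pair correlation of heart rate with one load variable, e.g. peripheral resistance $R$. The arrays are hypothetical placeholders, not the study's measurements:

```python
# Hypothetical data: estimating the sensitivity coefficient (slope) and the
# pair correlation of heart rate with peripheral resistance R.
import numpy as np
from scipy.stats import linregress

hr = np.array([62.0, 71.0, 84.0, 97.0, 110.0, 123.0])   # beats/min
r_load = np.array([1.9, 1.7, 1.5, 1.3, 1.2, 1.1])       # R, arbitrary units

fit = linregress(r_load, hr)
print(f"sensitivity dHR/dR = {fit.slope:.1f}")           # slope of regression
print(f"pair correlation   = {fit.rvalue:.2f}")          # correlation with R
```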

  4. Borisova L.R., Kuznetsova A.V., Sergeeva N.V., Sen'ko O.V.
    Comparison of Arctic zone RF companies with different Polar Index ratings by economic criteria with the help of machine learning tools
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 201-215

    The paper presents a comparative analysis of the enterprises of the Arctic Zone of the Russian Federation (AZ RF) on economic indicators in accordance with the Polar Index rating. The study includes numerical data on 193 enterprises located in the AZ RF. Machine learning methods are applied, both standard open-source implementations and the authors' original methods: Optimally Reliable Partitions (ORP) and Statistically Weighted Syndromes (SWS). To find splits that maximize the quality functional, the study used the simplest family of one-dimensional partitions with a single boundary point, as well as a collection of two-dimensional partitions with one boundary point on each of the two combined variables. Permutation tests make it possible not only to evaluate the statistical reliability of the revealed regularities but also to exclude partitions of excessive complexity from their set. Patterns connecting the class number and economic indicators are revealed using the SDT method on one-dimensional indicators. The regularities revealed within the simplest one-dimensional model with one boundary point at a significance level of p < 0.001 are also presented in this study. Cross-validation (the so-called sliding control method) was used for a reliable evaluation of diagnostic ability. As a result of these studies, a set of sufficiently effective methods was identified. A collective method based on the results of several machine learning methods showed the high importance of economic indicators for dividing enterprises in accordance with the Polar Index rating. Our study showed that the companies that entered the top of the Polar Index rating can generally be recognized by their financial indicators among all companies in the Arctic Zone. However, it would be useful to supplement the list of indicators with ecological and social criteria.
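
    As an illustration of the simplest one-dimensional partition with a single boundary point and a permutation test, here is a sketch on assumed synthetic data; the quality functional below (a difference of class frequencies) is a stand-in, and the paper's ORP functional may differ:

```python
# Sketch: scan all boundary points of one indicator, keep the split that best
# separates the two classes, and assess it with a permutation test.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)               # one synthetic economic indicator
y = x + rng.normal(0.0, 1.0, 200) > 0       # synthetic class labels

def best_split_quality(x, y):
    # Quality of a boundary point t: difference in class-1 frequency
    # between the two parts of the partition (a stand-in functional).
    best = 0.0
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        best = max(best, abs(left.mean() - right.mean()))
    return best

observed = best_split_quality(x, y)
perms = [best_split_quality(x, rng.permutation(y)) for _ in range(199)]
p_value = (1 + sum(q >= observed for q in perms)) / 200
print(f"split quality = {observed:.3f}, permutation p = {p_value:.3f}")
```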

  5. Gesture recognition is an urgent challenge in the development of human-machine interfaces. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals in order to identify the most effective one. Methods such as the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree) were considered. Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the camera's field of view and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device for recording the EMG signal was developed, comprising three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, squeezing of the index finger, and waving a hand from right to left. Accuracy, precision, recall and execution time were used to evaluate the effectiveness of the classifiers. These metrics were calculated for three placements of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest, and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble across the three electrode positions was 81.55%. The electrode placement at which the machine learning methods achieve maximum accuracy was also determined: one of the differential electrodes is located at the intersection of the flexor digitorum profundus and flexor pollicis longus, and the second above the flexor digitorum superficialis.
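
    For illustration, a minimal sketch of the best-performing ensemble named above (NBC plus gradient boosting, combined by soft voting) on synthetic stand-in features; the real study used windowed EMG features from a custom three-electrode sensor, which are not reproduced here:

```python
# Synthetic features stand in for windowed EMG features; the ensemble below
# mirrors the NBC + gradient boosting pairing named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Five gesture classes, e.g. fist, thumb up, victory, index squeeze, wave.
X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           n_classes=5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

clf = VotingClassifier([("nbc", GaussianNB()),
                        ("gb", GradientBoostingClassifier())], voting="soft")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
```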

  6. Medical image segmentation is one of the most challenging tasks in medical image analysis. It separates the pixels of organs or lesions from the background of medical images such as MRI or CT scans, providing critical information about the volumes and shapes of human organs. In the field of scientific imaging, medical imaging is considered one of the most important topics due to the rapid and continuing progress in computerized medical image visualization, advances in analysis approaches and computer-aided diagnosis. Digital image processing is becoming more important in healthcare due to the growing use of direct digital imaging systems for medical diagnostics; thanks to medical imaging techniques, image processing approaches are now applicable in medicine. Generally, various transformations are needed to extract image data. A digital image can also be considered an approximation of a real scene that carries some uncertainty derived from the constraints of the vision process, and information on the level of this uncertainty influences an expert's judgment. To address this challenge, we propose a novel framework based on the interval concept, which is a good tool for dealing with uncertainty. In the proposed approach, medical images are transformed into an interval-valued representation, and entropies are defined for the image object and background. We then determine a threshold for the lower-bound image and for the upper-bound image, and take the mean value as the final output. To demonstrate the effectiveness of the proposed framework, we evaluate it on a synthetic image with known ground truth. Experimental results show how the performance of entropy-based threshold segmentation can be enhanced by the proposed approach to overcome ambiguity.
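
    A minimal sketch of the interval-valued entropy-thresholding idea, assuming intervals built as ±delta around pixel intensities (the paper's interval construction and entropy definition may differ); Kapur's entropy threshold is used here as a representative entropy criterion:

```python
# Sketch: interval-valued thresholding with Kapur's entropy criterion.
# The +/-delta interval construction around each pixel is an assumption.
import numpy as np

def kapur_threshold(img: np.ndarray) -> int:
    # Pick the gray level maximizing the sum of background and foreground
    # Shannon entropies (Kapur's criterion).
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        qb = p[:t][p[:t] > 0] / pb
        qf = p[t:][p[t:] > 0] / pf
        h = -np.sum(qb * np.log(qb)) - np.sum(qf * np.log(qf))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.random.default_rng(1).integers(0, 256, (64, 64))  # synthetic image
delta = 10                                   # assumed interval half-width
lower = np.clip(img - delta, 0, 255)         # lower-bound image
upper = np.clip(img + delta, 0, 255)         # upper-bound image
threshold = (kapur_threshold(lower) + kapur_threshold(upper)) / 2
mask = img >= threshold                      # final segmentation mask
print("combined threshold:", threshold)
```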

  7. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as a mapping from a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all TM tasks and techniques is the selection of word forms and their derivatives used to recognize content in natural-language symbol sequences. Taking IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts, and we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user's needs and skills.
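
    As a small, self-contained example of one classic TM mapping mentioned above (documents to features to clusters), here is a sketch using TF-IDF vectorization and $k$-means; the texts are invented placeholders, not the survey's material:

```python
# Invented texts stand in for real documents.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "analysts recommend buying the stock after strong earnings",
    "the company reported weak earnings and the stock fell",
    "new language models improve text summarization quality",
    "sentiment analysis of reviews with neural networks",
]
vec = TfidfVectorizer(stop_words="english")   # cleaning + feature selection
X = vec.fit_transform(docs)                   # documents -> term-weight matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # two thematic groups, e.g. trading vs NLP texts
```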

  8. Popov A.B.
    Nonextensive Tsallis statistics of contract system of prime contractors and subcontractors in defense industry
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1163-1183

    In this work, we analyze the system of contracts made by Russian defense enterprises in the process of state defense order execution. We conclude that methods of statistical mechanics can be applied to the description of the given system. Following the original grand-canonical ensemble approach, we can create the statistical ensemble under investigation as a set of instant snapshots of indistinguishable contracts having individual values. We show that due to government regulations of contract prices the contract system can be described in terms of nonextensive Tsallis statistics. We have found that probability distributions of contract prices correspond to deformed Bose – Einstein distributions obtained using nonextensive Tsallis entropy. This conclusion is true both in the case of the whole set of contracts and in the case of the contracts made by an individual defense company as a seller.

    In order to analyze how well deformed Bose – Einstein distributions fit the empirical contract price distributions, we compare the corresponding cumulative distribution functions. We conclude that the annual distributions of individual sales corresponding to each company's contracts (orders) can be used as relevant data for analyzing contract price distributions. The empirical cumulative distribution functions for the individual sales ranking of Concern CSRI Elektropribor, one of the leading Russian defense companies, are analyzed for the period 2007–2021. The theoretical cumulative distribution functions, obtained using deformed Bose – Einstein distributions in the «rare contract gas» limit, fit the empirical cumulative distribution functions well. The fitted values of the entropic index show that the degree of nonextensivity of the system under investigation is rather high. It is shown that the characteristic prices of the distributions can be estimated by weighting the values of annual individual sales with the escort probabilities. Given that the fitted values of the chemical potential are equal to zero, we suggest that the «gas of contracts» can be compared to a photon gas, in which the number of particles is not conserved.
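
    For readers who want to experiment, a sketch of the Tsallis $q$-exponential and the deformed Bose – Einstein occupation built from it; the parameter values are illustrative, not the paper's fitted ones:

```python
# Sketch: Tsallis q-exponential and the deformed Bose-Einstein occupation.
import numpy as np

def exp_q(x: np.ndarray, q: float) -> np.ndarray:
    """Tsallis q-exponential [1 + (1 - q) x]^(1/(1 - q)); exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 1e-12)   # Tsallis cutoff
    return base ** (1.0 / (1.0 - q))

def deformed_bose_einstein(price, T, q, mu=0.0):
    # Mean occupation of a "contract state" of a given price (energy analog);
    # mu = 0 matches the photon-gas analogy suggested in the abstract.
    return 1.0 / (exp_q((price - mu) / T, q) - 1.0)

prices = np.linspace(0.1, 2.0, 20)    # contract prices, arbitrary units
print(deformed_bose_einstein(prices, T=1.0, q=1.4)[:5])  # q > 1: nonextensive
```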

  9. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Gorbachev R.A.
    Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183

    Social media posts play an important role in reflecting the state of financial markets, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on financial market movements. The top authoritative influencers are selected, and their Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the primary text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered, and the difference between the influence of a single tweet and of a whole package of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics based on the significance of a particular word in identifying the relationship between price changes and Twitter posts are introduced. Frequency analysis involves studying the distributions of occurrences of various words and bigrams in texts associated with positive, negative or general trends. To build the markup, market changes are converted into a binary vector using various parameters, thus setting a binary classification task. The parameters of Binance candlesticks are tuned for a better description of cryptocurrency market movement, and their variability is also explored in this article. Sentiment is studied using Stanford CoreNLP. The results of the statistical analysis are relevant to feature selection for further binary or multiclass classification tasks. The presented methods of text analysis help increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
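
    A small sketch of the regex-based cleaning and frequency analysis steps described above, with invented tweets standing in for the collected data (the study's actual pipeline additionally uses Stanza and Stanford CoreNLP):

```python
# Invented tweets stand in for the collected data; word and bigram counts
# are computed separately for texts tied to upward and downward price moves.
import re
from collections import Counter

def clean(tweet: str) -> list:
    # Strip URLs, @mentions and '#' signs, lowercase, keep word tokens.
    tweet = re.sub(r"https?://\S+|@\w+|#", " ", tweet.lower())
    return re.findall(r"[a-z']+", tweet)

tweets_up = ["BTC to the moon!!! buy now https://t.co/x", "bullish on #BTC"]
tweets_down = ["sell everything, @whale is dumping", "btc crash incoming"]

for label, tweets in [("up", tweets_up), ("down", tweets_down)]:
    words, bigrams = [], Counter()
    for t in tweets:
        w = clean(t)
        words += w
        bigrams.update(zip(w, w[1:]))     # bigrams within a single tweet
    print(label, Counter(words).most_common(3), bigrams.most_common(1))
```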

  10. Shakhgeldyan K.I., Kuksin N.S., Domzhalov I.G., Pak R.L., Geltser B.I.
    Random forest of risk factors as a predictive tool for adverse events in clinical medicine
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 987-1004

    The aim of the study was to develop an ensemble machine learning method for constructing interpretable predictive models and to validate it using the example of predicting in-hospital mortality (IHM) in patients with ST-segment elevation myocardial infarction (STEMI).

    A retrospective cohort study was conducted using data from 5446 electronic medical records of STEMI patients who underwent percutaneous coronary intervention (PCI). Patients were divided into two groups: 335 (6.2%) patients who died during hospitalization and 5111 (93.8%) patients with a favourable in-hospital outcome. A pool of potential predictors was formed using statistical methods. Through multimetric categorization (minimizing p-values, maximizing the area under the ROC curve (AUC), and SHAP value analysis), decision trees, and multivariable logistic regression (MLR), predictors were transformed into risk factors for IHM. Predictive models for IHM were developed using MLR, Random Forest of Risk Factors (RandFRF), Stochastic Gradient Boosting (XGBoost), Random Forest (RF), Adaptive Boosting, Gradient Boosting, Light Gradient-Boosting Machine, Categorical Boosting (CatBoost), Explainable Boosting Machine and Stacking methods.

    The authors developed the RandFRF method, which integrates the predictive outcomes of modified decision trees, identifies risk factors, and ranks them by their contribution to the risk of an adverse outcome. RandFRF enables the development of predictive models with high discriminative performance (AUC 0.908), comparable to models based on CatBoost and Stacking (AUC 0.904 and 0.908, respectively). In turn, the risk factors provide clinicians with information on the patient's risk group and the extent of each factor's impact on the probability of IHM. The risk factors identified by RandFRF can serve not only as a rationale for the prediction results but also as a basis for developing more accurate models.
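
    RandFRF itself is the authors' method and is not reproduced here; as a hedged sketch of the comparison setup, here is a standard random forest baseline scored by AUC on a stratified hold-out, with synthetic data standing in for the imbalanced STEMI cohort (~6% positive class):

```python
# Synthetic stand-in for the imbalanced cohort; a standard random forest
# baseline of the kind RandFRF is compared against, scored by AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5446, n_features=20, weights=[0.938],
                           random_state=7)   # ~6.2% adverse outcomes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                            random_state=7).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```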
