Search results for 'sets of results':
Articles found: 131
  1. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Gorbachev R.A.
    Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183

    Social media posts play an important role in reflecting the state of the financial market, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on the movement of the financial market. The top authoritative influencers are selected. Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the primary text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered. The difference between the influence of a single tweet and that of a whole package of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics defined by the significance of a particular word in identifying the relationship between price changes and Twitter posts are introduced. Frequency analysis involves studying the distributions of occurrence of various words and bigrams in the text for positive, negative or general trends. To build the markup, changes in the market are converted into a binary vector using various parameters, thus setting up a binary classification task. The parameters of Binance candlesticks are tuned to better describe the movement of the cryptocurrency market, and their variability is also explored in this article. Sentiment is studied using Stanford CoreNLP. The results of the statistical analysis are relevant to feature selection for further binary or multiclass classification tasks. The presented methods of text analysis help to increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
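
    The preprocessing and frequency-analysis step can be illustrated with a minimal sketch (not the authors' exact pipeline); the regular expressions and the toy tweets/labels below are placeholders.

```python
import re
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Toy data (hypothetical): raw tweets and a binary market label
# (1 = price rose over the chosen horizon, 0 = it fell).
tweets = ["BTC to the moon!!! https://t.co/xyz", "selling everything, $btc looks weak"]
labels = [1, 0]

def clean(text: str) -> list[str]:
    """Rough tweet normalization: drop URLs and mentions, keep word-like tokens."""
    text = re.sub(r"https?://\S+|@\w+", " ", text.lower())
    return re.findall(r"[a-z$]{2,}", text)

# Word and bigram occurrence counts conditioned on the market label.
word_freq = {0: Counter(), 1: Counter()}
bigram_freq = {0: Counter(), 1: Counter()}
for text, y in zip(tweets, labels):
    tokens = clean(text)
    word_freq[y].update(tokens)
    bigram_freq[y].update(pairwise(tokens))

# Words whose relative frequency differs most between the two classes are
# candidate features for the downstream binary classifier.
print(word_freq[1].most_common(5), word_freq[0].most_common(5))
```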

  2. Petrov A.P., Podlipskaia O.G., Podlipskii O.K.
    Modeling the dynamics of political positions: network density and the chances of minority
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 785-796

    In some cases, information warfare results in almost the whole population accepting one of two contesting points of view and rejecting the other. In other cases, however, the “majority party” gets only a small advantage over the “minority party”. The relevant question is which network characteristics of a population help the minority to maintain significant numbers. Given that some societies are more connected than others, in the sense that they have a higher density of social ties, this question is specified as follows: how does the density of social ties affect the chances of a minority to maintain a significant number? Does a higher density contribute to a landslide victory of the majority, or to the resistance of the minority? To address this issue, we consider information warfare between two parties, called the Left and the Right, in a population represented as a network whose nodes are individuals and whose connections correspond to their acquaintance and describe mutual influence. At each discrete point in time, each individual decides which party to support based on their attitude, i.e. predisposition to the Left or Right party, and taking into account the influence of their network ties. Influence here means that each tie sends a cue with a certain probability to the individual in question in favor of the party that the tie currently supports. If the tie switches their party affiliation, they begin to agitate the individual in question for their “new” party. Such processes create dynamics, i.e. the process of changing the partisanship of individuals. The duration of the warfare is exogenously set, with the final time point roughly associated with the election day. The described model is numerically implemented on a scale-free network. Numerical experiments have been carried out for various values of network density. Because of the presence of stochastic elements in the model, 200 runs were conducted for each density value, and for each run the final number of supporters of each of the parties was calculated. It is found that with higher density, the chances increase that the winner will cover almost the entire population. Conversely, low network density improves the chances of a minority to maintain significant numbers.
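
    The basic mechanics can be sketched as follows (an illustrative simplification, not the authors' exact specification; the parameter values, the cue probability and the decision rule below are assumptions).

```python
import random
import networkx as nx

# Opinion dynamics on a scale-free network: each node has an attitude and a
# current party (+1 = Right, -1 = Left); at every step each tie sends a cue
# with probability P_CUE in favour of the party it currently supports.
N, M, P_CUE, STEPS = 1000, 3, 0.1, 50          # assumed parameter values
G = nx.barabasi_albert_graph(N, M, seed=1)     # scale-free network; M controls density
attitude = {v: random.gauss(0.0, 1.0) for v in G}    # predisposition to Left/Right
party = {v: 1 if attitude[v] > 0 else -1 for v in G}

for _ in range(STEPS):                         # exogenously fixed duration
    new_party = {}
    for v in G:
        cues = sum(party[u] for u in G[v] if random.random() < P_CUE)
        new_party[v] = 1 if attitude[v] + cues > 0 else -1
    party = new_party

right = sum(1 for v in G if party[v] == 1)
print(f"Right: {right}, Left: {N - right}")    # split on 'election day'
```

    Repeating many such runs while varying M (and hence the density of ties) reproduces the type of experiment described above: counting how often the losing party still retains a sizeable share.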

  3. Ansori Moch.F., Al Jasir H., Sihombing A.H., Putra S.M., Nurfaizah D.A., Nurulita E.
    Assessing the impact of deposit benchmark interest rate on banking loan dynamics
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 1023-1032

    Deposit benchmark interest rates are a policy implemented by banking regulators to calculate the interest rates offered to depositors, maintaining equitable and competitive rates within the financial industry. The benchmark functions as a reference for determining the pricing of different banking products, expenses, and financial choices. The benchmark rate has a direct impact on the amount of money deposited, which in turn determines the amount of money available for lending. We are motivated to analyze the influence of deposit benchmark interest rates on the dynamics of banking loans. This study examines the issue using a difference equation for banking loans. In this model, the decision on the loan amount in the next period is influenced by both the present loan volume and the information on its marginal profit. An analysis is made of the loan equilibrium point and its stability. We also analyze the bifurcations that arise in the model. To ensure a stable banking loan, it is necessary to set the benchmark rate higher than the flip bifurcation value and lower than the transcritical bifurcation value. This result is confirmed by the bifurcation diagram and the associated Lyapunov exponent. Insufficient deposit benchmark interest rates might lead to chaotic dynamics in banking lending. Additionally, a bifurcation diagram with two parameters is also shown. We perform a numerical sensitivity analysis by examining contour plots of the stability conditions, which vary with the deposit benchmark interest rate and other parameters. In addition, we examine a nonstandard difference scheme for the previous model, assess its stability, and compare it with the standard model. The outcome of our study can provide valuable insights to the banking regulator in making informed decisions regarding deposit benchmark interest rates, taking into account several other banking factors.
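
    The kind of analysis described can be sketched numerically. The gradient-adjustment map and the marginal-profit specification below are hypothetical stand-ins (the paper's actual model and calibration are not reproduced); the sketch only shows how a bifurcation diagram over the benchmark rate is obtained.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical loan map: L_{t+1} = L_t + adj * L_t * dpi/dL, where profit is
# assumed to be pi(L) = rL * L * (1 - L/K) - rd * L (interest income with
# diminishing returns minus deposit costs priced at the benchmark rate rd).
adj, rL, K = 18.0, 0.15, 1.0            # adjustment speed, loan rate, scale (assumed)

def step(L, rd):
    marginal_profit = rL * (1.0 - 2.0 * L / K) - rd
    return L + adj * L * marginal_profit

# Bifurcation diagram over the deposit benchmark rate rd: in this toy map, low
# rates lead to period doubling and chaos, high rates to a stable equilibrium.
plt.figure(figsize=(7, 4))
for rd in np.linspace(0.0, 0.12, 400):
    L = 0.1
    for _ in range(500):                # discard the transient
        L = step(L, rd)
    orbit = []
    for _ in range(100):                # record the attractor
        L = step(L, rd)
        orbit.append(L)
    plt.plot([rd] * len(orbit), orbit, ",k", alpha=0.3)
plt.xlabel("deposit benchmark rate $r_d$")
plt.ylabel("loan volume $L$")
plt.tight_layout()
plt.show()
```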

  4. Zenkov A.V.
    A novel method of stylometry based on the statistic of numerals
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 837-850

    A new method of statistical analysis of texts is suggested. The frequency distribution of the first significant digits in the numerals of English-language texts is considered. We have taken into account cardinal as well as ordinal numerals, expressed both in figures and verbally. To identify the author’s use of numerals, we previously deleted from the text all idiomatic expressions and set phrases accidentally containing numerals, as well as itemizations, page numbers, etc. Benford’s law is found to hold approximately for the frequencies of the various first significant digits of compound literary texts by different authors; a marked predominance of the digit 1 is observed. In coherent authorial texts, characteristic deviations from Benford’s law arise; these are statistically stable and significant author peculiarities that allow one, under certain conditions, to consider the problem of authorship and distinguish between texts by different authors. The text should be large enough (at least about 200 kB). Toward the end of the digit row $\{1, 2, \ldots, 9\}$, the frequency distribution is subject to strong fluctuations and is thus unrepresentative for our purpose. A theoretical explanation of the observed empirical regularity is not attempted, which, however, does not preclude the applicability of the proposed methodology for text attribution. The approach suggested and the conclusions are backed by examples of the computer analysis of works by W.M. Thackeray, M. Twain, R.L. Stevenson, J. Joyce, the Brontë sisters, and J. Austen. On the basis of the suggested technique, we examined the authorship of a text earlier ascribed to L.F. Baum (the result agrees with that obtained by different means). We have shown that the authorship of Harper Lee’s “To Kill a Mockingbird” pertains to her, whereas the primary draft, “Go Set a Watchman”, seems to have been written in collaboration with Truman Capote. All results are confirmed by the parametric Pearson’s chi-squared test as well as the non-parametric Mann–Whitney U test and Kruskal–Wallis test.
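
    A minimal sketch of the first-significant-digit statistic and the chi-squared comparison with Benford’s law is given below (the filtering of idioms, itemizations and verbal numerals used in the study is omitted; the input file name is a placeholder).

```python
import re
from collections import Counter
from math import log10
from scipy.stats import chisquare

def first_digit_counts(text: str) -> Counter:
    """Count first significant digits of all numbers written in figures."""
    return Counter(int(m.group(0)[0]) for m in re.finditer(r"[1-9]\d*", text))

benford = [log10(1 + 1 / d) for d in range(1, 10)]     # Benford probabilities

text = open("novel.txt", encoding="utf-8").read()      # placeholder input file
counts = first_digit_counts(text)
observed = [counts.get(d, 0) for d in range(1, 10)]
n = sum(observed)
expected = [n * p for p in benford]

# Pearson chi-squared test of the observed digit distribution against Benford's
# law; deviations that are stable across an author's texts serve as the signal.
stat, p_value = chisquare(observed, f_exp=expected)
print(observed)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```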

  5. Tsybulin V.G., Khosaeva Z.K.
    Mathematical model of political differentiation under social tension
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 999-1012

    We consider a model of the dynamics of a political system of several parties, accompanied and controlled by the growth of social tension. A system of nonlinear ordinary differential equations is proposed with respect to the party fractions and an additional scalar variable characterizing the magnitude of tension in society. The rate of change of each party fraction is proportional to its current value multiplied by a coefficient that consists of an influx of newcomers, a flow from competing parties, and a loss due to the growth of social tension. The change in tension is made up of the parties’ contributions and its own relaxation. The number of parties is fixed; there are no mechanisms in the model for merging existing parties or for the birth of new ones.

    To study the possible scenarios of the dynamic processes in the model, we develop an approach based on selecting conditions under which this problem belongs to the class of cosymmetric systems. The existence of cosymmetry for a system of differential equations is ensured by the presence of additional constraints on the parameters, and in this case the emergence of continuous families of stationary and nonstationary solutions is possible. To analyze the scenarios of cosymmetry breaking, an approach based on a selective function is applied. In the case of one political party, there is no multistability: one stable solution corresponds to each set of parameters. For the case of two parties, it is shown that the system under consideration may have two families of equilibria, as well as a family of limit cycles. The results of numerical experiments are presented, demonstrating the destruction of the families and the realization of various scenarios leading either to the stabilization of the political system with the coexistence of both parties or to the disappearance of one of the parties, when part of the population ceases to support it and becomes indifferent.

    This model can be used to predict the inter-party struggle during an election campaign. In this case it is necessary to take into account the dependence of the coefficients of the system on time.
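
    An illustrative two-party system of the type described above can be written down and integrated directly; the structure of the terms follows the verbal description, while the coefficient values below are assumptions, not the paper's calibration.

```python
from scipy.integrate import solve_ivp

# p1, p2 are the party fractions, S is the social tension. Each fraction grows
# with an influx of newcomers and a flow from the competitor, and loses
# supporters as tension grows; tension is fed by the parties and relaxes.
a1, a2 = 0.20, 0.18            # influx of newcomers
b12, b21 = 0.10, 0.12          # exchange flows between the competing parties
c1, c2 = 0.50, 0.50            # losses due to social tension
d1, d2, r = 0.30, 0.30, 0.60   # contributions to tension and its relaxation rate

def rhs(t, y):
    p1, p2, S = y
    dp1 = p1 * (a1 + b12 * p2 - b21 * p1 - c1 * S)
    dp2 = p2 * (a2 + b21 * p1 - b12 * p2 - c2 * S)
    dS = d1 * p1 + d2 * p2 - r * S
    return [dp1, dp2, dS]

sol = solve_ivp(rhs, (0.0, 200.0), [0.20, 0.10, 0.0])
p1_end, p2_end, S_end = sol.y[:, -1]
print(f"p1 = {p1_end:.3f}, p2 = {p2_end:.3f}, S = {S_end:.3f}")
```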

  6. Ignatev N.A., Tuliev U.Y.
    Semantic structuring of text documents based on patterns of natural language entities
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

    The technology of creating patterns from natural language words (concepts) based on text data in the bag of words model is considered. Patterns are used to reduce the dimension of the original space in the description of documents and search for semantically related words by topic. The process of dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide them into themes in the latent space.

    It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, for the analysis of which it is necessary to use a common dictionary. The membership of words in the common dictionary is initially unknown. Objects of the two classes are considered as opposed to each other. Quantitative parameters of this opposition are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.

    To calculate the stability, the feature values are divided into non-intersecting intervals, the optimal boundaries of which are determined by a special criterion. The maximum stability is achieved under the condition that the boundaries of each interval contain values of one of the two classes.

    The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. The process of formation of patterns and latent features based on them is implemented according to the rules of hierarchical agglomerative grouping.

    A set of latent features is used for cluster analysis of documents using metric grouping algorithms. The analysis applies the coefficient of content authenticity based on the data on the belonging of documents to classes. The coefficient is a numerical characteristic of the dominance of class representatives in groups.

    To divide documents into topics, it is proposed to use the union of groups in relation to their centers. As patterns for each topic, a sequence of words ordered by frequency of occurrence from a common dictionary is considered.

    The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the general dictionary are formed for 4 topics.
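
    The stability computation described above can be illustrated with a simplified proxy: the feature values are split into non-intersecting intervals and the score rewards intervals dominated by a single class (the "special criterion" for the optimal boundaries is not reproduced; equal-frequency boundaries and toy data are used purely for illustration). Features ordered by such a score would then be combined into the word patterns.

```python
import numpy as np

def stability(values: np.ndarray, labels: np.ndarray, n_intervals: int = 5) -> float:
    """Score in [0.5, 1.0]; 1.0 means every interval is dominated by one class."""
    y = labels[np.argsort(values)]              # class labels in order of the feature
    purities = []
    for idx in np.array_split(np.arange(len(y)), n_intervals):   # equal-frequency bins
        share = y[idx].mean()                   # fraction of class-1 objects in the bin
        purities.append(max(share, 1.0 - share))
    return float(np.mean(purities))

rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)
x_good = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 50)])  # separable feature
x_bad = rng.normal(0, 1, 100)                                          # uninformative feature
print(stability(x_good, y), stability(x_bad, y))   # high score vs. score near 0.5
```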

  7. Tomonin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of a recently proposed nearly-optimal algorithm which does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the non-composite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e. to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.
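
    Schematically, the problem class discussed can be written as follows (illustrative notation, not necessarily the paper's exact formulation):

    $$\min_{x \in \mathbb{R}^{d_x}} \; \max_{y \in \mathbb{R}^{d_y}} \; f(x) + F(x, y) - g(y), \qquad f(x) = \frac{1}{m}\sum_{i=1}^{m} f_i(x),$$

    where $F$ is smooth, non-bilinear, strongly convex in $x$ and strongly concave in $y$ (with different condition numbers in $x$ and $y$), and $f$, $g$ are the composite terms, which are smooth with finite-sum structure or prox-friendly depending on the setting.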

  8. Skvortsova D.A., Chuvilgin E.L., Smirnov A.V., Romanov N.O.
    Development of a hybrid simulation model of the assembly shop
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1359-1379

    In the presented work, a hybrid simulation model of an assembly shop has been developed in the AnyLogic environment, which allows one to select the parameters of production systems. To build a hybrid model of the system under study, discrete-event and agent-based modeling are combined into a single model with integrating interaction. Within the framework of this work, a mechanism for the development of a production system consisting of several participant agents is described. Each agent corresponds to a class in which a set of agent parameters is specified. The simulation model takes into account three main groups of operations performed sequentially, and the logic for working with rejected sets is defined. The product assembly process takes place in a multi-phase open queueing system with waiting. There are also features of a closed system: flows of rejects returned for reprocessing. Requests in the queue are processed under a FIFO discipline. For the evaluation of the production system, the simulation model includes several functions that describe the number of finished products, the average product assembly time, the number and percentage of rejects, and the simulation results, as well as variables in which the calculated utilization factors are stored. A series of simulation experiments were carried out in order to study the behavior of the agents of the system in terms of the overall performance indicators of the production system. During the experiments, it was found that the average product assembly time is strongly influenced by such parameters as the average assembly speed and the average time to complete operations. Within the given constraint interval, we managed to select a set of parameters that achieves the highest possible utilization of the assembly line. This experiment implements the basic principle of agent-based modeling: decentralized agents make individual contributions and affect the operation of the entire simulated system as a whole. As a result of the experiments, thanks to the selection of a large set of parameters, it was possible to achieve high performance indicators of the assembly shop, namely, to increase productivity by 60% and to reduce the average product assembly time by 38%.
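
    The flow of products through the modelled shop can be sketched in a few lines (a schematic stand-in, not the AnyLogic model described; the operation times, the reject probability and the collected statistics are assumed values).

```python
import random
from collections import deque

# Products pass through three groups of operations in FIFO order; a rejected
# product is sent back for reprocessing, as in the closed reject loop above.
random.seed(1)
OP_TIME = [2.0, 3.0, 1.5]         # mean duration of each operation group (assumed)
P_REJECT = 0.1                    # probability of reject after the last group (assumed)
N_PRODUCTS = 200

queue = deque(range(N_PRODUCTS))  # FIFO queue of product ids
finished, rejects = 0, 0
clock = 0.0

while queue:
    pid = queue.popleft()
    for t in OP_TIME:                            # three sequential operation groups
        clock += random.expovariate(1.0 / t)     # stochastic processing time
    if random.random() < P_REJECT:               # reject: back into the FIFO queue
        rejects += 1
        queue.append(pid)
    else:
        finished += 1

print(f"finished: {finished}, rejects: {rejects} "
      f"({100.0 * rejects / (finished + rejects):.1f}% of passes), "
      f"throughput: {finished / clock:.3f} products per time unit")
```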

  9. Fialko N.S., Olshevets M.M., Lakhno V.D.
    Numerical study of the Holstein model in different thermostats
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 489-502

    Based on the Holstein Hamiltonian, the dynamics of a charge introduced into a molecular chain of sites was modeled at different temperatures. In the calculation, the temperature of the chain is set by the initial data: random Gaussian distributions of velocities and site displacements. Various options for the initial charge density distribution are considered. Long-term calculations show that the system moves to fluctuations near a new equilibrium state. For the same initial velocities and displacements, the average kinetic energy, and accordingly the temperature T of the chain, varies depending on the initial distribution of the charge density: it decreases when a polaron is introduced into the chain, or increases if at the initial moment the electronic part of the energy is at its maximum. A comparison is made with the results obtained previously in the model with a Langevin thermostat. In both cases, the existence of a polaron is determined by the thermal energy of the entire chain.

    According to the simulation results, the transition from the polaron mode to the delocalized state occurs in the same range of values of the thermal energy of a chain of $N$ sites, $\sim NT$, for both thermostat options, with one additional adjustment: for the Hamiltonian system the temperature does not correspond to the initially set one but is determined after long-term calculations from the average kinetic energy of the chain.

    In the polaron region, the use of different methods for simulating temperature leads to a number of significant differences in the dynamics of the system. In the region of the delocalized charge state, at high temperatures, the results averaged over a set of trajectories in the system with a random force and the results averaged over time for the Hamiltonian system are close, which does not contradict the ergodic hypothesis. From a practical point of view, at high temperatures T ≈ 300 K, any of these options for setting the thermostat can be used when simulating charge transfer in homogeneous chains.
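
    The thermostat-free setup can be illustrated on the lattice part of the model alone (a sketch in dimensionless units; the charge coupling, whose presence is exactly what shifts the measured temperature, is omitted, and the parameter values are assumed).

```python
import numpy as np

# Independent harmonic site oscillators (the dispersionless lattice part of the
# Holstein model). The temperature is set only through the initial Gaussian
# velocities and displacements; the Hamiltonian dynamics is then integrated and
# the effective temperature is recovered from the time-averaged kinetic energy.
rng = np.random.default_rng(0)
N, T0, omega = 200, 1.0, 1.0                  # sites, target temperature, site frequency
dt, steps = 0.01, 100_000

u = rng.normal(0.0, np.sqrt(T0) / omega, N)   # displacements: <0.5*omega^2*u^2> = T0/2
v = rng.normal(0.0, np.sqrt(T0), N)           # velocities (unit masses): <0.5*v^2> = T0/2

kin = []
f = -omega**2 * u
for _ in range(steps):                        # velocity Verlet integration
    v += 0.5 * dt * f
    u += dt * v
    f = -omega**2 * u
    v += 0.5 * dt * f
    kin.append(0.5 * np.dot(v, v))

T_eff = 2.0 * np.mean(kin[steps // 2:]) / N   # T = 2<E_kin>/N in these units
print(f"temperature set by the initial data: {T0}, measured from <E_kin>: {T_eff:.3f}")
```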

  10. Varaponov V.V., Savkina N.V., Diachkovsky A.S., Chupashev A.V.
    Calculation of aerodynamic factor of front resistance of a body in subsonic and transonic modes of movement by means of an ANSYS Fluent package
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 845-853

    The gas-dynamics approach to the calculation of the aerodynamic characteristics of modern aircraft makes it necessary to consider a complex and extensive set of problems requiring the development of new methods for their solution. The drag coefficient for two bodies in subsonic and transonic flow regimes was calculated using the ANSYS Fluent software. The numerical solution and the experimental results are in good agreement; the calculation error does not exceed 3%.
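
    For reference, the drag coefficient evaluated in such computations is the standard nondimensional quantity

    $$c_x = \frac{F_x}{\tfrac{1}{2}\,\rho_\infty V_\infty^{2}\,S},$$

    where $F_x$ is the drag force, $\rho_\infty$ and $V_\infty$ are the free-stream density and velocity, and $S$ is the reference (midsection) area.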

