Search results for 'practical application':
Articles found: 51
  1. Varshavsky L.E.
    Iterative decomposition methods in modelling the development of oligopolistic markets
    Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1237-1256

    One of the principles of forming a competitive market environment is to create conditions for economic agents to implement Nash – Cournot optimal strategies. Under the standard approach to determining Nash – Cournot optimal market strategies, each economic agent must have complete information about the indicators and dynamic characteristics of all market participants, an assumption that rarely holds in practice.

    In this regard, to find Nash – Cournot optimal solutions in dynamic models, it is necessary to have a coordinator who has complete information about the participants. However, in the case of a large number of game participants, even if the coordinator has the necessary information, computational difficulties arise associated with the need to solve a large number of coupled equations (in the case of linear dynamic games — Riccati matrix equations).

    This motivates decomposing the general problem of determining optimal strategies for market participants into separate (local) problems. For linear dynamic games with a quadratic criterion, approaches based on the iterative decomposition of the coupled matrix Riccati equations into local Riccati equations have been studied. This article considers a simpler approach to the iterative determination of the Nash – Cournot equilibrium in an oligopoly: decomposition using operational calculus (the operator method).

    The proposed approach is based on the following procedure. A virtual coordinator, which knows the parameters of the inverse demand function, announces prices for the planning period. Taking this price dynamics as fixed, the oligopolists determine their strategies in accordance with a slightly modified optimality criterion. The resulting optimal production volumes are sent back to the coordinator, who uses an iterative algorithm to adjust the price dynamics announced at the previous step; a minimal numerical sketch of this loop is given below.
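    The sketch below is a hypothetical illustration of this coordinator loop for the static linear case, not the paper's operator-method implementation. The inverse demand parameters a and b, the marginal costs c and the relaxation step lam are assumed for the example, and the quadratic term (b/2)*q_i^2 stands in for the "slightly modified optimality criterion" mentioned above.

```python
import numpy as np

# Hypothetical static illustration of the coordinator-based iteration described above.
# Assumed: linear inverse demand p(Q) = a - b*Q and constant marginal costs c_i.
a, b = 100.0, 1.0                  # inverse demand parameters (assumed)
c = np.array([10.0, 20.0])         # duopolists' marginal costs (assumed)
lam = 0.3                          # coordinator's relaxation step; needs lam < 2/(n+1)

p = a / 2                          # coordinator's initial price guess
for _ in range(500):
    # Each firm maximizes (p - c_i)*q_i - (b/2)*q_i^2 at the announced price:
    q = np.maximum((p - c) / b, 0.0)
    # Coordinator re-prices total supply and relaxes the announced price towards it:
    p_next = (1 - lam) * p + lam * (a - b * q.sum())
    if abs(p_next - p) < 1e-10:
        p = p_next
        break
    p = p_next

print(f"iterative price: {p:.4f}, outputs: {q}")
print(f"analytic Cournot price: {(a + c.sum()) / (len(c) + 1):.4f}")
```

    At the fixed point the firms' first-order conditions coincide with the Cournot conditions, so the relaxed price iteration settles at the analytic Cournot price (a + sum(c_i)) / (n + 1) for this linear example.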

    The proposed procedure is illustrated by the example of a static and dynamic model of rational behavior of oligopoly participants who maximize the net present value (NPV). Using the methods of operational calculus (and in particular, the inverse Z-transformation), conditions are found under which the iterative procedure leads to equilibrium levels of price and production volumes in the case of linear dynamic games with both quadratic and nonlinear (concave) optimization criteria.

    The approach is applied to examples of a duopoly, a triopoly, a duopoly in a market with a differentiated product, and a duopoly with interacting oligopolists, all with a linear inverse demand function. For these examples, the price and output dynamics computed from the coupled matrix Riccati equations in Matlab (labeled Riccati in the table) and by the proposed iterative method in the widely available Excel system turn out to be practically identical.

    In addition, the application of the proposed iterative procedure is illustrated by the example of a duopoly with a nonlinear demand function.

  2. Aronov I.Z., Maksimova O.V.
    Modeling consensus building in conditions of dominance in a social group
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1067-1078

    In many social groups, for example, in technical committees for standardization at the international, regional and national levels, in European communities, among managers of ecovillages, in social movements (such as Occupy), and in international organizations, decision-making is based on the consensus of the group members. Instead of voting, where the majority wins over the minority, consensus allows for a solution that each member of the group supports, or at least considers acceptable. This approach ensures that the opinions, ideas and needs of all group members are taken into account. At the same time, reaching consensus takes a long time, since agreement must be ensured within the group regardless of its size. It has been shown that in some situations the number of iterations (agreements, negotiations) is very large. Moreover, in the decision-making process there is always a risk that a minority in the group will block the decision, which not only delays decision-making but can make it impossible. Typically, such a minority consists of one or two odious people in the group. Such a member tries to dominate the discussion, never changing his opinion and ignoring the positions of other colleagues. This leads to a delay in the decision-making process, on the one hand, and to a deterioration in the quality of consensus, on the other, since only the opinion of the dominant member of the group has to be taken into account. To overcome the crisis in this situation, it was proposed to make decisions on the principle of «consensus minus one» or «consensus minus two», that is, to disregard the opinion of one or two odious members of the group.

    Based on modeling consensus with regular Markov chains, the article examines how much decision-making time is reduced under the «consensus minus one» rule, when the position of the dominant member of the group is not taken into account.

    The general conclusion that follows from the simulation results is that the rule of thumb for making decisions on the principle of «consensus minus one» has a corresponding mathematical justification. The simulation results showed that the application of the «consensus minus one» rule can reduce the time to reach consensus in the group by 76–95%, which is important for practice.

    The average number of agreement rounds depends hyperbolically on the average authoritarianism of the group members (excluding the dominant one), which means that the agreement process can be substantially delayed when the members' authoritarianism is high.
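    The effect of excluding a dominant member can be illustrated with a toy opinion-averaging simulation. The sketch below uses a DeGroot-type row-stochastic influence matrix (which induces a regular Markov chain when all weights are positive) as a stand-in for the paper's model; the group size, the members' self-weights and the convergence tolerance are assumptions, and the resulting iteration counts are illustrative only, not the paper's 76–95% figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def rounds_to_consensus(self_weight, tol=1e-3, max_iter=100_000):
    """DeGroot-style averaging: each member keeps self_weight[i] of their own opinion
    and spreads the remaining weight evenly over the other members."""
    n = len(self_weight)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, :] = (1 - self_weight[i]) / (n - 1)
        W[i, i] = self_weight[i]
    x = rng.uniform(0, 1, n)                     # initial opinions
    for k in range(1, max_iter + 1):
        x = W @ x
        if x.max() - x.min() < tol:              # opinions have effectively agreed
            return k
    return max_iter

n = 10
alpha = np.full(n, 0.5)                          # ordinary members' "authoritarianism" (assumed)
alpha[0] = 0.999                                 # one near-intransigent (dominant) member

print("full group:           ", rounds_to_consensus(alpha))
print("'consensus minus one':", rounds_to_consensus(alpha[1:]))
```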

  3. Salem N., Hudaib A., Al-Tarawneh K., Salem H., Tareef A., Salloum H., Mazzara M.
    A survey on the application of large language models in software engineering
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726

    Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.

  4. Shpitonkov M.I.
    Application of correlation adaptometry technique to sports and biomedical research
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 345-354

    The paper outlines approaches to the mathematical modeling of the correlation adaptometry technique, which is widely used in biology and medicine. The analysis is based on models employed in the description of structured biological systems. It is assumed that the distribution density of the biological population numbers satisfies the Kolmogorov-Fokker-Planck equation. The technique was used to evaluate the effectiveness of treatment of patients with obesity. Depending on the degree of obesity and the nature of comorbidities, the patients were divided into three groups. For all studied groups, the weight of the correlation graph computed from the indicators measured in the patients decreased, which characterizes the effectiveness of the treatment. The technique was also used to assess the intensity of training loads in rowing for three age groups; it was shown that the athletes of the youth group trained under the highest strain. In addition, correlation adaptometry was used to evaluate the effectiveness of hormone replacement therapy in women. Depending on the prescribed drug, the patients were divided into four groups. A standard analysis of the dynamics of the mean values of the indicators showed that the means normalized during the course of treatment in all groups. However, correlation adaptometry revealed that the weight of the correlation graph decreased during the first six months and increased during the second six months in all study groups. This indicates that the annual course of hormone replacement therapy is excessively long and that a transition to a semiannual course is advisable.
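    The key quantity in correlation adaptometry is the weight of the correlation graph built over the measured indicators. A minimal sketch of its computation is given below; the 0.5 correlation threshold and the synthetic before/after data are assumptions made purely for illustration.

```python
import numpy as np

def correlation_graph_weight(X, threshold=0.5):
    """Weight of the correlation graph: the sum of |Pearson r| over pairs of indicators
    whose correlation magnitude exceeds the threshold (0.5 is a common choice)."""
    r = np.corrcoef(X, rowvar=False)             # indicators are columns, patients are rows
    iu = np.triu_indices_from(r, k=1)            # each pair counted once
    strong = np.abs(r[iu])
    return strong[strong > threshold].sum()

# Synthetic example: 30 patients x 6 indicators before and after a course of treatment.
rng = np.random.default_rng(1)
common = rng.normal(size=(30, 1))
before = 0.8 * common + 0.4 * rng.normal(size=(30, 6))   # strongly coupled indicators
after  = 0.3 * common + 0.9 * rng.normal(size=(30, 6))   # coupling relaxed after treatment

print("G before:", round(correlation_graph_weight(before), 2))
print("G after: ", round(correlation_graph_weight(after), 2))
```

    A decrease in the graph weight between measurements is read as an improvement in the group's adaptation, which is exactly the criterion used in the studies summarized above.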

  5. Malkov S.Yu., Shpyrko O.A., Davydova O.I.
    Features of social interactions: the basic model
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1323-1335

    The paper presents the results of research on the construction of a mathematical model of moral choice based on the development of the approach proposed by V. A. Lefebvre. Unlike V. A. Lefebvre, who considered the rather speculative situation of a subject's moral choice between abstract “good” and “evil” under pressure from the outside world, taking into account the subject's subjective perception of this pressure, our study considers a more mundane and practically significant situation. We consider the case when the subject, in making decisions, is guided by his individual perception of the outside world (which may be distorted, for example, by purposeful external informational influence and manipulation of his consciousness), while “good” and “evil” are not abstract but are determined by the value system adopted in the particular society under consideration and tied to a specific ideology or religion, which may differ between societies.

    As a result of the conducted research, a basic mathematical model has been developed, and special cases of its application have been considered. Some patterns related to moral choice are revealed and described formally. In particular, the situation of manipulation of consciousness is described in the language of the model, and a law of decreasing “morality” is formulated for a society consisting of so-called free subjects (that is, those who strive to act in accordance with their intentions and whose actions correspond to the image of their “I”).

  6. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in many applied problems. The issues of developing efficient computational procedures for such problems in high-dimensional spaces are very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they lead to low speed guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that allow relatively optimistic results on the rate of convergence. For example, one option for an additional assumption is the sharp minimum condition proposed in the late 1960s by B. T. Polyak. If the minimal value of the function is available, then for Lipschitz-continuous problems with a sharp minimum it is possible to propose a subgradient method with a Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp minimum condition itself look rather restrictive. In this regard, we propose a generalized sharp minimum condition, somewhat similar to the inexact oracle proposed recently by Devolder – Glineur – Nesterov. The proposed approach extends the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show the applicability of the proposed approach to strongly convex non-smooth problems and make an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, some results were obtained on the applicability of the proposed technique to certain types of problems with relaxed convexity assumptions: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. We also study a generalization of the described technique to the situation in which a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which it is possible, in practice, to dispense with projecting the iterative sequence onto the feasible set of the problem.
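    The classical Polyak step-size mentioned above is simple to state in code. The sketch below applies it to a toy non-smooth problem with a sharp minimum, the l1-norm, where the minimal value f* = 0 is known; the test function and the iteration budget are assumptions made for illustration, and the generalized (inexact) condition proposed in the paper is not reproduced here.

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iter=100):
    """Subgradient method with the Polyak step-size h_k = (f(x_k) - f*) / ||g_k||^2,
    which assumes the minimal value f_star is known, as in the setting discussed above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):            # already optimal (or zero subgradient)
            break
        x = x - (gap / np.dot(g, g)) * g
    return x

# Toy problem with a sharp minimum: f(x) = ||x||_1, minimized at the origin with f* = 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)                   # a valid subgradient of the l1-norm
x = polyak_subgradient(f, subgrad, x0=[3.0, -2.0, 1.5], f_star=0.0, n_iter=50)
print("final point:", x, " f(x) =", f(x))
```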

  7. Shaheen L., Rasheed B., Mazzara M.
    Tree species detection using hyperspectral and Lidar data: A novel self-supervised learning approach
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1747-1763

    Accurate tree identification is essential for ecological monitoring, biodiversity assessment, and forest management. Traditional manual survey methods are labor-intensive and ineffective over large areas. Advances in remote sensing technologies, including LiDAR and hyperspectral imaging, make automated, accurate detection possible in many settings.

    Nevertheless, these technologies typically require extensive labeled data and manual feature engineering, which restricts scalability. This research proposes a new Self-Supervised Learning (SSL) method based on the SimCLR framework to enhance the classification of tree species using unlabelled data. The SSL model automatically learns robust features by merging spectral information from hyperspectral imagery with structural information from LiDAR, eliminating the need for manual intervention.
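    The contrastive objective behind SimCLR is the NT-Xent loss over paired "views" of the same sample. The sketch below computes it with NumPy, treating a hyperspectral embedding and a LiDAR-derived embedding of the same tree crown as the two views; this pairing, the temperature value and the synthetic embeddings are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) contrastive loss for two batches of embeddings, where z1[i] and
    z2[i] are two views of the same sample (here: spectral and structural views of a crown)."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)               # cosine similarity via normalization
    sim = (z @ z.T) / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                                 # exclude self-pairs from the softmax
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])   # index of each row's positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy check with synthetic "hyperspectral" and "LiDAR" embeddings for 8 tree crowns.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
z_spectral = base + 0.1 * rng.normal(size=(8, 16))
z_lidar    = base + 0.1 * rng.normal(size=(8, 16))
print("NT-Xent loss:", nt_xent_loss(z_spectral, z_lidar))
```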

    We evaluate the performance of the SSL model against traditional classifiers, including Random Forest (RF), Support Vector Machines (SVM), and Supervised Learning methods, using a dataset from the ECODSE competition, which comprises both labeled and unlabeled samples of tree species in Florida’s Ordway-Swisher Biological Station. The SSL method has been demonstrated to be significantly more effective than traditional methods, with a validation accuracy of 97.5% compared to 95.56% for Semi-SSL and 95.03% for CNN in Supervised Learning.

    Subsampling experiments showed that the SSL technique remains effective with less labeled data: the model achieves good accuracy even when only 20% of the data points are labeled. This demonstrates the practical applicability of SSL in circumstances with insufficient labeled data, such as large-scale forest monitoring.

  8. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

    The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, on descriptions of the proposed algorithms for detecting change points, and on the specifics of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.

    To address the research objective, a neural network was trained. Several ways of forming the training sample were considered in the course of the study, differing in the nature of their statistical parameters. In order to improve the quality of training and obtain more accurate results, a feature generation methodology was developed to produce the features that serve as input to the neural network. These features, in turn, were derived from the analysis of mathematical expectations and standard deviations of the time series over specific intervals. The potential for combining these features to achieve more stable results is also investigated.
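    One plausible reading of the interval-based features described above is a set of rolling means and standard deviations computed over several window lengths. The sketch below builds such a feature matrix for a synthetic series with a single changepoint; the window lengths and the synthetic data are assumptions, not the proprietary methodology of the paper.

```python
import numpy as np

def rolling_stat_features(series, windows=(5, 20, 60)):
    """Feature matrix of rolling means and standard deviations over several window lengths."""
    series = np.asarray(series, dtype=float)
    feats = []
    for w in windows:
        mean = np.array([series[max(0, t - w + 1): t + 1].mean() for t in range(len(series))])
        std  = np.array([series[max(0, t - w + 1): t + 1].std()  for t in range(len(series))])
        feats += [mean, std]
    return np.column_stack(feats)                 # shape: (T, 2 * len(windows))

# Synthetic series with a changepoint at t = 150: a shift in both level and volatility.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 3.0, 150)])
X = rolling_stat_features(x)
print(X.shape)                                    # (300, 6) -- ready to feed to a classifier
```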

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

    Alongside the description of the model's operating principles, possibilities for its further improvement are considered, including modernization of the proposed model's structure, optimization of training data generation, and refinement of feature formation. Additionally, the authors set themselves the task of advancing existing approaches toward real-time changepoint detection.

  9. Romanetz I.A., Atopkov V.A., Guria G.T.
    Topological basis of ECG classification
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 895-915

    A new approach to the identification of subtle, diagnostically significant changes in electrocardiograms is suggested. The approach is based on the analysis of topological transformations in wavelet spectra associated with electrocardiograms. Possible practical applications of the developed approach are discussed.
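    As a minimal sketch of the first step implied by this approach, the snippet below computes a continuous wavelet spectrum of an ECG-like signal with PyWavelets; the sampling rate, the Morlet wavelet, the scale range and the synthetic signal are assumptions, and the topological analysis of the resulting time-scale surface is beyond this snippet.

```python
import numpy as np
import pywt

fs = 250                                          # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)   # toy signal

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(ecg_like, scales, 'morl', sampling_period=1 / fs)
print(coeffs.shape)                               # (n_scales, n_samples) time-scale spectrum
```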

  10. Voronov R.E., Maslennikov E.M., Beznosikov A.N.
    Communication-efficient solution of distributed variational inequalities using biased compression, data similarity and local updates
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1813-1827

    Variational inequalities (VIs) constitute a broad class of problems with applications in a number of fields, including game theory, economics, and machine learning. Today’s practical applications of VIs are becoming increasingly computationally demanding. It is therefore necessary to employ distributed computations to solve such problems in a reasonable time. In this context, workers have to exchange data with each other, which creates a communication bottleneck. There are three main techniques to reduce the cost and the number of communications: the similarity of local operators, the compression of messages, and the use of local steps on devices. There is an algorithm that uses all of these techniques to solve the VI problem and outperforms all previous methods in terms of communication complexity. However, this algorithm is limited to unbiased compression. Meanwhile, biased (contractive) compression leads to better results in practice but requires additional modifications within an algorithm and more effort to prove convergence. In this work, we develop a new algorithm that solves distributed VI problems using data similarity, contractive compression and local steps on devices, derive the theoretical convergence of such an algorithm, and perform experiments to show the applicability of the method.
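    A standard example of the biased (contractive) compression mentioned above is the Top-K sparsifier, which keeps only the k largest-magnitude coordinates. The sketch below checks its contraction property on random data; it illustrates the compressor class only, not the new distributed algorithm developed in the paper.

```python
import numpy as np

def top_k(x, k):
    """Top-K sparsifier: keep the k largest-magnitude coordinates and zero out the rest.
    It is biased but contractive: ||top_k(x) - x||^2 <= (1 - k/d) * ||x||^2."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
k = 100
cx = top_k(x, k)
lhs = np.linalg.norm(cx - x) ** 2
rhs = (1 - k / x.size) * np.linalg.norm(x) ** 2
print(f"||C(x) - x||^2 = {lhs:.1f} <= (1 - k/d)*||x||^2 = {rhs:.1f}")
```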
