Search results for 'practical application':
Articles found: 49
  1. Salem N., Hudaib A., Al-Tarawneh K., Salem H., Tareef A., Salloum H., Mazzara M.
    A survey on the application of large language models in software engineering
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726

    Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.

  2. Shpitonkov M.I.
    Application of correlation adaptometry technique to sports and biomedical research
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 345-354

The paper outlines approaches to the mathematical modeling of the correlation adaptometry technique, which is widely used in biology and medicine. The analysis is based on models employed to describe structured biological systems; it is assumed that the distribution density of the biological population numbers satisfies the Kolmogorov–Fokker–Planck equation. The technique was used to evaluate the effectiveness of treatment of patients with obesity. All patients were divided into three groups depending on the degree of obesity and the nature of comorbidity. A decrease in the weight of the correlation graph, computed from the indicators measured in the patients, characterizes the effectiveness of the treatment in all studied groups. The technique was also used to assess the intensity of training loads in three age groups of rowers; the youth group was shown to train under the highest strain. Finally, correlation adaptometry was used to evaluate the effectiveness of hormone replacement therapy in women, with patients divided into four groups depending on the prescribed drug. A standard analysis of the dynamics of mean indicator values showed that the averages normalized over the course of treatment in all groups. However, correlation adaptometry revealed that the weight of the correlation graph decreased during the first six months of treatment and increased during the second six months in all study groups. This indicates that the annual course of hormone replacement therapy is excessively long and that a transition to a six-month course is advisable.

    Views (last year): 10.
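
    A hedged illustration for entry 2: correlation adaptometry summarizes a set of measured indicators by the weight of their correlation graph. The sketch below assumes the common definition (the sum of absolute pairwise Pearson correlations that exceed a significance threshold); the function name, threshold value, and toy data are illustrative, not the author's exact computation.

```python
import numpy as np

def correlation_graph_weight(data: np.ndarray, threshold: float = 0.5) -> float:
    """Weight of the correlation graph: sum of |r_ij| over all pairs of
    indicators whose absolute Pearson correlation exceeds the threshold.

    data: (n_patients, n_indicators) matrix of measured indicators.
    """
    r = np.corrcoef(data, rowvar=False)   # pairwise Pearson correlations
    iu = np.triu_indices_from(r, k=1)     # count each pair once
    strong = np.abs(r[iu])
    return float(strong[strong > threshold].sum())

# Toy comparison: a drop in graph weight after treatment is read as adaptation.
rng = np.random.default_rng(0)
before = rng.normal(size=(30, 8)) + rng.normal(size=(30, 1))  # shared factor -> correlated
after = rng.normal(size=(30, 8))                              # weaker correlations
print(correlation_graph_weight(before), correlation_graph_weight(after))
```
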
  3. Malkov S.Yu., Shpyrko O.A., Davydova O.I.
    Features of social interactions: the basic model
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1323-1335

The paper presents the results of research on a mathematical model of moral choice that develops the approach proposed by V. A. Lefebvre. Unlike V. A. Lefebvre, who considered the rather speculative situation of a subject’s moral choice between abstract “good” and “evil” under pressure from the outside world, taking into account the subject’s perception of this pressure, our study considers a more mundane and practically significant situation: the subject, when making decisions, is guided by his individual perception of the outside world (which may be distorted, for example, by purposeful external informational influence and manipulation of his consciousness), and “good” and “evil” are not abstract but are determined by the value system adopted in the particular society under consideration and tied to a specific ideology or religion, which may differ between societies.

As a result of the conducted research, a basic mathematical model has been developed and special cases of its application have been considered. Some patterns related to moral choice are revealed and formally described. In particular, the situation of manipulation of consciousness is described in the language of the model, and a law of decreasing “morality” is formulated for a society consisting of so-called free subjects, that is, those who strive to act in accordance with their intentions and to match the image of their “I” in their actions.

  4. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

Non-smooth optimization arises in many applied problems, and developing efficient computational procedures for such problems in high-dimensional spaces is highly topical. First-order (subgradient) methods are well suited here, but in fairly general settings they come with weak rate guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that admit relatively optimistic convergence-rate results. For example, one possible additional assumption is the sharp minimum condition proposed in the late 1960s by B. T. Polyak. When the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it is possible to use a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach covers a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp minimum condition itself look rather restrictive. In this regard, we propose a generalized sharp minimum condition, somewhat similar to the inexact oracle recently proposed by Devolder, Glineur, and Nesterov. The proposed approach extends the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the minimal value, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show that the proposed approach applies to strongly convex non-smooth problems and compare it experimentally with the known optimal subgradient method for this class of problems. We also obtain results on the applicability of the proposed technique to certain types of problems with relaxed convexity assumptions: the recently proposed notion of weak β-quasi-convexity and ordinary quasiconvexity. In addition, we study a generalization of the described technique to the situation where a δ-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, it is possible to avoid projecting the iterative sequence onto the feasible set of the problem.
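
    As a point of reference for entry 4, the sketch below shows the baseline the paper generalizes: the subgradient method with the Polyak step-size, which assumes the exact minimal value f* is known. It is not the relaxed variant proposed in the article; the test function is an illustrative sharp-minimum example.

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iter=200):
    """Subgradient method with the Polyak step-size:
    x_{k+1} = x_k - (f(x_k) - f*) / ||g_k||^2 * g_k.
    For Lipschitz functions with a sharp minimum this converges
    linearly in the argument."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break                      # already at (or below) the optimum
        x = x - gap / np.dot(g, g) * g
    return x

# Example: f(x) = ||x - c||_1 has a sharp minimum at c with f* = 0.
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.sum(np.abs(x - c))
subgrad = lambda x: np.sign(x - c)     # a valid subgradient of the l1-norm
print(polyak_subgradient(f, subgrad, np.zeros(3), f_star=0.0))  # ~ c
```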

  5. Shaheen L., Rasheed B., Mazzara M.
    Tree species detection using hyperspectral and Lidar data: A novel self-supervised learning approach
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1747-1763

Accurate tree identification is essential for ecological monitoring, biodiversity assessment, and forest management. Traditional manual survey methods are labor-intensive and ineffective over large areas. Advances in remote sensing technologies, including LiDAR and hyperspectral imaging, enable accurate automated detection in many fields.

Nevertheless, these technologies typically require extensive labeled data and manual feature engineering, which restricts scalability. This research proposes a new Self-Supervised Learning (SSL) method based on the SimCLR framework to enhance the classification of tree species using unlabeled data. The SSL model automatically learns robust features by merging spectral information from hyperspectral imagery with structural information from LiDAR, eliminating the need for manual intervention.

    We evaluate the performance of the SSL model against traditional classifiers, including Random Forest (RF), Support Vector Machines (SVM), and Supervised Learning methods, using a dataset from the ECODSE competition, which comprises both labeled and unlabeled samples of tree species in Florida’s Ordway-Swisher Biological Station. The SSL method has been demonstrated to be significantly more effective than traditional methods, with a validation accuracy of 97.5% compared to 95.56% for Semi-SSL and 95.03% for CNN in Supervised Learning.

Subsampling experiments showed that the SSL technique remains effective with less labeled data: the model achieves good accuracy even with only 20% of the data points labeled. This finding demonstrates SSL’s practical value in circumstances with insufficient labeled data, such as large-scale forest monitoring.
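
    For entry 5, here is a minimal NumPy sketch of the NT-Xent contrastive loss at the core of the SimCLR framework the authors adopt. The encoders, the fusion of hyperspectral and LiDAR inputs, and all hyperparameters are assumptions left outside the sketch; random embeddings stand in for the model’s projections.

```python
import numpy as np

def nt_xent_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """NT-Xent loss (SimCLR): two augmented views of the same sample form
    the positive pair; every other embedding in the batch is a negative.
    z1, z2: (batch, dim) projections of the two views."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    # the positive for row i is its other view, at index (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())

# Random embeddings stand in for projections of hyperspectral + LiDAR views.
rng = np.random.default_rng(1)
z1, z2 = rng.normal(size=(16, 32)), rng.normal(size=(16, 32))
print(nt_xent_loss(z1, z2))
```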

  6. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical part of the study draws on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed changepoint detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.

To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. To improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to produce the features that serve as input to the neural network. These features were derived from an analysis of the mathematical expectations and standard deviations of the time series over specific intervals. The potential for combining these features to achieve more stable results is also investigated.

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

As the principles of the model’s operation are described, possibilities for its further improvement are considered, including modernizing the proposed model’s structure, optimizing training data generation, and refining feature formation. The authors also aim to advance existing approaches toward real-time changepoint detection.
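
    Entry 6 builds its features from means and standard deviations of the series over intervals. The sketch below is an assumed, simplified rendering of that idea, since the paper’s proprietary feature set is not spelled out: contrast the two statistics across adjacent windows and treat large jumps as changepoint evidence.

```python
import numpy as np

def window_features(x: np.ndarray, window: int = 20) -> np.ndarray:
    """Assumed, simplified feature generation for changepoint detection:
    for each time step, compare the mean and standard deviation of the
    windows immediately to its left and right. Large jumps in either
    statistic are evidence of a structural change."""
    n = len(x)
    feats = np.zeros((n, 2))
    for t in range(window, n - window):
        left, right = x[t - window:t], x[t:t + window]
        feats[t, 0] = abs(right.mean() - left.mean())  # shift in expectation
        feats[t, 1] = abs(right.std() - left.std())    # shift in volatility
    return feats

# Toy series with a mean shift at t = 100; the feature peaks near it.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(int(window_features(x)[:, 0].argmax()))  # close to 100
```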

  7. Romanetz I.A., Atopkov V.A., Guria G.T.
    Topological basis of ECG classification
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 895-915

A new approach is suggested for identifying subtle but diagnostically significant changes in electrocardiograms. The approach is based on the analysis of topological transformations in the wavelet spectra of electrocardiograms. Possible practical applications of the developed approach are discussed.

    Views (last year): 17. Citations: 4 (RSCI).
  8. Voronov R.E., Maslennikov E.M., Beznosikov A.N.
    Communication-efficient solution of distributed variational inequalities using biased compression, data similarity and local updates
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1813-1827

Variational inequalities (VIs) constitute a broad class of problems with applications in a number of fields, including game theory, economics, and machine learning. Today’s practical applications of VIs are becoming increasingly computationally demanding, so distributed computations are necessary to solve such problems in a reasonable time. In this context, workers have to exchange data with each other, which creates a communication bottleneck. There are three main techniques to reduce the cost and the number of communications: similarity of local operators, compression of messages, and local steps on devices. An existing algorithm uses all of these techniques to solve the VI problem and outperforms all previous methods in terms of communication complexity; however, it is limited to unbiased compression. Meanwhile, biased (contractive) compression leads to better results in practice but requires additional modifications within the algorithm and more effort to prove convergence. In this work, we develop a new algorithm that solves distributed VI problems using data similarity, contractive compression, and local steps on devices, derive its theoretical convergence, and perform experiments to show the applicability of the method.
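
    Entry 8 hinges on biased (contractive) compression. Top-k sparsification, sketched below, is a standard example of a contractive compressor satisfying ||C(x) - x||^2 <= (1 - k/d) ||x||^2; whether the paper uses exactly this operator is not stated, so treat it as a generic illustration.

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Top-k sparsification: a standard biased (contractive) compressor.
    Keeping the k largest-magnitude coordinates of a d-dimensional vector
    guarantees ||C(x) - x||^2 <= (1 - k/d) * ||x||^2, which is the
    contractive property such algorithms rely on."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of k largest |x_i|
    out[idx] = x[idx]
    return out

# Only the compressed vector (k values + indices) travels over the network.
x = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(top_k(x, 2))  # [ 0. -3.  0.  2.  0.]
```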

  9. Yudin N.E., Gasnikov A.V.
Regularization and acceleration of Gauss–Newton method
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1829-1840

We propose a family of Gauss–Newton methods for solving optimization problems and systems of nonlinear equations, based on the ideas of using an upper estimate of the norm of the residual of the system of nonlinear equations and quadratic regularization. The paper develops the «Three Squares Method» scheme by adding a momentum term to the update rule for the sought parameters. The resulting scheme has several remarkable properties. First, the paper algorithmically describes a whole parametric family of methods that minimize functionals of a special kind: compositions of the residual of a nonlinear equation and a unimodal functional. Such a functional, entirely consistent with the «gray box» paradigm of problem description, covers a large number of solvable problems related to applications in machine learning and regression. Second, the obtained family of methods is described as a generalization of several forms of the Levenberg–Marquardt algorithm, allowing implementation in non-Euclidean spaces as well. The algorithm describing the parametric family of Gauss–Newton methods uses an iterative procedure that performs an inexact parametrized proximal mapping and a shift using the momentum term. The paper contains a detailed analysis of the efficiency of the proposed family of Gauss–Newton methods; the derived estimates take into account the number of external iterations of the algorithm for solving the main problem, the accuracy and computational complexity of the local model representation, and the cost of oracle computation. Sublinear and linear convergence conditions based on the Polyak–Łojasiewicz inequality are derived for the family of methods. In both convergence regimes, the Lipschitz property of the residual of the nonlinear system of equations is assumed locally. Beyond the theoretical analysis of the scheme, the paper studies its practical implementation. In particular, in experiments with a suboptimal step, schemes for efficiently computing an approximation of the best step are given, which improves the convergence of the method in practice compared to the original «Three Squares Method». The proposed scheme combines several existing modifications of the Gauss–Newton method that are frequently used in practice; in addition, the paper proposes a monotone momentum modification of the family of developed methods, which does not slow down the search for a solution in the worst case and, in practice, demonstrates improved convergence.
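
    Entry 9 augments a quadratically regularized Gauss–Newton update with a momentum term. The sketch below illustrates that general idea with a Levenberg–Marquardt step plus heavy-ball momentum; it is not the paper’s «Three Squares Method» or its parametric family, and the damping and momentum constants are arbitrary choices.

```python
import numpy as np

def lm_momentum(residual, jacobian, x0, lam=1e-2, beta=0.5, n_iter=50):
    """Levenberg-Marquardt step with a heavy-ball momentum term:
    x_{k+1} = x_k - (J^T J + lam*I)^{-1} J^T r + beta * (x_k - x_{k-1}).
    Quadratic regularization (lam) plus momentum (beta) illustrates the
    general idea; constants are arbitrary, untuned choices."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)
        x, x_prev = x - step + beta * (x - x_prev), x
    return x

# Toy nonlinear least squares: recover a in y = exp(a*t) from noiseless data.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
residual = lambda x: np.exp(x[0] * t) - y
jacobian = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
print(lm_momentum(residual, jacobian, np.array([0.0])))  # ~ [0.7]
```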
