Search results for 'error function':
Articles found: 39
  1. Salenek I.A., Seliverstov Y.A., Seliverstov S.A., Sofronova E.A.
    Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146

    This work presents a new approach to constructing high-precision routes from transport detector data inside the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, for example the lack of interaction with the network while routes are being built. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are the incoming lanes and the environment is the road network. By performing actions that launch vehicles, agents receive a reward for matching the data from transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.

    Since the rlRouter is trained inside the SUMO simulation, it can recover routes more accurately by taking into account the interaction of vehicles with each other and with the network infrastructure. We modeled diverse traffic situations on three different junctions in order to compare the performance of SUMO's routers with the rlRouter. We used Mean Absolute Error (MAE) as the measure of deviation from both the cumulative detector data and the route data. The rlRouter achieved the closest agreement with the detector data. We also found that by maximizing the reward for matching the detectors, the resulting routes also get closer to the real ones. Although the routes recovered using the rlRouter are superior to those obtained with SUMO's tools, they do not fully correspond to the real ones because of the natural limitations of induction-loop detectors. To obtain more plausible routes, junctions would need to be equipped with other types of traffic counters, for example, camera detectors.
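
    The abstract compares routers by the mean absolute error between observed and simulated detector data. The minimal sketch below (not the authors' code; the function name and the example counts are illustrative) shows how such a deviation measure is computed.

```python
import numpy as np

def detector_mae(observed_counts, simulated_counts):
    """Mean Absolute Error between observed and simulated detector counts.

    Both arguments are assumed to be equal-length arrays of per-detector
    (or per-interval) vehicle counts; the data here are purely illustrative.
    """
    observed = np.asarray(observed_counts, dtype=float)
    simulated = np.asarray(simulated_counts, dtype=float)
    return float(np.mean(np.abs(observed - simulated)))

# Example: counts from three induction-loop detectors over one interval.
print(detector_mae([120, 85, 60], [110, 90, 55]))  # -> 6.67
```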

  2. Marchanko L.N., Kasianok Y.A., Gaishun V.E., Bruttan I.V.
    Modeling of rheological characteristics of aqueous suspensions based on nanoscale silicon dioxide particles
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1217-1252

    The rheological behavior of aqueous suspensions based on nanoscale silicon dioxide particles strongly depends on the dynamic viscosity, which directly affects the use of nanofluids. The purpose of this work is to develop and validate models for predicting dynamic viscosity from independent input parameters: silicon dioxide concentration SiO2, pH, and shear rate γ. The influence of the suspension composition on its dynamic viscosity is analyzed. Groups of suspensions with statistically homogeneous composition have been identified, within which the compositions are interchangeable. It is shown that at low shear rates the rheological properties of suspensions differ significantly from those obtained at higher shear rates. Significant positive correlations of the dynamic viscosity of the suspension with SiO2 concentration and pH were established, as well as a negative correlation with the shear rate γ. Regularized regression models of the dependence of the dynamic viscosity η on the concentrations of SiO2, NaOH, H3PO4, surfactant, and EDA (ethylenediamine) and on the shear rate γ were constructed. For more accurate prediction of dynamic viscosity, models based on neural network and machine learning algorithms (MLP multilayer perceptron, RBF radial basis function network, SVM support vector machine, RF random forest) were trained. The effectiveness of the constructed models was evaluated using various statistical metrics, including the mean absolute error (MAE), the mean squared error (MSE), the coefficient of determination R2, and the average absolute relative deviation (AARD%). The RF model proved to be the best on both the training and test samples. The contribution of each component to the constructed model is determined. It is shown that the SiO2 concentration has the greatest influence on the dynamic viscosity, followed by pH and shear rate γ. The accuracy of the proposed models is compared with that of previously published models. The results confirm that the developed models can be considered a practical tool for studying the behavior of nanofluids based on aqueous suspensions of nanoscale silicon dioxide particles.
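
    For readers who want to reproduce the kind of evaluation described above, the sketch below shows how the reported metrics (MAE, MSE, R2, AARD%) can be computed for a random forest regressor with scikit-learn. The data are synthetic placeholders, not the authors' measurements, and the simple generating formula is an assumption made only for the illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental data: columns are SiO2 concentration,
# pH and shear rate; the target is dynamic viscosity (all values illustrative).
X = rng.uniform([5.0, 7.0, 1.0], [30.0, 11.0, 300.0], size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0.0, 0.2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
mse = mean_squared_error(y_test, pred)
r2 = r2_score(y_test, pred)
aard = 100.0 * np.mean(np.abs((y_test - pred) / y_test))  # average absolute relative deviation, %

print(f"MAE={mae:.3f}  MSE={mse:.3f}  R2={r2:.3f}  AARD%={aard:.2f}")
```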

  3. Stonyakin F.S., Ablaev S.S., Baran I.V., Alkousa M.S.
    Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412

    The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for minimizing functions from the class of weakly convex and relatively weakly convex functions that have the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach makes it possible to justify the convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the convergence rate is proved for minimization problems for weakly convex functions with a sharp minimum. The feature of this estimate is that it additionally takes into account the decrease of the distance from the current point of the method to the set of solutions as the number of iterations grows. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the convergence rate compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, conditions were obtained that guarantee the convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained that describes the influence of errors in the information about the (sub)gradient and the objective function available to the subgradient method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error δ>0, one can guarantee that the accuracy of the solution is comparable to δ.
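
    For reference, the Polyak stepsize and the sharp-minimum condition that the abstract refers to can be written in one standard form (given here for orientation, not copied from the paper):

```latex
% Subgradient step x_{k+1} = x_k - h_k g_k with g_k \in \partial f(x_k),
% the Polyak stepsize and the sharp-minimum condition (standard formulations):
\[
  h_k = \frac{f(x_k) - f^{*}}{\lVert g_k \rVert^{2}},
  \qquad
  f(x) - f^{*} \,\ge\, \alpha \, \operatorname{dist}(x, X_{*}) \quad \text{for some } \alpha > 0,
\]
% where f^{*} is the optimal value and X_{*} the solution set. Under such a condition
% (with suitable assumptions on the starting point in the weakly convex case), the
% distance to X_{*} contracts by a constant factor per iteration, i.e. the method
% converges at the rate of a geometric progression.
```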

  4. Matveev A.V.
    Mathematical features of individual dosimetric planning of radioiodotherapy based on pharmacokinetic modeling
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 773-784

    When determining therapeutic absorbed doses in the course of radioiodine therapy, the method of individual dosimetric planning is increasingly used in Russian medicine. However, the successful implementation of this method requires appropriate software that allows modeling the pharmacokinetics of radioiodine in the patient's body and calculating the therapeutic activity of the radiopharmaceutical drug needed to achieve the planned therapeutic absorbed dose in the thyroid gland.

    Purpose of the work: development of a software package for pharmacokinetic modeling and calculation of individual absorbed doses in radioiodine therapy based on a five-chamber model of radioiodine kinetics using two mathematical optimization methods. The work is based on the principles and methods of radiopharmaceutical pharmacokinetics (chamber modeling). To find the minimum of the residual functional when identifying the values of the transport constants of the model, the Hooke–Jeeves method and the simulated annealing method were used. The calculation of the dosimetric characteristics and of the administered therapeutic activity is based on the method of computing absorbed doses from the radioiodine activity functions in the chambers found during modeling. To identify the parameters of the model, the results of radiometry of the thyroid gland and urine of patients with radioiodine introduced into the body were used.

    A software package for modeling the kinetics of radioiodine during its oral intake has been developed. For patients with diffuse toxic goiter, the transport constants of the model were identified and individual pharmacokinetic and dosimetric characteristics (elimination half-lives, maximum thyroid activity and the time to reach it, absorbed doses to critical organs and tissues, administered therapeutic activity) were calculated. The activity-time relationships for all chambers of the model are obtained and analyzed. A comparative analysis of the pharmacokinetic and dosimetric characteristics calculated using the two mathematical optimization methods was performed. The stunning effect and its contribution to the errors in calculating absorbed doses were evaluated. From the comparative analysis of the pharmacokinetic and dosimetric characteristics calculated within the two optimization methods, it follows that the use of the more complex simulated annealing method in the software package does not lead to significant changes in the values of the characteristics compared to the simpler Hooke–Jeeves method. Errors in calculating absorbed doses within these mathematical optimization methods do not exceed the spread of absorbed dose values caused by the stunning effect.
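
    The identification step described above amounts to minimizing a residual functional over the transport constants. The sketch below illustrates this scheme on a deliberately simplified two-chamber model, with SciPy's dual_annealing standing in for the simulated annealing used in the paper; the measurement times, activity values and rate-constant bounds are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import dual_annealing

# Hypothetical radiometry data: time after intake (hours) and the measured fraction
# of administered activity in the thyroid (made-up numbers for illustration).
t_meas = np.array([2.0, 6.0, 24.0, 48.0, 96.0])
a_meas = np.array([0.18, 0.35, 0.42, 0.33, 0.20])

def thyroid_activity(k, t):
    """Two-chamber scheme blood -> thyroid with first-order constants k1 (uptake)
    and k2 (elimination); a simplification of the paper's five-chamber model."""
    k1, k2 = k
    def rhs(_, y):
        blood, thyroid = y
        return [-k1 * blood, k1 * blood - k2 * thyroid]
    sol = solve_ivp(rhs, (0.0, t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[1]

def residual(k):
    """Least-squares residual functional between model and measured activities."""
    return float(np.sum((thyroid_activity(k, t_meas) - a_meas) ** 2))

result = dual_annealing(residual, bounds=[(1e-3, 1.0), (1e-3, 1.0)],
                        maxiter=200, seed=0)
print("identified transport constants:", result.x, "residual:", result.fun)
```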

  5. Pletnev N.V., Dvurechensky P.E., Gasnikov A.V.
    Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444

    The article is devoted to studying the application of convex optimization methods to solve the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation belongs to the elliptic type. The Cauchy problem is formulated as an inverse problem and is reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using the solution of boundary value problems, which, in turn, are well-posed and can be approximately solved by standard numerical methods, such as finite-difference schemes and Fourier series expansions. The convergence of the applied fast gradient method and the quality of the solution obtained in this way are investigated experimentally. The experiments show that the accelerated gradient method (the Similar Triangle Method) converges faster than the non-accelerated one. Theorems on the computational complexity of the resulting algorithms are formulated and proved. It is found that Fourier series expansions are faster than finite-difference schemes and improve the quality of the solution obtained. An attempt was made to restart the Similar Triangle Method after halving the residual of the functional. In this case, the convergence does not improve, which confirms the absence of strong convexity. The experiments show that the inaccuracy of the calculations is more adequately described by an additive model of noise in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, accelerated gradient optimization methods can be an effective way to solve inverse problems.
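
    The Similar Triangle Method mentioned above is an accelerated gradient scheme; one common formulation is sketched below on a toy quadratic objective (this is a generic illustration under that assumption, not the PDE-specific algorithm of the paper).

```python
import numpy as np

def similar_triangles_method(grad, x0, L, n_iter=200):
    """One common formulation of the Method of Similar Triangles (accelerated
    gradient method) for an L-smooth convex objective."""
    x = y = z = np.asarray(x0, dtype=float)
    A = 0.0
    for _ in range(n_iter):
        alpha = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)  # solves L*alpha^2 = A + alpha
        A_new = A + alpha
        y = (alpha * z + A * x) / A_new       # extrapolation point
        z = z - alpha * grad(y)               # "dual" sequence step
        x = (alpha * z + A * x) / A_new       # primal update
        A = A_new
    return x

# Toy check on a convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = float(np.linalg.eigvalsh(Q).max())
x_hat = similar_triangles_method(lambda x: Q @ x - b, np.zeros(2), L)
print(np.linalg.norm(x_hat - np.linalg.solve(Q, b)))  # should be close to zero
```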

  6. Salem N., Hudaib A., Al-Tarawneh K., Salem H., Tareef A., Salloum H., Mazzara M.
    A survey on the application of large language models in software engineering
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726

    Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.

  7. Mitin N.A., Orlov Y.N.
    Statistical analysis of bigrams of specialized texts
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254

    The method of stochastic matrix spectrum analysis is used to build an indicator that makes it possible to determine the subject of scientific texts without using keywords. This matrix is the matrix of conditional probabilities of bigrams, built from the statistics of alphabet characters in the text with spaces, digits and punctuation marks removed. Scientific texts are classified according to the mutual arrangement of invariant subspaces of the matrix of conditional probabilities of letter pairs. The separation indicator is the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter, which is the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of a given radius in the complex plane. The integral tending to infinity indicates that the integration contour approaches an eigenvalue of the matrix. The paper presents the typical distribution of the specialty identification indicator. For the statistical analysis, dissertations in 19 main specialties were analyzed, 20 texts per specialty, without taking into account the classification within a specialty. It was found that the empirical distributions of the cosine of the angle for the mathematical and humanities specialties do not have a common domain, so they can be formally separated by the value of this indicator without errors. Although the corpus of texts was not particularly large, an identification error of about 2 % for arbitrarily selected dissertations appears to be a very good result compared to methods based on semantic analysis. It was also found that a text pattern can be built for each specialty in the form of a reference bigram matrix, in a neighborhood of which (in the norm of summable functions) the topic of a scientific work can be identified accurately without using keywords. The proposed method can be used as a comparative indicator of the rigor of a scientific text or as an indicator of the text's compliance with a certain scientific level.
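
    One possible way to assemble the bigram matrix and a cosine indicator of the kind described above is sketched below. It is a simplification: only the eigenvectors for the largest-modulus eigenvalue are used, and the sample string is a placeholder for a full dissertation text.

```python
import numpy as np

def bigram_matrix(text, alphabet):
    """Row-stochastic matrix of conditional bigram probabilities P(next | current),
    built from the letters of `text` (spaces, digits and punctuation dropped).
    Rows for letters that never occur are filled with the uniform distribution."""
    letters = [c for c in text.lower() if c in alphabet]
    idx = {c: i for i, c in enumerate(alphabet)}
    counts = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(letters, letters[1:]):
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / len(alphabet)),
                     where=rows > 0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder text; in the paper the statistics come from whole dissertation texts.
sample = "subgradient methods for weakly convex optimization problems converge linearly"
P = bigram_matrix(sample, "abcdefghijklmnopqrstuvwxyz")

vals_r, vecs_r = np.linalg.eig(P)      # right eigenvectors of P
vals_l, vecs_l = np.linalg.eig(P.T)    # left eigenvectors of P
right = np.real(vecs_r[:, np.argmax(np.abs(vals_r))])
left = np.real(vecs_l[:, np.argmax(np.abs(vals_l))])
print(abs(cosine(right, left)))
```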

  8. Makhov S.A.
    Forecasting demographic and macroeconomic indicators in a distributed global model
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 757-779

    The paper presents a dynamic macro model of world dynamics. In the model, the world is divided into 19 geographic regions. The internal development of the regions is described by regression equations for demographic and economic indicators (Population, Gross Domestic Product, Gross Capital Formation). Bilateral trade flows from region to region describe interregional interactions and constitute the trade submodel. Time, the gross product of the exporter and the gross product of the importer were used as regressors. Four types of regression were considered: time pair regression (dependence of the trade flow on time), the export function (dependence of the share of the trade flow in the gross product of the exporter on the gross product of the importer), the import function (dependence of the share of the trade flow in the gross product of the importer on the gross product of the exporter), and multiple regression (dependence of the trade flow on the gross products of the exporter and the importer). Two functional forms, linear and log-linear, were used for each type; in total, eight variants of the trade equation were studied. The quality of the regression models is compared by the coefficient of determination. Calculations show that the model satisfactorily approximates the dynamics of monotonically changing indicators. The dynamics of non-monotonic trade flows is analyzed, and three types of functional dependence on time are proposed for their approximation. It is shown that a number of foreign trade series can be approximated in the space of seven principal components with a 10% error. A forecast of regional development and global dynamics up to 2040 is constructed.
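
    As an illustration of one of the eight variants mentioned above, the sketch below fits a log-linear multiple regression of a bilateral trade flow on the gross products of the exporter and the importer and reports the coefficient of determination. The data and coefficients are synthetic placeholders, not the model's series.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the trade data: gross products of exporter and importer
# and the bilateral flow between them (values and exponents are illustrative only).
gdp_exp = rng.uniform(0.1, 10.0, 300)
gdp_imp = rng.uniform(0.1, 10.0, 300)
flow = 0.05 * gdp_exp**0.8 * gdp_imp**0.7 * rng.lognormal(0.0, 0.2, 300)

# Log-linear multiple regression: log(flow) ~ log(GDP_exporter) + log(GDP_importer).
X = np.column_stack([np.log(gdp_exp), np.log(gdp_imp)])
y = np.log(flow)
model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, " R^2:", r2_score(y, model.predict(X)))
```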

  9. Stonyakin F.S., Lushko E.A., Tretiak I.D., Ablaev S.S.
    Subgradient methods for weakly convex problems with a sharp minimum in the case of inexact information about the function or subgradient
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1765-1778

    The problem of developing efficient numerical methods for non-convex (including non-smooth) problems is relevant due to the widespread use of such problems in applications. This paper is devoted to subgradient methods for minimizing Lipschitz μ-weakly convex functions, which are not necessarily smooth. It is well known that subgradient methods have low convergence rates in high-dimensional spaces even for convex functions. However, if we consider a subclass of functions that satisfies the sharp minimum condition and also use the Polyak stepsize, we can guarantee a linear convergence rate of the subgradient method. In some cases, the values of the function or its subgradient may be available to the numerical method only with some error, and the accuracy of the solution provided by the method depends on the magnitude of this error. In this paper, we investigate the behavior of the subgradient method with the Polyak stepsize when inexact information about the objective function value or subgradient is used in the iterations. We prove that with a specific choice of starting point, the subgradient method with an analogue of the Polyak stepsize converges at the rate of a geometric progression on the class of μ-weakly convex functions with a sharp minimum, provided that the subgradient values contain an additive inaccuracy. In the case when both the value of the function and the value of its subgradient at the current point are known with error, convergence to some neighborhood of the set of exact solutions is shown, and estimates of the quality of the solution output by the subgradient method with the corresponding analogue of the Polyak stepsize are obtained. The article also proposes a subgradient method with a clipped step, and an assessment of the quality of the solution obtained by this method for the class of μ-weakly convex functions with a sharp minimum is presented. Numerical experiments were conducted for the problem of low-rank matrix recovery. They showed that the efficiency of the studied algorithms may not depend on the accuracy of localization of the initial approximation within the required region, and that the inaccuracy in the values of the function and the subgradient may affect the number of iterations required to achieve an acceptable quality of the solution, but has almost no effect on the quality of the solution itself.
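
    A minimal sketch of the kind of experiment described above is given below: a subgradient method with a Polyak-type step applied to a toy objective with a sharp minimum, with additive noise injected into the function and subgradient values. The noise model, the toy objective and all parameter values are assumptions for illustration, not the authors' setup.

```python
import numpy as np

def subgradient_polyak(f, subgrad, x0, f_star, n_iter=200, delta=0.0, rng=None):
    """Subgradient method with the Polyak-type step h_k = (f(x_k) - f*) / ||g_k||^2.
    Additive noise of magnitude up to `delta` imitates inexact values of the
    objective and the subgradient (a generic sketch, not the paper's exact scheme)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = subgrad(x) + delta * rng.uniform(-1.0, 1.0, x.shape)
        f_val = f(x) + delta * rng.uniform(-1.0, 1.0)
        step = max(f_val - f_star, 0.0) / max(float(np.dot(g, g)), 1e-12)
        x = x - step * g
    return x

# Toy objective with a sharp minimum at x_star: f(x) = ||x - x_star||_1, f* = 0.
x_star = np.array([1.0, -2.0, 0.5])
f = lambda x: float(np.sum(np.abs(x - x_star)))
subgrad = lambda x: np.sign(x - x_star)

for delta in (0.0, 1e-3, 1e-1):
    x_hat = subgradient_polyak(f, subgrad, np.zeros(3), f_star=0.0, delta=delta)
    print(f"delta={delta:g}  final objective value={f(x_hat):.2e}")
```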
