Search results for 'optimization tasks':
Articles found: 38
  1. Emaletdinova L.Y., Mukhametzyanov Z.I., Kataseva D.V., Kabirova A.N.
    A method of constructing a predictive neural network model of a time series
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 737-756

    This article studies a method of constructing a predictive neural network model of a time series, based on determining the composition of input variables, constructing a training sample, and training the network with the backpropagation method. Traditional approaches to building predictive models of a time series (the autoregressive model, the moving average model, and the combined autoregressive moving average model) approximate the time series by a linear dependence of the current value of the output variable on a number of its previous values. This restriction to linear dependence leads to significant forecasting errors.

    Data mining technologies based on neural network modeling make it possible to approximate the time series by a nonlinear dependence. Moreover, the process of constructing a neural network model (determining the composition of input variables, the number of layers and the number of neurons in each layer, choosing the activation functions, and determining the optimal values of the connection weights) yields a predictive model in the form of an analytical nonlinear dependence.

    Determining the composition of input variables is one of the key steps in constructing neural network models in various application areas and directly affects their adequacy. The composition of input variables is traditionally selected either from physical considerations or by trial and error. In this work, it is proposed to use the behavior of the autocorrelation and partial autocorrelation functions to determine the composition of input variables of the predictive neural network model of the time series.

    This work proposes a method for determining the composition of input variables of neural network models for stationary and non-stationary time series, based on the construction and analysis of autocorrelation functions. Based on the proposed method, an algorithm and a program were developed in Python that determine the composition of input variables of the predictive neural network model (a perceptron) and build the model itself. The method was experimentally tested on a time series of energy consumption in different regions of the United States, openly published by PJM Interconnection LLC (PJM), a regional transmission organization in the United States. This time series is non-stationary and exhibits both a trend and seasonality. Prediction of subsequent values of the time series from previous values with the constructed neural network model showed high approximation accuracy, which demonstrates the effectiveness of the proposed method.
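    The lag-selection idea can be illustrated with a minimal sketch (the function names, the white-noise threshold, and the synthetic test series are our assumptions, not the authors' code): lags whose sample autocorrelation exceeds the approximate 95% confidence band of white noise become the input variables of the network.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function for lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

def select_input_lags(x, max_lag=24, threshold=None):
    """Pick lags whose |ACF| exceeds the approximate 95% white-noise band."""
    if threshold is None:
        threshold = 1.96 / np.sqrt(len(x))
    r = acf(x, max_lag)
    return [k for k in range(1, max_lag + 1) if abs(r[k]) > threshold]

# Hypothetical test series: noisy seasonal signal with period 12
rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(500)
lags = select_input_lags(series, max_lag=24)
```

    For a seasonal series like this, the selected lags include the seasonal lag 12, so the perceptron would receive the values at those lags as inputs.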

  2. Krotov K.V., Skatkov A.V.
    Optimization of task package execution planning in multi-stage systems under restrictions and the formation of sets
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 917-946

    Modern methods of complex planning of task package execution in multistage systems are limited by the dimension of the problem being solved, cannot guarantee effective solutions for arbitrary values of the input parameters, and do not account for the conditions of forming sets from the results or for the restriction on the duration of the system's operating time intervals. To solve the problem of scheduling the execution of task packages with the formation of sets from the results under a restriction on the duration of the system's operating time intervals, the generalized function of the system is decomposed into a set of hierarchically interconnected subfunctions. The decomposition makes it possible to apply a hierarchical approach to planning the execution of task packages in multistage systems, which determines decisions on the composition of task groups at the first level of the hierarchy, decisions on the composition of groups of task packages executed during time intervals of limited duration at the second level, and schedules for executing the packages at the third level. To evaluate decisions on the composition of packages, the results of their execution obtained during the specified time intervals are distributed among the sets. The apparatus of the theory of hierarchical games is used to determine complex solutions: a model of a hierarchical game for making decisions on the compositions of packages, groups of packages, and schedules of package execution is built as a system of hierarchically interconnected criteria for optimizing decisions. The model accounts for the condition of forming sets from the results of task package execution and the restriction on the duration of the system's operating time intervals.
    The problem of determining the compositions of task packages and groups of task packages is NP-hard, so its solution requires approximate optimization methods. To optimize groups of task packages, a method for constructing initial solutions on their compositions, which are then optimized, has been implemented. In addition, an algorithm for distributing the results of task package execution obtained during time intervals of limited duration among the sets is formulated. A method of local optimization of solutions on the composition of package groups is proposed: packages whose results are not included in any set are excluded from the groups, as are packages not included in any group. A software implementation of the considered method of complex optimization of the compositions of task packages, groups of task packages, and schedules for executing task packages from groups (including the method for optimizing the compositions of groups of task packages) has been developed and used to study the features of the considered planning problem. Conclusions are formulated on how the efficiency of scheduling the execution of task packages in a multistage system under the introduced conditions depends on the input parameters of the problem. The method of local optimization of the compositions of package groups increases the number of sets formed from the results of executing tasks in packages from groups by 60% compared with fixed groups (without optimization).
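    The local optimization step for package groups can be illustrated with a toy sketch (the data structures, the one-result-per-package simplification, and the set specification are ours, not the authors' formulation): packages whose results cannot enter any complete set are excluded from the group, leaving the number of formed sets unchanged.

```python
from collections import Counter

def completed_sets(result_types, set_spec):
    """Number of complete sets assemblable from the produced result types."""
    counts = Counter(result_types)
    return min(counts.get(t, 0) // need for t, need in set_spec.items())

def prune_group(group, set_spec):
    """Local optimization step: drop packages whose results are
    oversupplied and therefore cannot enter any complete set."""
    n_sets = completed_sets([p["type"] for p in group], set_spec)
    needed = {t: need * n_sets for t, need in set_spec.items()}
    kept = []
    for p in group:
        if needed.get(p["type"], 0) > 0:
            needed[p["type"]] -= 1
            kept.append(p)
    return kept

# Hypothetical group: six packages producing result types A, A, B, B, B, C;
# a set requires one result of each type, so only one set can be formed
group = [{"id": i, "type": t} for i, t in enumerate("AABBBC")]
spec = {"A": 1, "B": 1, "C": 1}
pruned = prune_group(group, spec)
```

    Here three of the six packages are excluded because their results would never enter a complete set, while the number of formed sets stays the same.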

  3. Kutalev A.A., Lapina A.A.
    Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56

    This paper presents the results of experimental validation of some structural issues concerning the practical use of methods to overcome catastrophic forgetting in neural networks. Current effective methods such as EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation) are compared, and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of learned skills is required on all tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task is required rather than exact retention of skills. The attenuation in the WVA method must be applied to the optimization step, i.e., to the increments of the neural network weights, rather than to the loss function gradient itself, and this holds for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function between the hyperbolic function and the exponential is considered. It is shown that hyperbolic attenuation is preferable: despite comparable quality at optimal values of the hyperparameter of the WVA method, it is more robust to deviations of the hyperparameter from the optimal value (this hyperparameter balances preservation of old skills against learning a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue; consequently, it can be tuned on a small number of tasks and used on longer sequences.
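    The key point, attenuating the optimization step rather than the gradient, can be sketched as follows (the hyperbolic attenuation form and the per-weight importance values are illustrative assumptions, not the paper's exact formulas):

```python
import numpy as np

def wva_step(weights, raw_step, importance, lam=1.0):
    """Apply hyperbolic attenuation to the optimizer *step* (the weight
    increment, e.g. an Adam update), not to the raw gradient:
    weights important for old tasks move less."""
    attenuation = 1.0 / (1.0 + lam * importance)   # hyperbolic form
    return weights + attenuation * raw_step

w = np.zeros(3)
step = np.array([-0.5, -0.5, -0.5])   # proposed update direction
imp = np.array([0.0, 1.0, 10.0])      # accumulated per-weight importance
w_new = wva_step(w, step, imp, lam=1.0)
```

    An unimportant weight (importance 0) takes the full step, while highly important weights are nearly frozen; lam plays the role of the balance hyperparameter discussed above.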

  4. Pletnev N.V.
    Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963

    The work is devoted to constructing first-order convex optimization methods that are efficient and applicable to real problems, i.e., methods using only the values of the target function and its derivatives. The construction is based on OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and the strong convexity constant to determine the number of steps and the step length, which makes practical use very hard. The algorithm ACGM, adaptive in the strong convexity constant, is proposed; it is based on restarts of OGM-G with an update of the strong convexity constant estimate. The algorithm ALGM, adaptive in the Lipschitz constant of the gradient, supplements the OGM-G restarts with a selection of the Lipschitz constant that verifies the smoothness conditions used in the universal gradient descent method. This eliminates the drawbacks of the original method associated with the need to know these constants and makes practical use possible. The optimality of the complexity estimates of the constructed algorithms is proved. To verify the results, experiments on model functions and real machine learning tasks are carried out.
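    The Lipschitz-constant selection with verification of the smoothness condition can be sketched on plain gradient descent with backtracking (a minimal illustration of the adaptive idea, not the OGM-G restart scheme itself; the test problem is ours):

```python
import numpy as np

def adaptive_gradient_descent(f, grad, x0, L0=1.0, tol=1e-8, max_iter=10_000):
    """Gradient descent that estimates the Lipschitz constant L on the fly:
    double L until the local smoothness inequality holds, then relax it."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        while True:
            y = x - g / L
            # smoothness test: f(y) <= f(x) + <g, y-x> + (L/2)||y-x||^2
            if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2):
                break
            L *= 2.0
        x, L = y, L / 2.0   # optimistic decrease for the next iteration
    return x

# Quadratic test problem with known minimizer (1, 2)
A = np.diag([1.0, 10.0])
b = np.array([1.0, 20.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = adaptive_gradient_descent(f, grad, np.zeros(2))
```

    No Lipschitz constant is supplied, yet the method converges to the minimizer; ALGM applies the same doubling-and-relaxing test inside OGM-G restarts.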

  5. Klimenko A.B.
    Mathematical model and heuristic methods of distributed computations organizing in the Internet of Things systems
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 851-870

    Currently, significant development is observed in distributed computing theory, where computational tasks are solved collectively by resource-constrained devices. In practice, this scenario arises when processing data in Internet of Things (IoT) systems, where data is processed on edge network computing devices to reduce system latency and network infrastructure load. However, the rapid growth and widespread adoption of IoT systems raise the need for methods that reduce the resource intensity of computations. The resource constraints of computing devices pose the following issues for the distribution of computational resources: first, the need to account for the cost of data transit between devices solving different tasks; second, the need to consider the resource cost of the distribution process itself, which is particularly relevant for groups of autonomous devices such as drones or robots. An analysis of openly available modern publications showed the absence of models or methods for distributing computational resources that take all these factors into account simultaneously, which makes the creation of a new mathematical model for organizing distributed computing in IoT systems, together with methods for solving it, relevant. This article proposes such a mathematical model for distributing computational resources along with heuristic optimization methods, providing an integrated approach to implementing distributed computing in IoT systems. A scenario is considered in which a leader device within a group makes decisions on the allocation of computational resources, including its own, for solving distributed tasks with information exchange.
    It is also assumed that there is no prior knowledge of which device will assume the role of leader or of the migration paths of computational tasks across devices. Experimental results show the effectiveness of the proposed models and heuristics: up to a 52% reduction in the resource cost of solving computational problems when data transit costs are taken into account; up to 73% resource savings from supplementary criteria that optimize task distribution by minimizing fragment migrations and distances; and up to a 28-fold decrease in the resource cost of solving the resource distribution problem itself, at the cost of at most a 10% loss in distribution quality.
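    The two cost terms the model accounts for, computation on a device plus data transit from the device holding the previous task fragment, can be illustrated with a toy greedy heuristic (the device names, cost values, and the greedy rule are our assumptions, not the article's model):

```python
def greedy_placement(tasks, devices, transit):
    """Assign each task fragment to the device minimizing compute cost
    plus the transit cost from the device holding the previous fragment."""
    placement, prev = [], None
    for load in tasks:
        best = min(
            devices,
            key=lambda d: load * devices[d]
            + (0 if prev is None else transit[prev][d]),
        )
        placement.append(best)
        prev = best
    return placement

# Hypothetical two-device network: cost per unit of work and transit costs
devices = {"edge": 1.0, "gateway": 0.5}
transit = {"edge": {"edge": 0, "gateway": 4},
           "gateway": {"edge": 4, "gateway": 0}}
plan = greedy_placement([1, 1, 8], devices, transit)
```

    With these numbers the transit penalty keeps all fragments on the cheaper gateway device, showing how migration costs can outweigh per-device compute differences.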

  6. Buglak A.A., Pomogaev V.A., Kononov A.I.
    Calculation of absorption spectra of silver-thiolate complexes
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 275-286

    Ligand-protected metal nanoclusters (NCs) have gained much attention due to their unique physicochemical properties and potential applications in materials science. Noble metal NCs protected with thiolate ligands are of particular interest because of their long-term stability. The detailed structures of most ligand-stabilized metal NCs remain unknown due to the absence of crystal structure data. Theoretical calculations using quantum chemistry techniques are among the most promising tools for determining the structure and electronic properties of NCs, which is why finding a cost-effective calculation strategy is an important and challenging task. In this work, we compare the performance of different theoretical methods for geometry optimization and absorption spectra calculation of silver-thiolate complexes. We show that second-order Møller–Plesset perturbation theory nicely reproduces the geometries obtained at a higher level of theory, in particular with the RI-CC2 method. We compare the absorption spectra of silver-thiolate complexes simulated with different methods: EOM-CCSD, RI-CC2, ADC(2), and TDDFT. The absorption spectra calculated with the ADC(2) method are consistent with those obtained with the EOM-CCSD and RI-CC2 methods. The CAM-B3LYP functional fails to reproduce the absorption spectra of the silver-thiolate complexes, whereas the M062X global hybrid meta-GGA functional appears to be a good compromise given its low computational cost. In our previous study, we demonstrated that the M062X functional shows good accuracy compared to the ADC(2) ab initio method in predicting the excitation spectra of silver nanocluster complexes with nucleobases.

  7. Podlipnova I.V., Persiianov M.I., Shvetsov V.I., Gasnikova E.V.
    Transport modeling: averaging price matrices
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327

    This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in the transportation network. A mode of travel is understood to mean both a mode of transport, for example, a car or public transport, and movement without transport, for example, on foot. The task of calculating trip matrices includes calculating the total matrices, that is, estimating the total demand for movements by all modes, as well as splitting the matrices by mode, also called modal split. To calculate trip matrices, gravity, entropy, and other models are used, in which the probability of movement between zones is estimated from a certain measure of the distance between these zones. Usually, the generalized cost of moving along the optimal path between zones is used as the distance measure. However, the generalized cost of movement differs across modes, so when calculating the total trip matrices it becomes necessary to average the generalized costs over the modes of movement. The averaging procedure is subject to the natural requirement of monotonicity in all arguments, which some commonly used averaging methods, for example, weighted averaging, do not satisfy. The modal split problem is solved by methods of discrete choice theory, within which correct methods have been developed for averaging the utilities of alternatives that are monotone in all arguments. The authors propose an adaptation of discrete choice theory methods to the calculation of the average cost of movement in the gravity and entropy models.
    Transferring the averaging formulas from the modal split model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their admissible values, which is done in this article. The article also considers recalibration of the gravity function, which is necessary when switching to a new averaging method if the existing function was calibrated with the weighted average cost. The proposed methods are implemented on a small fragment of a transport network; the presented calculation results demonstrate their advantage.
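    The monotone averaging used in discrete choice theory can be sketched with the logsum formula (the parameter theta and the example costs are illustrative; this is the standard logit composite cost, not necessarily the exact variant adapted in the article):

```python
import numpy as np

def logsum_cost(costs, theta=1.0):
    """Discrete-choice (logsum) composite cost of the available modes:
    c = -(1/theta) * ln(sum_m exp(-theta * c_m)).
    Monotone increasing in every argument, unlike some weighted means."""
    c = np.asarray(costs, dtype=float)
    return -np.log(np.sum(np.exp(-theta * c))) / theta

base = logsum_cost([10.0, 12.0])
worse = logsum_cost([10.0, 13.0])   # raising one mode's cost
```

    Raising any single mode's cost strictly raises the composite cost, and the composite cost lies below the cheapest mode, reflecting the benefit of having a choice.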

  8. Mezentsev Y.A., Razumnikova O.M., Estraykh I.V., Tarasova I.V., Trubnikova O.A.
    Tasks and algorithms for optimal clustering of multidimensional objects by a variety of heterogeneous indicators and their applications in medicine
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 673-693

    The work is devoted to the description of the authors' formal statements of the clustering problem for a given number of clusters, algorithms for their solution, and the results of applying this toolkit in medicine.

    Since the formulated problems belong to the NP class, solving even instances of relatively low dimension to proven optimality with exact algorithms is impossible in acceptable time.

    In this regard, we propose a hybrid algorithm that combines the accuracy of exact methods based on clustering in pairwise distances at the initial stage with the speed of methods solving the simplified problem of partitioning by cluster centers at the final stage. Developing this direction, a sequential hybrid clustering algorithm using random search in the swarm intelligence paradigm has been constructed. The article describes it and presents the results of calculations for applied clustering problems.

    To determine the effectiveness of the developed tools for optimal clustering of multidimensional objects by a variety of heterogeneous indicators, a number of computational experiments were performed using data sets that include socio-demographic, clinical-anamnestic, electroencephalographic, and psychometric data on the cognitive status of patients of a cardiology clinic. Experimental evidence of the effectiveness of local search algorithms in the swarm intelligence paradigm within the hybrid algorithm for solving optimal clustering problems has been obtained.

    The results of the calculations indicate that the main obstacle to using the discrete optimization apparatus, the limit on the feasible dimension of problem instances, is effectively removed: this limit is eliminated while maintaining acceptable proximity of the clustering results to the optimal ones. The applied significance of the obtained results also stems from the fact that the developed optimal clustering toolkit is supplemented by an assessment of the stability of the formed clusters. For known risk factors (the presence of stenosis or older age), this makes it possible to additionally identify patients whose cognitive resources are insufficient to overcome the influence of surgical anesthesia and who therefore show a unidirectional postoperative deterioration of complex visual-motor reaction, attention, and memory. This effect indicates the possibility of a differentiated classification of patients using the proposed tools.
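    The two-stage idea, initialization from pairwise distances followed by center-based refinement, can be sketched as follows (farthest-point initialization and k-means-style refinement stand in for the authors' exact and swarm-intelligence algorithms, which are not public; the test data are ours):

```python
import numpy as np

def hybrid_cluster(X, k, iters=20, seed=0):
    """Two-stage sketch: pick initial centers by a farthest-point heuristic
    on pairwise distances, then refine them with center-based steps."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    while len(centers) < k:                       # farthest-point init
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                        # center refinement
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2),
            axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic groups of 20 points each
X = np.vstack([np.random.default_rng(1).normal(m, 0.1, (20, 2))
               for m in (0.0, 5.0)])
labels, centers = hybrid_cluster(X, 2)
```

    On separated data the distance-based initialization places one center in each group, so the refinement stage recovers the two clusters exactly.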

  9. Korolev S.A., Maykov D.V.
    Solution of the problem of optimal control of the process of methanogenesis based on the Pontryagin maximum principle
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 357-367

    The paper presents a mathematical model describing the production of biogas from livestock waste. The model covers the processes occurring in a biogas plant for mesophilic and thermophilic media and for continuous and periodic modes of substrate inflow. The values of the model coefficients for the periodic mode, found earlier by solving the model identification problem from experimental data with a genetic algorithm, are given.

    For the methanogenesis model, an optimal control problem is formulated in the form of a Lagrange problem whose criterion functional is the biogas output over a given period of time. The control parameter is the rate of substrate entry into the biogas plant. An algorithm for solving this problem is proposed, based on a numerical implementation of the Pontryagin maximum principle; a hybrid genetic algorithm with an additional search in the vicinity of the best solution by the conjugate gradient method is used as the optimization method. This numerical method for solving an optimal control problem is universal and applicable to a wide class of mathematical models.

    In the course of the study, various modes of substrate feeding into the digesters, temperature media, and types of raw material were analyzed. It is shown that the rate of biogas production in the continuous feed mode is 1.4–1.9 times higher in the mesophilic medium (1.9–3.2 times in the thermophilic medium) than in the periodic mode over the period of complete fermentation, which is associated with a higher substrate feed rate and a greater concentration of nutrients in the substrate. However, the biogas yield over the period of complete fermentation in the periodic mode is twice the yield over the period of a complete change of substrate in the digester in the continuous mode, which indicates incomplete processing of the substrate in the latter case. The rate of biogas formation for a thermophilic medium in continuous mode at the optimal raw material feed rate is three times higher than for a mesophilic medium. Comparison of the biogas yield for various types of raw material shows that the highest yield is observed for poultry farm waste and the lowest for cattle farm waste, which is associated with the nutrient content per unit of substrate of each type.

  10. Kovalenko S.Yu., Yusubalieva G.M.
    Survival task for the mathematical model of glioma therapy with blood-brain barrier
    Computer Research and Modeling, 2018, v. 10, no. 1, pp. 113-123

    The paper proposes a mathematical model of glioma therapy that takes into account the blood-brain barrier (BBB), radiotherapy, and antibody therapy. The parameters were estimated from experimental data, and estimates of the effect of parameter values on the effectiveness of treatment and the prognosis of the disease were obtained. Possible variants of the sequential use of radiotherapy and antibody therapy were explored. The combined use of radiotherapy with intravenous administration of mAb Cx43 leads to a potentiation of the therapeutic effect in glioma.

    Radiotherapy must precede chemotherapy, as radiation exposure reduces the barrier function of endothelial cells. Endothelial cells of the brain vessels fit tightly to each other, forming so-called tight junctions between their walls; their role in providing the BBB is to prevent the penetration of various undesirable substances from the bloodstream into the brain tissue. Tight junctions between endothelial cells block passive intercellular transport.

    The mathematical model consists of a continuous part and a discrete one. Experimental data on glioma volume show an interesting dynamic: after cessation of radiation exposure, tumor growth does not resume immediately; there is a time interval during which the glioma does not grow. Glioma cells are divided into two groups: living cells, which divide as fast as possible, and cells affected by radiation. As a measure of the health of the blood-brain barrier, the ratio of the number of BBB cells at the current moment to their number at rest, that is, in an average healthy state, is chosen.

    The continuous part of the model describes the division of both types of glioma cells, the recovery of BBB cells, and the dynamics of the drug. A reduction in the number of well-functioning BBB cells facilitates the penetration of the drug to brain cells, that is, enhances its action. At the same time, the division rate of glioma cells does not increase, since it is limited not by a deficiency of nutrients available to the cells but by the internal mechanisms of the cell. The discrete part of the model is the radiation exposure operator, which is applied to the BBB indicator and to the glioma cells.

    Within the framework of this mathematical model of treatment of a cancerous tumor (glioma), the problem of optimal control with phase constraints is solved. The patient's condition is described by two variables: the tumor volume and the state of the BBB. The phase constraints delineate a certain region in the space of these indicators, which we call the survival region. Our task is to find treatment strategies that minimize the treatment time, maximize the patient's rest time, and at the same time keep the state indicators within the permitted limits. Since the survival task is to maximize the patient's lifespan, it is precisely such treatment strategies that return the indicators to their initial position (and we see periodic trajectories on the graphs). Periodic trajectories indicate that a deadly disease is converted into a chronic one.
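    The continuous-discrete structure of such a model can be sketched with a toy simulation (all rates, the kill fraction, and the BBB-hit factor are illustrative placeholders, not the paper's fitted parameters): living cells grow and damaged cells decay continuously, while a discrete radiation operator acts on irradiation days.

```python
def simulate(days, radio_days, dt=0.1, rho=0.05, mu=0.1, rb=0.2,
             kill=0.4, bbb_hit=0.3):
    """Toy continuous-discrete dynamics: v = living glioma cells,
    w = radiation-damaged cells, b = BBB health indicator (1 = rest state).
    Illustrative rates only, not the article's model."""
    v, w, b = 1.0, 0.0, 1.0
    for step in range(int(days / dt)):
        day = step * dt
        if any(abs(day - d) < dt / 2 for d in radio_days):
            hit = kill * v                 # discrete radiation operator:
            v, w = v - hit, w + hit        # move living cells to damaged,
            b *= (1.0 - bbb_hit)           # and weaken the BBB indicator
        v += dt * rho * v                  # division of living cells
        w -= dt * mu * w                   # clearance of damaged cells
        b += dt * rb * (1.0 - b)           # BBB recovery toward rest state
    return v, w, b

no_rt = simulate(30.0, radio_days=[])
with_rt = simulate(30.0, radio_days=[5.0, 10.0, 15.0])
```

    Comparing the two runs shows the qualitative behavior described above: irradiation lowers the living-cell count, while the BBB indicator dips after each exposure and then recovers toward its rest level.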


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

