Search results for 'comparison of results':
Articles found: 115
  1. Popov D.I.
    Calibration of an elastostatic manipulator model using AI-based design of experiment
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1535-1553

    This paper demonstrates the advantages of using artificial intelligence algorithms within the theory of design of experiment, which makes it possible to improve the accuracy of parameter identification for an elastostatic robot model. Design of experiment for a robot consists in finding the optimal configuration-external force pairs for the identification algorithms and can be described by several main stages. At the first stage, an elastostatic model of the robot is created, taking into account all possible mechanical compliances. At the second stage, the objective function is selected, which can be represented both by classical optimality criteria and by criteria defined by the intended application of the robot. At the third stage, the optimal measurement configurations are found using numerical optimization. At the fourth stage, the position of the robot body is measured in the obtained configurations under the influence of an external force. At the last, fifth stage, the elastostatic parameters of the manipulator are identified from the measured data.

    The objective function required for finding the optimal configurations for industrial robot calibration is constrained by mechanical limits, both on the possible rotation angles of the robot's joints and on the possible applied forces. Solving this multidimensional constrained problem is not simple, so approaches based on artificial intelligence are proposed. To find the minimum of the objective function, the following methods, sometimes also called heuristics, were used: genetic algorithms, particle swarm optimization, the simulated annealing algorithm, etc. The obtained results were analyzed in terms of the time required to obtain the configurations, the optimal value achieved, and the final accuracy after applying the calibration. The comparison showed the advantages of the considered artificial intelligence based optimization techniques over the classical methods of finding the optimal value. The results of this work make it possible to reduce the time spent on calibration and to increase the positioning accuracy of the robot's end-effector after calibration for contact operations with high loads, such as machining and incremental forming.
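
    A minimal sketch of the third stage, assuming a D-optimality-style criterion and a toy regressor: measurement configurations are sought by simulated annealing inside the joint-limit box. The regressor, joint limits, and cooling schedule below are illustrative stand-ins, not the paper's elastostatic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def regressor(q):
    """Toy stand-in for the elastostatic observation matrix of a 3-joint arm.

    In the paper this would follow from the robot's compliance model; here a
    smooth synthetic matrix is enough to make the optimization runnable.
    """
    return np.array([
        [np.sin(q[0]), np.cos(q[1]), np.sin(q[2])],
        [np.cos(q[0] + q[1]), np.sin(q[1] - q[2]), 1.0],
        [np.sin(q[0]) * np.cos(q[2]), np.cos(q[1]), np.sin(q[1])],
    ])

def d_criterion(configs):
    """Negative log-determinant of the information matrix (D-optimality)."""
    info = sum(regressor(q).T @ regressor(q) for q in configs)
    sign, logdet = np.linalg.slogdet(info)
    return np.inf if sign <= 0 else -logdet

def simulated_annealing(n_configs=5, q_lim=np.pi, iters=20000, t0=1.0):
    x = rng.uniform(-q_lim, q_lim, size=(n_configs, 3))   # start inside joint limits
    f = d_criterion(x)
    for k in range(iters):
        t = t0 * (1.0 - k / iters)                        # linear cooling
        cand = np.clip(x + rng.normal(0.0, 0.1, x.shape), -q_lim, q_lim)
        fc = d_criterion(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if fc < f or rng.random() < np.exp(-(fc - f) / max(t, 1e-9)):
            x, f = cand, fc
    return x, f

configs, value = simulated_annealing()
print("optimized D-criterion:", value)
```

    A genetic algorithm or particle swarm optimizer could be swapped in for the annealing loop without changing the objective.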

  2. Jeeva N., Dharmalingam K.M.
    Sensitivity analysis and semi-analytical solution for analyzing the dynamics of coffee berry disease
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 731-753

    Coffee berry disease (CBD), caused by the fungal pathogen Colletotrichum kahawae, poses a severe risk to coffee crops worldwide. Attacking coffee berries, it triggers substantial economic losses in regions relying heavily on coffee cultivation, and its devastating impact extends beyond agricultural losses, affecting livelihoods and trade economies. Experimental insights into coffee berry disease provide crucial information on its pathogenesis, progression, and potential mitigation strategies, offering valuable knowledge to safeguard the global coffee industry. In this paper, we investigated a mathematical model of coffee berry disease, focusing on the dynamics of the coffee plant and Colletotrichum kahawae pathogen populations, categorized as susceptible, exposed, infected, pathogenic, and recovered (SEIPR) individuals. To address the system of nonlinear differential equations and obtain a semi-analytical solution for the coffee berry disease model, a novel analytical approach combining the Shehu transformation, the Akbari-Ganji method, and the Padé approximation (SAGPM) was utilized. A comparison of the analytical results with numerical simulations demonstrates that the novel SAGPM has excellent efficiency and accuracy. Furthermore, the sensitivity analysis of the coffee berry disease model examines the effects of all parameters on the basic reproduction number $R_0$. Moreover, in order to examine the behavior of the model individuals, we varied some parameters in CBD. Through this analysis, we obtained valuable insights into the responses of the coffee berry disease model under various conditions and scenarios. This research offers valuable insights into the utilization of SAGPM and sensitivity analysis for analyzing epidemiological models, providing significant utility for researchers in the field.
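
    The SAGPM solution is validated in the paper against numerical simulation; purely for orientation, below is a plain numerical baseline for a generic SEIPR-type compartmental system. The mass-action coupling and every rate constant are illustrative assumptions, not the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates (assumed): recruitment, transmission, progression,
# pathogen shedding, recovery, natural removal, pathogen decay.
Lam, beta, sigma, xi, gamma, mu, delta = 0.05, 0.6, 0.3, 0.4, 0.2, 0.02, 0.5

def seipr(t, y):
    S, E, I, P, R = y
    dS = Lam - beta * S * P - mu * S        # susceptible berries
    dE = beta * S * P - (sigma + mu) * E    # exposed
    dI = sigma * E - (gamma + mu) * I       # infected
    dP = xi * I - delta * P                 # free Colletotrichum kahawae pathogen
    dR = gamma * I - mu * R                 # recovered
    return [dS, dE, dI, dP, dR]

sol = solve_ivp(seipr, (0.0, 100.0), [0.9, 0.05, 0.03, 0.01, 0.0], rtol=1e-8)
print("state at t = 100:", np.round(sol.y[:, -1], 4))
```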

  3. When modeling turbulent flows in practical applications, it is often necessary to carry out a series of calculations for bodies of similar topology, for example, bodies that differ in the shape of the fairing. The use of convolutional neural networks makes it possible to reduce the number of calculations in such a series by reconstructing some of them from calculations already performed. The paper proposes a method that allows a convolutional neural network to be applied regardless of how the computational mesh is constructed. To do this, the flow field is reinterpolated onto a uniform mesh together with the body itself. The geometry of the body is specified using a signed distance function and masking. The flow field is reconstructed from part of the calculations for similar geometries using a UNet-type neural network with a spatial attention mechanism. The resolution of the near-wall region, which is a critical condition for turbulence modeling, is based on the equations obtained in the near-wall domain decomposition method.

    The method is demonstrated for the case of a turbulent air flow around a rounded plate with different roundings at fixed free-stream parameters: the Reynolds number $Re = 10^5$ and the Mach number $M = 0.15$. Since flows with such free-stream parameters can be considered incompressible, only the velocity components are studied directly. The flow fields and the velocity and friction profiles obtained by the surrogate model are compared with those obtained numerically; the analysis is carried out both on the plate and on the rounding. The simulation results confirm the promise of the proposed approach. In particular, it is shown that even when the model is used at the very limits of its applicability, friction can be recovered with an accuracy of up to 90%. The work also analyzes the constructed neural network architecture. The obtained surrogate model is compared with alternative models based on a variational autoencoder and on principal component analysis with radial basis functions; this comparison demonstrates the advantages of the proposed method.
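
    A minimal sketch of the geometry-encoding step under simplified assumptions: a rounded-plate-like body is rasterized on the uniform mesh, its signed distance function is computed, and the velocity field is masked before being stacked into network input channels. The shape, grid, and channel layout are illustrative, not the paper's exact setup.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Uniform grid (assumed resolution) onto which flow fields are reinterpolated.
nx, ny, h = 256, 128, 1.0 / 128
x, y = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h, indexing="ij")

# Simplified rounded plate: thin rectangle plus a semicircular nose.
inside = (((x > 0.5) & (x < 1.5) & (np.abs(y - 0.5) < 0.05))
          | ((x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.05 ** 2))

# Signed distance function: positive in the fluid, negative inside the body.
sdf = (distance_transform_edt(~inside) - distance_transform_edt(inside)) * h

u = np.ones((nx, ny))                    # placeholder velocity component
u_masked = np.where(sdf > 0.0, u, 0.0)   # zero out solid cells (masking)
net_input = np.stack([u_masked, sdf])    # channels: masked field + geometry
print(net_input.shape)                   # (2, 256, 128)
```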

  4. Sobolev O.V., Lunina N.L., Lunin V.Yu.
    The use of cluster analysis methods for the study of a set of feasible solutions of the phase problem in biological crystallography
    Computer Research and Modeling, 2010, v. 2, no. 1, pp. 91-101

    An X-ray diffraction experiment allows one to determine the magnitudes of the complex coefficients in the expansion of the studied electron density distribution into a Fourier series. The determination of the phase values lost in the experiment poses the central problem of the method, namely the phase problem. Some methods for solving the phase problem result in a set of feasible solutions. Cluster analysis may be used to investigate the composition of this set and to extract one or several typical solutions. An essential feature of the approach is estimating the closeness of two solutions by the map correlation between two aligned Fourier syntheses calculated with the phase sets under comparison. An interactive computer program, ClanGR, was designed to perform this analysis.
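
    A sketch of the clustering idea, assuming the Fourier syntheses have already been aligned (the origin and enantiomer alignment that ClanGR handles is omitted): closeness of two phase sets is scored by the correlation of their maps, and hierarchical clustering is run on one minus that correlation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def map_correlation(rho1, rho2):
    """Pearson correlation between two density maps on the same grid."""
    a, b = rho1.ravel() - rho1.mean(), rho2.ravel() - rho2.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def cluster_phase_sets(magnitudes, phase_sets, n_clusters=3):
    # Fourier synthesis for each feasible phase set (maps assumed pre-aligned).
    maps = [np.fft.ifftn(magnitudes * np.exp(1j * ph)).real for ph in phase_sets]
    n = len(maps)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = 1.0 - map_correlation(maps[i], maps[j])
    tree = linkage(squareform(dist, checks=False), method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

rng = np.random.default_rng(1)
F = rng.random((8, 8, 8))                            # toy magnitudes
phases = rng.uniform(-np.pi, np.pi, (20, 8, 8, 8))   # 20 feasible phase sets
print(cluster_phase_sets(F, phases))
```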

  5. Potapov I.I., Snigur K.S.
    Modeling of sand-gravel bed evolution in one dimension
    Computer Research and Modeling, 2015, v. 7, no. 2, pp. 315-328

    In this paper, a model for a one-dimensional non-equilibrium riverbed process is proposed. The model takes into account suspended and bed-load sediment transport. The bed-load transport is determined using an original formula derived from the equation of motion of a thin bottom layer. The formula contains no new phenomenological parameters and takes into account the influence of the bed slope and of granulometric and physical-mechanical parameters on the bed-load transport. A number of test problems are solved to verify the proposed mathematical model. The calculation results are compared with established experimental data and with the results of other authors. It is shown that the obtained results agree well with the experimental data despite the relative simplicity of the proposed mathematical model.
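
    The paper's bed-load formula itself is not reproduced in the abstract, so the sketch below uses a standard Grass-type closure $q_s = Au^3$ as a stand-in and advances the bed with the Exner mass balance on a 1D upwind grid; all parameters are illustrative.

```python
import numpy as np

nx, dx, dt, steps = 200, 1.0, 0.1, 2000   # grid and time stepping (assumed)
porosity, A = 0.4, 0.05                   # bed porosity, Grass coefficient
H, qw = 3.0, 2.0                          # fixed water level, unit water discharge

xs = np.arange(nx) * dx
z = np.exp(-0.5 * ((xs - 100.0) / 10.0) ** 2)   # initial dune profile

for _ in range(steps):
    u = qw / (H - z)                       # quasi-steady velocity from continuity
    qs = A * u ** 3                        # Grass-type bed-load flux (stand-in)
    dqdx = np.zeros(nx)
    dqdx[1:] = (qs[1:] - qs[:-1]) / dx     # upwind difference, flow in +x
    z -= dt / (1.0 - porosity) * dqdx      # Exner: dz/dt = -(1/(1-p)) dq_s/dx

print("dune crest moved to x =", xs[np.argmax(z)])
```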

  6. Golov A.V., Simakov S.S.
    Mathematical model of respiratory regulation during hypoxia and hypercapnia
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 297-310

    Transport of respiratory gases by the respiratory and circulatory systems is one of the most important processes associated with the living conditions of the human body. Significant and/or long-term deviations of oxygen and carbon dioxide concentrations in blood from their normal values can cause significant pathological changes with irreversible consequences: lack of oxygen (hypoxia and ischemic events), a change in the acid-base balance of blood (acidosis or alkalosis), and others. Under changing external environment and internal conditions of the body, the action of its regulatory systems is aimed at maintaining homeostasis. One of the major mechanisms for maintaining the concentrations (partial pressures) of oxygen and carbon dioxide in blood at a normal level is the regulation of minute ventilation, respiratory rate and depth of respiration, which is driven by the activity of the central and peripheral regulators.

    In this paper we propose a mathematical model of the regulation of pulmonary ventilation parameters. The model is used to calculate the adaptation of minute ventilation during hypoxia and hypercapnia. The model is developed using a single-compartment model of the lungs together with biochemical equilibrium conditions for oxygen and carbon dioxide in the blood and in the alveolar lung volume. A comparison with laboratory data is performed for hypoxia and hypercapnia. Analysis of the results shows that the model reproduces the dynamics of minute ventilation during hypercapnia with sufficient accuracy, while a more accurate model of the regulation of minute ventilation during hypoxia still needs to be developed. The factors preventing satisfactory accuracy are analyzed in the final section.
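
    A generic chemoreflex sketch of the regulation loop: actual minute ventilation relaxes toward a target composed of a central (CO2-driven) and a peripheral (hypoxic) drive. The gains, thresholds, and time constant are illustrative textbook-style values, not the identified parameters of the model.

```python
def ventilation_target(p_co2, p_o2, ve0=6.0, g_central=2.0, g_peripheral=45.0):
    """Target minute ventilation [L/min] from arterial gas tensions [mmHg].

    Central drive: linear in CO2 above a threshold; peripheral drive: grows
    hyperbolically as O2 falls. All constants are illustrative assumptions.
    """
    central = g_central * max(p_co2 - 38.0, 0.0)
    peripheral = g_peripheral / max(p_o2 - 30.0, 1.0)
    return ve0 + central + peripheral

# First-order relaxation of ventilation toward the chemoreflex target.
dt, tau, ve = 0.01, 0.5, 6.0                 # minutes
for _ in range(int(10.0 / dt)):              # 10 min of hypercapnia, PaCO2 = 50
    ve += dt / tau * (ventilation_target(50.0, 95.0) - ve)
print(f"minute ventilation under hypercapnia: {ve:.1f} L/min")
```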

    Respiratory function is one of the main limiting factors of the organism during intense physical activity and is thus an important characteristic in high-performance sport and under extreme physical activity conditions. Therefore, the results of this study have significant applied value in the field of mathematical modeling in sport. The considered conditions of hypoxia and hypercapnia partly reproduce training at high altitude and under hypoxic conditions, whose purpose is to increase the hemoglobin level in the blood of highly qualified athletes; such conditions are the only ones admitted by sports committees.

  7. Usanov M.S., Kulberg N.S., Yakovleva T.V., Morozov S.P.
    Determination of CT dose by means of noise analysis
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

    The article deals with the development of an efficient algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in radiometry and radiography relies on tabulated X-ray absorption coefficients, while individual dose factors are often not taken into account at all, since many studies lack a dose report; instead, an average value is used to simplify the statistics. In this regard, it was decided to develop a method for detecting the number of ionizing quanta by analyzing the noise of CT data. As the basis of the algorithm we used a mathematical model of our own design, built on the Poisson and Gaussian distributions of the logarithmic value. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficients are known from tabulated values. The data were obtained from several CT devices of different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of emitted X-ray quanta per unit time. Taking into account the noise level and the radii of the cylinders, these data were converted into X-ray absorption values, which were then compared with the tabulated values. Applying the algorithm to CT data of various configurations yielded experimental data consistent with the theoretical part and the mathematical model. The results showed good accuracy of the algorithm and the mathematical apparatus, which indicates the reliability of the obtained data. This mathematical model is already used in a noise reduction program for CT of our own design, where it serves to set a dynamic noise reduction threshold. At the moment, the algorithm is being adapted to work with real patient CT data.
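
    The core statistical idea can be sketched as follows: detected counts are Poisson, so after the CT logarithmic transform the local variance of the projection values is approximately the reciprocal of the detected quanta, which together with a tabulated absorption coefficient recovers the emitted quanta. The simulation below checks this on synthetic data; it illustrates the principle, not the authors' full algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

n0 = 50_000                       # emitted quanta per ray (to be recovered)
mu_water, thickness = 0.19, 5.0   # tabulated attenuation [1/cm] and path [cm]
lam = n0 * np.exp(-mu_water * thickness)   # mean detected count (Beer-Lambert)

counts = rng.poisson(lam, size=100_000)    # detector signal in a flat water region
log_counts = np.log(counts)

# For Poisson counts, var(ln N) ~ 1/lambda at large lambda, so the noise level
# of the log-transformed data directly estimates the detected quanta:
lam_est = 1.0 / log_counts.var()
n0_est = lam_est * np.exp(mu_water * thickness)   # invert Beer-Lambert
print(f"true N0 = {n0}, estimated N0 = {n0_est:,.0f}")
```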

  8. Malikov Z.M., Nazarov F.K.
    Study of turbulence models for calculating a strongly swirling flow in an abruptly expanding channel
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 793-805

    In this paper, fundamentally different turbulence models are compared for calculating a strongly swirling flow in an abruptly expanding pipe. This problem is of great importance not only in practice but also in theory, because in such a flow a very complex anisotropic turbulence with recirculation zones arises, and studying the ongoing processes allows one to answer many questions about turbulence. The flow under consideration has been well studied experimentally, which makes it a very complex and interesting test problem for turbulence models. The paper compares the numerical results of the one-parameter vt-92 model, the SSG/LRR-RSM-w2012 Reynolds stress method, and the new two-fluid model. These models differ greatly from one another: the Boussinesq hypothesis is used in the one-parameter vt-92 model; in the SSG/LRR-RSM-w2012 model a separate equation is written for each stress; and the new two-fluid model is based on a completely different approach to turbulence, a feature of which is that it yields a closed system of equations. The models are compared not only by the correspondence of their results to experimental data, but also by the computational resources expended on their numerical implementation. Therefore, in this work the same technique was used for all models to numerically calculate the turbulent swirling flow at the Reynolds number $Re=3\cdot 10^4$ and the swirl parameter $S_w=0.6$. The paper shows that the new two-fluid model is effective for studying turbulent flows, because it describes complex anisotropic turbulent flows with good accuracy and is simple enough for numerical implementation.
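
    For reference, the swirl parameter characterizing the inflow is usually defined as the axial flux of angular momentum divided by the axial momentum flux times the pipe radius; below is a sketch of computing it from radial profiles. The standard definition is assumed, and the profiles are illustrative.

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal rule, kept explicit to avoid numpy version differences."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def swirl_number(r, u_axial, w_tangential):
    """S_w = int(u w r^2 dr) / (R int(u^2 r dr)), constant density assumed."""
    R = r[-1]
    return trapezoid(u_axial * w_tangential * r ** 2, r) / (
        R * trapezoid(u_axial ** 2 * r, r))

R = 0.05                                   # pipe radius, m (assumed)
r = np.linspace(0.0, R, 200)
u = np.full_like(r, 10.0)                  # flat axial profile, m/s
w = 1.2 * 10.0 * r / R                     # solid-body rotation tuned to S_w = 0.6
print("S_w =", round(swirl_number(r, u, w), 3))
```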

  9. Varshavskiy A.E.
    A model for analyzing income inequality based on a finite functional sequence (adequacy and application problems)
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 675-689

    The paper considers the adequacy of a model developed earlier by the author for the analysis of income inequality, based on the empirically confirmed hypothesis that the shares of the 20% population groups in total income, relative to the income of the richest group, can be represented as a finite functional sequence, each member of which depends on one parameter, a specially defined indicator of inequality. It is shown that, in addition to the existing methods of inequality analysis, the model makes it possible to estimate, using analytical expressions, the income shares of 20%, 10% and smaller population groups for different levels of inequality, to identify how these shares change as inequality grows, to estimate the level of inequality from known ratios between the incomes of different population groups, etc.

    The paper provides a more detailed confirmation of the adequacy of the proposed model in comparison with the previously obtained results of statistical analysis of empirical data on the distribution of income between the 20% and 10% population groups. It is based on the analysis of certain ratios between the values of quintiles and deciles implied by the proposed model. These ratios were verified using data for a large number of countries, and the estimates obtained confirm the sufficiently high accuracy of the model.

    Data are presented that confirm the possibility of using the model to analyze the dependence of income distribution by population groups on the level of inequality, as well as to estimate the inequality indicator from income ratios between different groups. The cases considered include those when the income of the richest 20% is equal to the income of the poorest 60%, of the middle class 40%, or of the remaining 80% of the population; when the income of the richest 10% is equal to the income of the poorest 40%, 50% or 60%, or to the income of various middle class groups; when the distribution of income obeys harmonic proportions; and when the quintiles and deciles corresponding to the middle class reach a maximum. It is shown that the income shares of the richest middle class groups are relatively stable and reach a maximum at certain levels of inequality.
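
    The functional sequence itself is not reproduced in the abstract, so the sketch below assumes, purely for illustration, a geometric form: the income of each successive 20% group relative to the richest one scales by a single parameter $k \in (0, 1]$. With such a stand-in one can compute group shares analytically and solve, for example, for the inequality level at which the richest 20% earn as much as the poorest 60%.

```python
import numpy as np
from scipy.optimize import brentq

def quintile_shares(k):
    """Income shares of the five 20% groups, poorest first.

    Illustrative stand-in for the paper's finite functional sequence:
    incomes relative to the richest quintile are k^4, k^3, k^2, k, 1.
    """
    rel = k ** np.arange(4, -1, -1, dtype=float)
    return rel / rel.sum()

# Inequality level at which the richest 20% earn as much as the poorest 60%:
gap = lambda k: quintile_shares(k)[-1] - quintile_shares(k)[:3].sum()
k_star = brentq(gap, 1e-6, 1.0)
print("k* =", round(k_star, 4), "shares:", np.round(quintile_shares(k_star), 3))
```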

    The results obtained with the help of the model can be used to determine benchmarks for developing a policy of gradually increasing the level of progressive taxation in order to move to the level of inequality typical of countries with socially oriented economies.

  10. Skorik S.N., Pirau V.V., Sedov S.A., Dvinskikh D.M.
    Comparison of stochastic approximation and sample average approximation for saddle point problem with bilinear coupling term
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 381-391

    Stochastic optimization is an active area of research due to significant advances in machine learning and its applications to everyday problems. In this paper, we consider two fundamentally different methods for solving the stochastic optimization problem: online and offline algorithms. The corresponding algorithms have their own qualitative advantages over each other. For offline algorithms, an auxiliary problem has to be solved with high accuracy; however, this can be done in a distributed manner, which opens up fundamental possibilities such as, for example, the construction of a dual problem. Despite this, both online and offline algorithms pursue a common goal: solving the stochastic optimization problem with a given accuracy. This is reflected in the comparison of the computational complexity of the described algorithms, which is demonstrated in this paper.

    The described methods are compared for two types of stochastic problems: convex optimization and saddle point problems. For stochastic convex optimization problems, the existing solutions make it possible to compare online and offline algorithms in some detail. In particular, for strongly convex problems the computational complexity of the algorithms is the same, and the strong convexity condition can be weakened to the condition of $\gamma$-growth of the objective function. From this point of view, saddle point problems are much less studied. Nevertheless, the existing solutions allow us to outline the main directions of research. Thus, significant progress has been made for bilinear saddle point problems using online algorithms, while offline algorithms are represented by just one study. In this paper, that example is used to demonstrate the similarity of both algorithms with the convex optimization case. The question of the accuracy of solving the auxiliary problem for saddle point problems has also been worked out. On the other hand, the saddle point problem of stochastic optimization generalizes the convex one, that is, it is its logical continuation. This manifests itself in the fact that existing results from convex optimization can be transferred to saddle point problems. In this paper, such a transfer is carried out for the results of the online algorithm in the convex case, when the objective function satisfies the $\gamma$-growth condition.
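
    A minimal side-by-side of the two paradigms on a toy stochastic convex problem $\min_x E[(a^\top x - b)^2]$: the online route (stochastic approximation) takes one stochastic gradient step per fresh sample, while the offline route (sample average approximation) draws a batch once and solves the resulting empirical problem to high accuracy. Everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
x_true = np.arange(1.0, d + 1.0)

def sample(n):
    """Draws n pairs (a, b) with b = a^T x_true + noise."""
    A = rng.normal(size=(n, d))
    return A, A @ x_true + 0.1 * rng.normal(size=n)

# Online (stochastic approximation): one SGD step per sample.
x = np.zeros(d)
for t in range(1, 5001):
    a, b = sample(1)
    grad = 2.0 * a[0] * (a[0] @ x - b[0])   # gradient of (a^T x - b)^2
    x -= (0.5 / t) * grad                   # Robbins-Monro step size ~ 1/t
x_sa = x

# Offline (sample average approximation): fix a batch, solve it exactly.
A, b = sample(5000)
x_saa, *_ = np.linalg.lstsq(A, b, rcond=None)

print("SA  error:", np.linalg.norm(x_sa - x_true))
print("SAA error:", np.linalg.norm(x_saa - x_true))
```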
