Search results for 'algorithm':
Articles found: 287
  1. Sukhov E.A., Chekina E.A.
    Software complex for numerical modeling of multibody system dynamics
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 161-174

    This work deals with numerical modeling of the motion of multibody systems consisting of rigid bodies with arbitrary masses and inertial properties. Both planar and spatial systems, which may contain kinematic loops, are considered.

    The numerical modeling is fully automatic, and its computational algorithm consists of three principal steps. In step one, a graph of the mechanical system under consideration is formed from the user input. This graph represents the hierarchical structure of the mechanical system. In step two, the differential-algebraic equations of motion of the system are derived using the so-called Joint Coordinate Method, which minimizes redundancy and lowers the number of equations of motion, thus optimizing the calculations. In step three, the equations of motion are integrated numerically, and the resulting laws of motion are presented via the user interface or output files.

    The aforementioned algorithm is implemented in a software complex that comprises a computer algebra system, a graph library, a mechanical solver, a library of numerical methods, and a user interface.
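    Step one of the pipeline described above can be sketched in a few lines: build a graph of bodies connected by joints and check for kinematic loops. All names and data structures below are illustrative assumptions, not the paper's actual implementation.

```python
def build_graph(joints):
    """joints: list of (body_a, body_b) pairs taken from user input."""
    graph = {}
    for a, b in joints:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def has_kinematic_loop(graph):
    """A connected system with at least as many joints as bodies
    must contain a cycle (a kinematic loop)."""
    n_edges = sum(len(nbrs) for nbrs in graph.values()) // 2
    return n_edges >= len(graph)

# A four-bar linkage closes a loop; a simple pendulum chain does not.
four_bar = build_graph([("ground", "crank"), ("crank", "coupler"),
                        ("coupler", "rocker"), ("rocker", "ground")])
chain = build_graph([("ground", "link1"), ("link1", "link2")])
```

    Loop detection matters because tree-structured systems admit a plain hierarchy, while loops force extra constraint equations in the derivation step.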

  2. Krivovichev G.V.
    Difference splitting schemes for the system of one-dimensional equations of hemodynamics
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 459-488

    The work is devoted to the construction and analysis of difference schemes for a system of hemodynamic equations obtained by averaging the hydrodynamic equations of a viscous incompressible fluid over the vessel cross-section. Models of blood as an ideal fluid and as a viscous Newtonian fluid are considered. Difference schemes that approximate the equations with second order in the spatial variable are proposed. The computational algorithms of the constructed schemes are based on splitting by physical processes: at each time step, the model equations are considered separately and sequentially. The practical implementation of the proposed schemes at each time step reduces to the sequential solution of two linear systems with tridiagonal matrices. It is demonstrated that the schemes are $\rho$-stable under minor restrictions on the time step in the case of sufficiently smooth solutions.

    For a problem with a known analytical solution, it is demonstrated that the numerical solution has second-order convergence over a wide range of spatial grid steps. In computational experiments on modeling blood flow in model vascular systems, the proposed schemes are compared with well-known explicit schemes, such as the Lax – Wendroff, Lax – Friedrichs and MacCormack schemes. The results obtained with the proposed schemes are close to those obtained with other computational schemes, including schemes constructed by other approaches to spatial discretization. For various spatial grids, the computation time of the proposed schemes is significantly less than that of the explicit schemes, despite the need to solve systems of linear equations at each step. The disadvantages of the schemes are the limitation on the time step in the case of discontinuous or strongly varying solutions and the need to extrapolate values at the boundary points of the vessels. In this regard, adapting the splitting schemes to problems with discontinuous solutions and to special types of conditions at the vessel ends is a promising direction for further research.
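    The tridiagonal systems mentioned above are typically solved with the Thomas algorithm (O(n) forward elimination plus back substitution). This is a standard sketch, not the paper's code:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a = sub-diagonal, b = main diagonal,
    c = super-diagonal, d = right-hand side (all length n; a[0] and
    c[n-1] are unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the 1D discrete Laplacian system with stencil [-1, 2, -1];
# the exact solution here is x = [1, 1, 1].
x = thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1])
```

    For the schemes above, one such solve is performed per split sub-step, which is why the implicit approach stays cheaper than small-time-step explicit schemes.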

  3. Vetchanin E.V., Tenenev V.A., Shaura A.S.
    Motion control of a rigid body in viscous fluid
    Computer Research and Modeling, 2013, v. 5, no. 4, pp. 659-675

    We consider the optimal motion control problem for a mobile device with an external rigid shell moving along a prescribed trajectory in a viscous fluid. The mobile robot under consideration is self-propelled: self-locomotion is implemented through the back-and-forth motion of an internal material point. The optimal motion control is based on a Sugeno fuzzy inference system. To obtain the base of fuzzy rules, an approach based on constructing decision trees is proposed, with a genetic algorithm used for structural and parametric synthesis.

  4. Rusyak I.G., Tenenev V.A.
    Modeling of ballistics of an artillery shot taking into account the spatial distribution of parameters and backpressure
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1123-1147

    The paper provides a comparative analysis of the results obtained by various approaches to modeling the artillery shot process. In this connection, the main problem of internal ballistics and its particular case, the Lagrange problem, are formulated in averaged parameters, where, within the assumptions of the thermodynamic approach, the distribution of pressure and gas velocity over the space behind the projectile is taken into account for the first time for a channel of variable cross-section. The Lagrange problem is also stated within the gas-dynamic approach, taking into account the spatial (one-dimensional and two-dimensional axisymmetric) variation of the characteristics of the ballistic process. The control volume method is used to numerically solve the system of Euler gas-dynamic equations. Gas parameters at the boundaries of the control volumes are determined using a self-similar solution of the Riemann problem. Based on the Godunov method, a modification of the Osher scheme is proposed that makes it possible to implement a numerical algorithm with second-order accuracy in space and time. The solutions obtained within the thermodynamic and gas-dynamic approaches are compared for various loading parameters. The effect of projectile mass and chamber broadening on the distribution of the ballistic parameters of the shot and on the dynamics of the projectile motion is studied. It is shown that the thermodynamic approach, compared with the gas-dynamic approach, leads to a systematic overestimation of the estimated muzzle velocity of the projectile over the entire range of parameters studied; the difference in muzzle velocity can reach 35%. At the same time, the discrepancy between the results of the one-dimensional and two-dimensional gas-dynamic models of the shot over the same range of parameters does not exceed 1.3%.

    A spatial gas-dynamic formulation of the backpressure problem is given, which describes the change in pressure in front of the accelerating projectile as it moves along the barrel channel. It is shown that accounting for the projectile's front, as is possible in the two-dimensional axisymmetric formulation of the problem, leads to a significant difference in the pressure fields behind the shock wave front compared with the solution in the one-dimensional formulation, where the projectile's front cannot be taken into account. It is concluded that this can significantly affect the results of modeling the ballistics of a shot at high shooting velocities.
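    As a much simpler relative of the control-volume/Godunov machinery used above, here is a first-order Godunov (upwind) finite-volume step for linear advection with periodic boundaries. For this linear equation the face Riemann problem is solved exactly by taking the upwind value; the paper's solver handles the full Euler equations with a second-order Osher-type scheme, which is far more involved.

```python
def godunov_advection_step(u, a, dt, dx):
    """One finite-volume step for u_t + a*u_x = 0, a > 0, periodic BCs."""
    n = len(u)
    # Exact (upwind) Riemann flux through the left face of each cell.
    flux = [a * u[(i - 1) % n] for i in range(n)]
    # Conservative update: cell average changes by the net face flux.
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

# Advect a square pulse at CFL = a*dt/dx = 0.5 (stable for CFL <= 1).
u = [1.0 if 2 <= i < 5 else 0.0 for i in range(10)]
total0 = sum(u)
for _ in range(20):
    u = godunov_advection_step(u, a=1.0, dt=0.05, dx=0.1)
```

    The conservative flux form guarantees that the total "mass" in the cells is preserved exactly, a property that carries over to the full gas-dynamic setting.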

  5. Elaraby A.E., Nechaevskiy A.V.
    An effective segmentation approach for liver computed tomography scans using fuzzy exponential entropy
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 195-202

    Accurate segmentation of the liver plays an important role in contouring during diagnosis and treatment planning. Image analysis and processing technologies are widely used in medical diagnostics and therapeutic applications. Liver segmentation refers to the process of automatic or semi-automatic detection of liver boundaries in an image. A major difficulty in liver image segmentation is the high variability: human anatomy itself shows major modes of variation. In this paper, an approach for computed tomography (CT) liver segmentation is presented that combines exponential entropy and fuzzy c-partition. The entropy concept has been utilized in various applications in the imaging domain. Entropy-based threshold techniques have attracted considerable attention in the image analysis and processing literature in recent years and are among the most powerful techniques in image segmentation. In the proposed approach, the liver CT image is transformed into the fuzzy domain, and fuzzy entropies are defined for the image object and background. In the threshold selection procedure, the proposed approach considers not only the information of the image background and object but also the interactions between them: the threshold is selected by finding a parameter combination of the membership function such that the total fuzzy exponential entropy is maximized. The Differential Evolution (DE) algorithm is used to optimize the exponential entropy measure and obtain the image thresholds. Experiments on different liver CT scans demonstrate the efficiency of the proposed approach. Based on the visual clarity of images segmented with varied threshold values, the visual quality of the segmented liver images was observed to be better at higher threshold levels.
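    The threshold-selection idea can be illustrated on a toy histogram. Exhaustive search over thresholds stands in here for the Differential Evolution optimizer used in the paper, and both the linear membership ramp and the Pal–Pal-style exponential entropy of the fuzzy class masses are illustrative assumptions, not the paper's exact formulation.

```python
import math

def fuzzy_exponential_entropy(pixels, t, w=50.0):
    """Exponential entropy of the two-class fuzzy partition induced by
    threshold t, with a linear membership ramp of half-width w."""
    # Membership of each gray level in the "object" (bright) class.
    mu = [min(1.0, max(0.0, 0.5 + (g - t) / (2.0 * w))) for g in pixels]
    p_obj = sum(mu) / len(mu)          # fuzzy mass of the object class
    p_bg = 1.0 - p_obj
    # Pal-Pal exponential entropy, maximal for a balanced partition.
    return p_obj * math.exp(1.0 - p_obj) + p_bg * math.exp(1.0 - p_bg)

# Synthetic bimodal "image": dark background and bright object.
pixels = [40] * 50 + [200] * 50
t_best = max(range(256), key=lambda t: fuzzy_exponential_entropy(pixels, t))
```

    On this bimodal data the entropy-maximizing threshold lands between the two gray-level clusters; DE would perform the same search over the membership-function parameters in a continuous space.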

  6. Pletnev N.V., Dvurechensky P.E., Gasnikov A.V.
    Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444

    The article is devoted to studying the application of convex optimization methods to the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation is of elliptic type. The Cauchy problem is formulated as an inverse problem and reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using solutions of boundary value problems which, in turn, are well-posed and can be solved approximately by standard numerical methods, such as finite-difference schemes and Fourier series expansions. The convergence of the applied fast gradient method and the quality of the resulting solution are investigated experimentally. The experiments show that the accelerated gradient method (the Similar Triangle Method) converges faster than the non-accelerated method. Theorems on the computational complexity of the resulting algorithms are formulated and proved. It is found that Fourier series expansions are better than finite-difference schemes in terms of calculation speed and improve the quality of the obtained solution. An attempt was made to restart the Similar Triangle Method after each halving of the residual of the functional. In this case, the convergence does not improve, which confirms the absence of strong convexity. The experiments also show that the inaccuracy of the calculations is more adequately described by an additive noise model in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, accelerated gradient optimization methods can be an effective way to solve inverse problems.
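    The speedup from acceleration can be seen on a toy ill-conditioned quadratic. This sketch uses a generic Nesterov-style momentum rule (of which the Similar Triangle Method is one variant) and is not the paper's Helmholtz setup, where each gradient evaluation requires solving auxiliary boundary value problems.

```python
def f(x):                          # f(x) = 0.5*(x0^2 + 100*x1^2)
    return 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)

def grad(x):
    return [x[0], 100.0 * x[1]]

L = 100.0                          # Lipschitz constant of the gradient
x_gd = [1.0, 1.0]                  # plain gradient descent iterate
x, y = [1.0, 1.0], [1.0, 1.0]      # accelerated iterates
for k in range(100):
    # Plain gradient step with step size 1/L.
    x_gd = [xi - gi / L for xi, gi in zip(x_gd, grad(x_gd))]
    # Accelerated step: gradient step at the extrapolated point y,
    # then momentum extrapolation with weight k/(k+3).
    x_new = [yi - gi / L for yi, gi in zip(y, grad(y))]
    beta = k / (k + 3.0)
    y = [xn + beta * (xn - xo) for xn, xo in zip(x_new, x)]
    x = x_new
```

    After the same number of gradient evaluations, the accelerated iterate reaches a markedly lower functional value, matching the O(1/k^2) versus O(1/k) worst-case rates.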

  7. Nikitin I.S., Nikitin A.D.
    Multi-regime model and numerical algorithm for calculating the development of various types of quasi-cracks under cyclic loading
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 873-885

    A new method for calculating the initiation and development of narrow local damage zones in specimens and structural elements subjected to various modes of cyclic loading is proposed, based on a multi-regime, two-criterion model of fatigue fracture. Such narrow damage zones can be considered quasi-cracks of two different types, corresponding to the mechanisms of normal crack opening and shear.

    Numerical simulations were performed to reproduce the left and right branches of the full fatigue curves for specimens made of titanium and aluminum alloys and to verify the model. These branches were constructed from test results obtained under various cyclic loading modes and schemes. Examples are given of modeling the development of quasi-cracks of the two types (normal opening and shear) under different cyclic loading modes for a plate with a hole as a stress concentrator. Under a complex stress state, the proposed multi-regime model naturally admits any of the considered mechanisms of quasi-crack development. Quasi-cracks of different types can develop in different parts of the specimen, including simultaneously.

  8. Ignatev N.A., Tuliev U.Y.
    Semantic structuring of text documents based on patterns of natural language entities
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

    The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original document-description space and to search for semantically related words by topic. Dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.

    It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, whose analysis requires a common dictionary. The membership of words in the common dictionary is initially unknown. Objects of the two classes are treated as oppositions to each other. Quantitative parameters of oppositionality are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.

    To calculate the stability, the feature values are divided into non-intersecting intervals, the optimal boundaries of which are determined by a special criterion. The maximum stability is achieved under the condition that the boundaries of each interval contain values of one of the two classes.

    The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. The process of formation of patterns and latent features based on them is implemented according to the rules of hierarchical agglomerative grouping.

    A set of latent features is used for cluster analysis of documents using metric grouping algorithms. The analysis applies the coefficient of content authenticity based on the data on the belonging of documents to classes. The coefficient is a numerical characteristic of the dominance of class representatives in groups.

    To divide documents into topics, it is proposed to use the union of groups in relation to their centers. As patterns for each topic, a sequence of words ordered by frequency of occurrence from a common dictionary is considered.

    The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the general dictionary are formed for four topics.
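    The grouping step above can be illustrated with a toy agglomerative procedure over bag-of-words data. The similarity measure (Jaccard over document-occurrence sets) and the merge rule are illustrative assumptions; the paper orders and groups features by stability values instead.

```python
docs = [
    "graph node edge path",
    "node edge cycle graph",
    "fluid flow pressure velocity",
    "pressure velocity fluid vessel",
]

# Word -> set of documents containing it (bag-of-words occurrence sets).
occ = {}
for i, d in enumerate(docs):
    for w in d.split():
        occ.setdefault(w, set()).add(i)

def jaccard(a, b):
    return len(occ[a] & occ[b]) / len(occ[a] | occ[b])

# Hierarchical agglomerative grouping: start with singleton groups and
# repeatedly merge the pair with the highest single-link similarity,
# stopping when it drops below 0.5.
groups = [[w] for w in occ]
while len(groups) > 1:
    def sim(ij):
        return max(jaccard(a, b) for a in groups[ij[0]] for b in groups[ij[1]])
    pairs = [(i, j) for i in range(len(groups)) for j in range(i + 1, len(groups))]
    i, j = max(pairs, key=sim)
    if sim((i, j)) < 0.5:
        break
    groups[i] += groups.pop(j)
```

    On this toy corpus the procedure recovers the two underlying topics (graph terminology versus fluid terminology) as two word groups.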

  9. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Gorbachev R.A.
    Development of and research on machine learning algorithms for solving the classification problem in Twitter publications
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 185-195

    Posts on social networks can predict the movement of the financial market and in some cases even determine its direction. The analysis of posts on Twitter contributes to the prediction of cryptocurrency prices. The specificity of the community is reflected in a special vocabulary: slang expressions and abbreviations are used in posts, which makes it difficult to vectorize the text data; therefore, preprocessing methods such as Stanza lemmatization and regular expressions are considered. This paper describes simple machine learning models that can work despite such problems as the lack of data and a short prediction timeframe. In the binary classification problem, a word is treated as an element of a binary vector of a data unit. The base words are determined from a frequency analysis of word mentions. The labeling is based on Binance candlesticks with variable parameters for a more accurate description of the price trend. The paper introduces metrics that reflect the distribution of words depending on their belonging to the positive or the negative class. To solve the classification problem, we used a dense model with parameters selected by Keras Tuner, logistic regression, a random forest classifier, a naive Bayes classifier capable of working with a small sample (which is very important for our task), and the k-nearest neighbors method. The constructed models were compared based on the accuracy of the predicted labels. During the investigation we found that the best approach is to use models that predict the price movements of a single coin. Our model deals with posts that mention the LUNA project, which no longer exists. This approach to the binary classification of text data is widely used to predict the price of an asset and the trend of its movement, and is often used in automated trading.
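    One of the model families listed above, naive Bayes over binary word vectors, can be sketched as a Bernoulli classifier with Laplace smoothing. The toy vocabulary, posts, and labels are invented for illustration and are not the paper's dataset.

```python
import math

vocab = ["moon", "pump", "dump", "crash", "buy", "sell"]

def vectorize(post):
    """Binary bag-of-words vector over the fixed vocabulary."""
    words = set(post.lower().split())
    return [1 if w in words else 0 for w in vocab]

def train(X, y):
    """Per-class priors and Bernoulli word probabilities (Laplace-smoothed)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        model[c] = (
            len(rows) / len(X),                               # prior P(c)
            [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)  # P(word_j | c)
             for j in range(len(vocab))],
        )
    return model

def predict(model, x):
    def logp(c):
        prior, probs = model[c]
        return math.log(prior) + sum(
            math.log(p if xi else 1 - p) for xi, p in zip(x, probs))
    return max(model, key=logp)

posts = ["buy now moon pump", "moon pump buy", "sell crash dump", "dump sell"]
labels = [1, 1, 0, 0]              # 1 = price up, 0 = price down
model = train([vectorize(p) for p in posts], labels)
```

    Smoothing keeps every word probability strictly between 0 and 1, which is what lets the model cope with the small samples emphasized above.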

  10. Tomonin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than those in the literature, including the bounds of a recently proposed nearly-optimal algorithm that does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the non-composite setting. Besides, our algorithms allow one to separate the complexity bounds, i. e., to estimate, for each part of the objective separately, the number of oracle calls sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may also be of independent interest.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index
