Search results for 'criterion':
Articles found: 39
  1. Nikitin I.S., Nikitin A.D.
    Multi-regime model and numerical algorithm for calculating the development of various types of quasi-cracks under cyclic loading
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 873-885

    A new method for calculating the initiation and development of narrow local damage zones in specimens and structural elements subjected to various modes of cyclic loading is proposed, based on a multi-regime, two-criterion model of fatigue fracture. Such narrow damage zones can be considered as quasi-cracks of two different types, corresponding to the mechanisms of normal crack opening and shear.

    Numerical simulations were performed to reproduce the left and right branches of the full fatigue curves for specimens made of titanium and aluminum alloys and to verify the model. These branches were constructed from test results obtained under various cyclic loading modes and schemes. Examples are given of modeling the development of quasi-cracks of the two types (normal opening and shear) under different cyclic loading modes for a plate with a hole as a stress concentrator. Under a complex stress state, the proposed multi-regime model naturally admits any of the considered mechanisms of quasi-crack development. Quasi-cracks of different types can develop in different parts of the specimen, including simultaneously.
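
    A minimal sketch of how such a two-criterion regime selection might look at a single material point; the stress measures and threshold values are illustrative assumptions, not the paper's actual criteria:

        def quasi_crack_regime(sigma_n, tau, sigma_n_th=400.0, tau_th=250.0):
            """Decide which quasi-crack mechanisms are active at a point.

            sigma_n, tau -- amplitudes of normal and shear stress on the
            critical plane; thresholds are illustrative placeholders.
            Both mechanisms can be active simultaneously.
            """
            regimes = []
            if sigma_n >= sigma_n_th:
                regimes.append("normal opening")
            if tau >= tau_th:
                regimes.append("shear")
            return regimes or ["no damage growth"]

        print(quasi_crack_regime(450.0, 300.0))  # -> ['normal opening', 'shear']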

  2. Ignatev N.A., Tuliev U.Y.
    Semantic structuring of text documents based on patterns of natural language entities
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

    The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space of document descriptions and to search for words semantically related by topic. Dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.

    It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, whose analysis requires a common dictionary. Which words belong to this common vocabulary is initially unknown. Objects of the two classes are treated as opposed to each other. Quantitative parameters of this opposition are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.

    To calculate the stability, the feature values are divided into non-intersecting intervals whose optimal boundaries are determined by a special criterion. Maximum stability is achieved when each interval contains values of only one of the two classes.
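
    The paper's exact criterion is not reproduced here; below is a minimal sketch of one purity-based stand-in, where stability is the size-weighted share of the dominant class over the intervals and reaches 1 exactly when every interval is single-class:

        import numpy as np

        def interval_stability(values, labels, boundaries):
            """Size-weighted dominant-class share over non-intersecting intervals."""
            edges = np.concatenate(([-np.inf], boundaries, [np.inf]))
            purities, weights = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (values > lo) & (values <= hi)
                if not mask.any():
                    continue
                counts = np.bincount(labels[mask], minlength=2)
                purities.append(counts.max() / mask.sum())
                weights.append(mask.sum())
            return np.average(purities, weights=weights)

        values = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
        labels = np.array([0, 0, 0, 1, 1])
        print(interval_stability(values, labels, boundaries=[0.5]))  # -> 1.0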

    The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. Patterns, and the latent features based on them, are formed according to the rules of hierarchical agglomerative grouping.

    The set of latent features is used for cluster analysis of documents with metric grouping algorithms. The analysis uses a content-authenticity coefficient based on the known class membership of the documents. The coefficient is a numerical characteristic of the dominance of class representatives within groups.
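
    A minimal sketch of one way such a dominance coefficient could be computed (the paper's exact definition may differ): for each group, take the share of its dominant class, then average with weights proportional to group sizes:

        import numpy as np

        def content_authenticity(group_ids, labels):
            """Size-weighted dominant-class share over groups of documents."""
            shares, sizes = [], []
            for g in np.unique(group_ids):
                member_labels = labels[group_ids == g]
                counts = np.bincount(member_labels)
                shares.append(counts.max() / member_labels.size)
                sizes.append(member_labels.size)
            return np.average(shares, weights=sizes)

        group_ids = np.array([0, 0, 0, 1, 1, 1])
        labels    = np.array([0, 0, 1, 1, 1, 1])
        print(content_authenticity(group_ids, labels))  # -> (2/3 + 1) / 2 ≈ 0.83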

    To divide documents into topics, it is proposed to merge groups around their centers. As the pattern of each topic, a sequence of words from the common dictionary ordered by frequency of occurrence is considered.

    The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the common dictionary are formed for four topics.

  3. Fokin G.A., Volgushev D.B.
    Models for spatial selection during location-aware beamforming in ultra-dense millimeter wave radio access networks
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 195-216

    The work establishes how the potential for spatial selection of useful and interfering signals, measured by the signal-to-interference ratio criterion, depends on the positioning error of user equipment when a base station equipped with an antenna array performs beamforming based on user location. Configurable simulation parameters include a planar antenna array with a varying number of elements, the movement trajectory, and the accuracy of user location estimation expressed as the root mean square error of the coordinate estimates. The model implements three algorithms for controlling the shape of the antenna radiation pattern: 1) steering the beam with one maximum and one null; 2) controlling the shape and width of the main beam; 3) adaptive beamforming. The simulation results showed that the first algorithm is most effective when the number of antenna array elements is no more than 5 and the positioning error is no more than 7 m, while the second algorithm is appropriate when the number of elements exceeds 15 and the positioning error exceeds 5 m. Adaptive beamforming is implemented using a training signal and provides optimal spatial selection of useful and interfering signals without device location data, but its hardware implementation is highly complex. Scripts of the developed models are available for verification. The results can be used to develop scientifically grounded recommendations for beam control in ultra-dense millimeter-wave radio access networks of the fifth and subsequent generations.
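
    A minimal sketch of the underlying beam-steering mechanics for a uniform linear array (the paper uses a planar array and three control algorithms; the element count, angles, and spacing below are illustrative assumptions):

        import numpy as np

        def array_factor(n_elem, theta_steer, theta, d=0.5):
            """Normalized array factor of a uniform linear array with element
            spacing d (in wavelengths) and the main beam steered to theta_steer."""
            k = np.arange(n_elem)
            w = np.exp(-1j * 2 * np.pi * d * k * np.sin(theta_steer))  # steering weights
            a = np.exp(+1j * 2 * np.pi * d * k * np.sin(theta))        # response at theta
            return np.abs(w @ a) / n_elem

        # SIR gain from pointing a 16-element beam at the user (20 deg)
        # versus an interferer located at 45 deg
        g_user = array_factor(16, np.deg2rad(20), np.deg2rad(20))
        g_intf = array_factor(16, np.deg2rad(20), np.deg2rad(45))
        print(20 * np.log10(g_user / g_intf), "dB")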

  4. Varshavsky L.E.
    Study of the dynamics of the structure of oligopolistic markets under non-market opposition of participants
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 219-233

    The article examines the impact of non-market actions of participants in oligopolistic markets on the market structure. The following actions of one of the market participants, aimed at increasing its market share, are analyzed: 1) price manipulation; 2) blocking the investments of stronger oligopolists; 3) destruction of competitors' products and capacities. Linear dynamic games with a quadratic criterion are used to model the strategies of the oligopolists. Their use is justified both by the possibility of adequately describing the evolution of markets and by the availability of two mutually complementary approaches to determining the oligopolists' strategies: 1) representing the models in the state space and solving generalized Riccati equations; 2) applying operational calculus methods (in the frequency domain), which offer the clarity needed for economic analysis.
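
    As a sketch of the first, state-space approach, here is the backward Riccati recursion for a single-player finite-horizon linear-quadratic problem, the building block that game formulations couple across players; the matrices and horizon are illustrative assumptions:

        import numpy as np

        def lq_feedback_gains(A, B, Q, R, T):
            """Backward Riccati recursion for min sum(x'Qx + u'Ru) subject to
            x_{t+1} = A x_t + B u_t; returns gains K_t with u_t = -K_t x_t."""
            P = Q.copy()
            gains = []
            for _ in range(T):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
                gains.append(K)
            return gains[::-1]

        A = np.array([[1.0, 0.1], [0.0, 0.9]])
        B = np.array([[0.0], [0.1]])
        K0 = lq_feedback_gains(A, B, np.eye(2), np.eye(1), T=25)[0]
        print(K0)  # first-period feedback gain over a 25-period horizon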

    The article shows the equivalence of the state-space and frequency-domain approaches to solving the problem with maximin criteria of the oligopolists. The results of the calculations are considered for a duopoly with indicators close to those of one of the duopolies in the global microelectronics industry. The second duopolist is less cost-efficient, though more agile; its goal is to increase its market share by implementing the non-market methods listed above.

    Calculations carried out with the help of the game model made it possible to construct dependences characterizing the relative increases in production volumes of the weak and the strong duopolist over a 25-year period under price manipulation. The constructed dependences show that, for the accepted linear demand function, an increase in price leads to a very small increase in the production of the strong duopolist but, simultaneously, to a significant increase in this indicator for the weak one.

    Calculations carried out with the other variants of the model show that blocking investments, as well as destroying the products of the strong duopolist, leads to a more significant increase in the marketable output of the weak duopolist than the corresponding decrease in this indicator for the strong one.

  5. Zavodskikh R.K., Efanov N.N.
    Performance prediction for selected types of loops over one-dimensional arrays with embedding-driven intermediate representation analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping a set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction on top of the LLVM compiler infrastructure. Embeddings make programs easier to compare because they avoid direct comparison of control flow graphs (CFG) and data flow graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass, depending on the load offset delta between the current and the previous instruction), mapping of the instrumented IR to a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; it is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests: sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of t-SNE initialization, which supports the heuristic criterion. The process of generating such synthetic tests is also considered. Moreover, the spread of the performance metric across the programs of such a test is proposed as a measure to be improved by exploring further test generators.
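
    A minimal sketch of the final analysis step, with random stand-ins for the IR2Vec vectors and the measured miss ratios (the real pipeline obtains both from instrumented LLVM IR and perf stat):

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.manifold import TSNE

        rng = np.random.default_rng(0)
        embeddings = rng.normal(size=(50, 300))          # stand-ins for IR2Vec vectors
        miss_ratio = rng.uniform(0.0, 0.3, size=50)      # stand-ins for D1 miss ratios

        pts = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)
        worst = pts[np.argmax(miss_ratio)]               # embedding of the worst program
        dist = np.linalg.norm(pts - worst, axis=1)       # distance to the worst embedding
        r, _ = pearsonr(dist, miss_ratio)                # the paper finds r < 0 on real data
        print(r)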

  6. Rusyak I.G., Nefedov D.G.
    Solution of the optimization problem of wood fuel facility location by the thermal energy cost criterion
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 651-659

    The paper presents a mathematical model for the optimal location of enterprises producing fuel from renewable wood waste for a regional distributed heat supply system. The optimization minimizes the total cost of the end product, the thermal energy from wood fuel. The method for solving the problem is based on a genetic algorithm. The paper also shows practical results of the model using the example of the Udmurt Republic.
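
    A minimal sketch of a genetic algorithm for this kind of facility location problem (the cost matrix, problem sizes, and GA operators below are illustrative assumptions, not the paper's model):

        import numpy as np

        rng = np.random.default_rng(1)
        n_sites, n_consumers, k = 12, 30, 3   # candidate sites, consumers, plants to open
        # cost[i, j]: cost of a unit of thermal energy delivered from site i to consumer j
        cost = rng.uniform(1.0, 10.0, size=(n_sites, n_consumers))

        def total_cost(sites):
            """Each consumer is served by the cheapest open facility."""
            return cost[list(sites)].min(axis=0).sum()

        def mutate(sites):
            """Swap one open site for a randomly chosen closed one."""
            s = list(sites)
            closed = [i for i in range(n_sites) if i not in sites]
            s[rng.integers(k)] = closed[rng.integers(len(closed))]
            return frozenset(s)

        population = [frozenset(rng.choice(n_sites, size=k, replace=False)) for _ in range(40)]
        for _ in range(200):
            population.sort(key=total_cost)   # elitist selection + mutation
            population = population[:20] + [mutate(p) for p in population[:20]]
        best = min(population, key=total_cost)
        print(sorted(best), total_cost(best))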

  7. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words, also called a “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed, having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix is impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on the fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, in one pass over the words of a document. To this end, an additional constraint is introduced into the model in the form of an equation that calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of the sparseness, difference, logLift, and coherence measures of topic quality. The open-source libraries BigARTM and TopicNet were used for the experiments.
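
    The paper introduces its own constraint equation for computing the document-topic matrix from the word-topic matrix; as a rough illustration of the general idea of a one-pass topical embedding, here is a common Bayes-rule estimate (not the paper's formula):

        import numpy as np

        def theta_one_pass(word_ids, word_counts, phi):
            """One pass over a document's words: average p(t|w) weighted by
            word counts; phi is the words-by-topics matrix of p(w|t)."""
            rows = phi[word_ids]                              # p(w|t) for the document's words
            p_t_w = rows / rows.sum(axis=1, keepdims=True)    # p(t|w) under a uniform topic prior
            return word_counts @ p_t_w / word_counts.sum()

        phi = np.array([[0.5, 0.0],
                        [0.4, 0.1],
                        [0.1, 0.9]])                          # 3 words, 2 topics
        theta = theta_one_pass(np.array([0, 2]), np.array([2.0, 1.0]), phi)
        print(theta, theta.sum())                             # a distribution over topics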

  8. Abramov V.S., Petrov M.N.
    Application of Dynamic Mode Decomposition to the search for unstable modes in the laminar-turbulent transition problem
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1069-1090

    Laminar-turbulent transition is the subject of active research related to improving the economic efficiency of air vehicles, because drag increases in the turbulent boundary layer, which leads to higher fuel consumption. One direction of such research is the search for efficient methods that can find the position of the transition in space. Using information about the laminar-turbulent transition location when designing an aircraft, engineers can predict its performance and profitability at the initial stages of the project. Traditionally, the $e^N$ method, a well-known approach in industry, is applied to find the coordinates of the laminar-turbulent transition. However, despite its widespread use, this method has a number of significant drawbacks: it relies on the parallel flow assumption, which limits the scenarios for its application, and it requires computationally expensive calculations over a wide range of frequencies and wave numbers. Alternatively, the flow can be analyzed with Dynamic Mode Decomposition, which examines flow disturbances using the flow data directly. Since Dynamic Mode Decomposition is a dimensionality reduction method, the number of computations can be reduced dramatically. Furthermore, Dynamic Mode Decomposition expands the applicability of the whole method, since its derivation makes no assumption of parallel flow.

    The presented study proposes an approach to finding the location of the laminar-turbulent transition using the Dynamic Mode Decomposition method. The essence of the approach is to divide the boundary layer region into sets of subregions, for each of which the transition point is calculated independently, using Dynamic Mode Decomposition for flow analysis, after which the results are averaged to produce the final answer. The approach is validated on laminar-turbulent transition predictions for subsonic and supersonic flows over a 2D flat plate with zero pressure gradient. The results demonstrate the fundamental applicability and high accuracy of the described method over a wide range of conditions. The study focuses on comparison with the $e^N$ method and demonstrates the advantages of the proposed approach. It is shown that Dynamic Mode Decomposition executes significantly faster owing to less intensive computations, while the accuracy is comparable to that of the solution obtained with the $e^N$ method. This indicates the prospects for using the described approach in real-world applications.
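
    A minimal sketch of the standard (exact) DMD algorithm on a snapshot matrix, the core operation the approach applies in each subregion; the truncation rank and the snapshot data here are illustrative stand-ins:

        import numpy as np

        def dmd(X, r):
            """Exact DMD: for snapshots X[:, k], fit the best linear operator
            X2 ~ A X1 through a rank-r truncated SVD and return its
            eigenvalues and modes; |eigenvalue| > 1 flags a growing mode."""
            X1, X2 = X[:, :-1], X[:, 1:]
            U, s, Vh = np.linalg.svd(X1, full_matrices=False)
            U, s, Vh = U[:, :r], s[:r], Vh[:r]
            Atilde = U.conj().T @ X2 @ Vh.conj().T / s
            eigvals, W = np.linalg.eig(Atilde)
            modes = (X2 @ Vh.conj().T / s) @ W
            return eigvals, modes

        X = np.random.default_rng(0).normal(size=(200, 50))  # stand-in snapshots
        eigvals, modes = dmd(X, r=10)
        print(np.abs(eigvals) > 1.0)                         # unstable-mode flags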

  9. Shatrov A.V., Okhapkin V.P.
    Optimal control of bank investment as a factor of economic stability
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 959-967

    This paper presents a model of replenishing bank liquidity from banks' additional income. A methodological basis is given for why banks need stabilization funds to cover losses during an economic crisis. An econometric derivation of the equations describing the bank's financial and operating activity is performed. In accordance with the purpose of creating a stabilization fund, an optimality criterion for the controls is introduced. Based on the equations of the bank's behavior, a vector of optimal controls is derived by the method of dynamic programming.
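
    A minimal sketch of a dynamic programming backward recursion on a scalar stand-in for the stabilization-fund problem (the grid, dynamics, and cost terms are illustrative assumptions; the paper derives a vector of optimal controls from its econometric equations):

        import numpy as np

        xs = np.linspace(0.0, 1.0, 101)     # stabilization-fund level (grid)
        us = np.linspace(0.0, 0.2, 21)      # share of additional income paid into the fund
        T, target = 12, 0.6                 # horizon and desired fund level

        V = np.zeros_like(xs)               # terminal value function
        policy = np.zeros((T, xs.size))
        for t in reversed(range(T)):
            Q = np.empty((xs.size, us.size))
            for j, u in enumerate(us):
                x_next = np.clip(0.95 * xs + u, 0.0, 1.0)    # fund decays, topped up by u
                stage = (xs - target) ** 2 + 0.5 * u ** 2    # deviation + control cost
                Q[:, j] = stage + np.interp(x_next, xs, V)   # Bellman backup
            policy[t] = us[np.argmin(Q, axis=1)]
            V = Q.min(axis=1)
        print(policy[0][xs.searchsorted(0.3)])  # optimal first-period control at x = 0.3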
