Search results for 'structural analysis':
Articles found: 97
  1. Betelin V.B., Galkin V.A.
    Mathematical and computational problems associated with the formation of structures in complex systems
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 805-815

    In this paper, the system of equations of magnetic hydrodynamics (MHD) is considered. The exact solutions found describe fluid flows in a porous medium; they are relevant to the development of a core simulator, to the creation of a domestic «digital deposit» technology, and to the problem of controlling the parameters of an incompressible fluid. The central problem associated with the use of computer technology is the reliance on large-dimensional grid approximations and on high-performance supercomputers with a large number of parallel microprocessors. Kinetic methods for solving differential equations and methods for «gluing» exact solutions on coarse grids are being developed as possible alternatives to large-dimensional grid approximations. A comparative analysis of the efficiency of computing systems leads to the conclusion that calculations should be organized on the basis of integer arithmetic combined with universal approximate methods. A class of exact solutions of the Navier – Stokes system is proposed, describing three-dimensional flows of an incompressible fluid, as well as exact solutions of nonstationary three-dimensional magnetic hydrodynamics. These solutions are important for practical problems of controlled dynamics of mineralized fluids, as well as for creating test libraries for the verification of approximate methods. A number of phenomena are highlighted that are associated with the formation of macroscopic structures due to the high intensity of interaction of elements of spatially homogeneous systems, as well as with their occurrence due to linear spatial transfer in spatially inhomogeneous systems. It is fundamental that the emergence of structures is a consequence of the discontinuity of operators in the norms of conservation laws. The most developed and universal is the theory of computational methods for linear problems. Therefore, from this point of view, the procedures of «immersion» of nonlinear problems into general linear classes by changing the initial dimension of the description and expanding the functional spaces are important. Identifying functional solutions with functions makes it possible to calculate integral averages of an unknown; at the same time its nonlinear superpositions are, generally speaking, not weak limits of nonlinear superpositions of the method's approximations, i.e. there are functional solutions that are not generalized solutions in the sense of S. L. Sobolev.
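    For reference, the incompressible Navier – Stokes system whose exact solutions the paper builds on can be written in the standard form below (the paper's specific solution classes are not reproduced here):

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\Delta\mathbf{u} + \mathbf{f},\\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

    where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\nu$ the kinematic viscosity, and $\mathbf{f}$ a body force. In the MHD setting considered by the authors, this system is additionally coupled with an induction equation for the magnetic field and a Lorentz-force term in the momentum balance.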

  2. Lukianchenko P.P., Danilov A.M., Bugaev A.S., Gorbunov E.I., Pashkov R.A., Ilyina P.G., Gadzhimirzayev Sh.M.
    Approach to Estimating the Dynamics of the Industry Consolidation Level
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 129-140

    In this article we propose a new approach to the analysis of econometric industry parameters for estimating the industry consolidation level. The research is based on a simple automatic control model of the industry. The state of the industry is measured by econometric parameters obtained quarterly from each of the industry's companies and provided by the tax regulator. We propose an approach that does not track the economy of each company but instead explores the parameters of the set of all companies as a whole. The quarterly econometric parameters are Income, Number of Employees, Taxes, and Income from Software Licenses. The ABC analysis method was extended to ABCD analysis (D — companies with zero-level impact on industry metrics) and used to make the results obtained for different indicators comparable. Pareto charts were formed for the set of econometric indicators.

    To estimate industry monopolization, the Herfindahl – Hirschman index (HHI) was calculated for the most sensitive company metrics. Using the HHI approach, it was shown that COVID-19 did not lead to changes in the monopolization of the Russian IT industry.
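    The Herfindahl – Hirschman index itself is standard: the sum of squared market shares. A minimal sketch (the shares below are made up for illustration, not the paper's tax-regulator data):

```python
def hhi(shares):
    """Herfindahl-Hirschman index from market shares (fractions summing to 1).

    Conventionally reported on a 0-10000 scale using percentage shares:
    values above ~2500 indicate a highly concentrated market."""
    return sum((100 * s) ** 2 for s in shares)

# Hypothetical industry: four firms with these revenue shares.
shares = [0.4, 0.3, 0.2, 0.1]
print(hhi(shares))  # 3000.0
```

    In the paper's setting, the shares would be computed per company from one of the quarterly metrics (e.g. Income) rather than supplied by hand.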

    As the most visually intuitive approach to industry visualization, scatter diagrams combined with the Pareto chart colors were proposed. The effect of the accreditation procedure is clearly visible in a scatter diagram with red/black dots for accredited and non-accredited companies respectively.

    The last reported result is the proposal to use the Licenses End-to-End Product Identification as a market structure control instrument. It provides a basis for avoiding multiple accounting of license reselling within the software distribution chain.

    The results of this research could serve as a basis for future IT industry analysis and simulation using an agent-based approach.

  3. Nayshtut Yu.S.
    On the boundaries of optimally designed elastoplastic structures
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 503-515

    This paper studies minimum-volume elastoplastic bodies. One part of the boundary of every body under review is fixed to the same points in space, while stresses are set on the remaining part of the boundary surface (the loaded surface). The shape of the loaded surface can change in space, but the limit load factor, calculated under the assumption that the bodies are filled with an elastoplastic medium, must not be less than a fixed value. In addition, all varying bodies are assumed to contain a certain limited-volume sample manifold inside them.

    The following problem is set: what is the maximum number of cavities (or holes, in the two-dimensional case) that a minimum-volume body (plate) can have under the above limitations? It is established that, to define a mathematically correct problem, two extra conditions have to be met: the areas of the holes must be greater than a small constant, while the total length of the internal hole contour lines within the optimum figure must be minimal among the varying bodies. Thus, unlike most articles on the optimum design of elastoplastic structures, where parametric analysis of acceptable solutions is performed with a fixed topology, this paper seeks the topological connectivity parameter of the design.

    The paper covers the case when the limit load factor for the sample manifold is quite large, while the areas of acceptable holes in the varying plates are greater than a small constant. Arguments are brought forward that prove the Maxwell and Michell beam systems to be the optimum figures under these conditions. As an example, microphotographs of standard biological bone tissue are presented. It is demonstrated that internal holes with large areas cannot be part of a Michell system. At the same time, a Maxwell beam system can include holes with significant areas. Sufficient conditions are given for hole formation within a solid plate of optimum volume. The results permit generalization to three-dimensional elastoplastic structures.

    The paper concludes by setting out the mathematical problems arising from the new problem of optimally designed elastoplastic systems.

  4. Uchmanski J.Z.
    On algorithmic essence of biology
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 641-652

    Mathematicity of physics is surprising, but it enables us to understand the laws of nature through the analysis of the mathematical structures describing them. This concerns, however, only physics. The degree of mathematization of biology is low, and attempts to mathematize it are limited to the application of mathematical methods used for the description of physical systems. In doing so, we are likely to commit the error of attributing to biological systems features that they do not have. Some argue that biology needs new mathematical methods conforming to its own needs, not those known from physics. However, because of the specific complexity of biological systems, we should speak of their algorithmicity rather than of their mathematicity. As examples of the algorithmic approach one can point to the so-called individual-based models used in ecology to describe population dynamics, or to fractal models applied to describe the geometrical complexity of such biological structures as trees.

  5. Giricheva E.E.
    Analysis of taxis-driven instability of a predator–prey system through the plankton community model
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 185-199

    The paper deals with a prey–predator model describing the spatiotemporal dynamics of a plankton community and nutrients. The system is described by reaction–diffusion–advection equations in a one-dimensional vertical water column in the surface layer. The advective term of the predator equation represents the vertical movements of zooplankton with a velocity assumed to be proportional to the gradient of phytoplankton density. This study aims to determine the conditions under which these movements (taxis) lead to spatially heterogeneous structures generated by the system. Assuming the diffusion coefficients of all model components to be equal, the instability of the system in the vicinity of the stationary homogeneous state with respect to small inhomogeneous perturbations is analyzed.
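    A generic one-dimensional form of such reaction–diffusion–advection equations (the paper's specific kinetics $f$, $g$ and its nutrient equation are not reproduced; $P$ is phytoplankton, $Z$ is zooplankton, $z$ is depth) is:

```latex
\begin{aligned}
\frac{\partial P}{\partial t} &= f(P,Z) + D\,\frac{\partial^2 P}{\partial z^2},\\
\frac{\partial Z}{\partial t} &= g(P,Z) + D\,\frac{\partial^2 Z}{\partial z^2}
  - \frac{\partial}{\partial z}\!\left(\kappa\,\frac{\partial P}{\partial z}\,Z\right),
\end{aligned}
```

    where the advective velocity $\kappa\,\partial P/\partial z$ models zooplankton taxis up the phytoplankton gradient, and the common diffusion coefficient $D$ reflects the equal-diffusivity assumption above.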

    Necessary conditions for the flow-induced instability were obtained through linear stability analysis. Depending on the local kinetics parameters, increasing the taxis rate leads to Turing or wave instability. This fact is in good agreement with conditions for the emergence of spatial and spatiotemporal patterns in a minimal phytoplankton–zooplankton model after flow-induced instabilities derived by other authors. This mechanism of generating patchiness is more general than the Turing mechanism, which depends on strong conditions on the diffusion coefficients.

    Once the taxis rate exceeds a certain critical value, the wave number corresponding to the fastest-growing mode remains unchanged. This value determines the type of spatial structure. In support of the obtained results, the paper presents the spatiotemporal dynamics of the model components, demonstrating a Turing-type pattern and a standing-wave pattern.

  6. Stepanyan I.V.
    Biomathematical system of the nucleic acids description
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434

    The article is devoted to the application of various methods of mathematical analysis to searching for patterns and studying the composition of nucleotides in DNA sequences at the genomic level. New methods of mathematical biology are described that made it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms. The research is based on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the determination of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of “focusing” and allows genetic structures to be explored at various scales. Visualization includes the selection of the axes of the coordinate system and the method of visual display. The algorithms presented in this work are put forward as a new toolkit for developing research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms and identifying interspecific relationships.

    The new concept allows the variability of the physicochemical parameters of nucleotide sequences to be assessed both visually and numerically. It also makes it possible to substantiate the relationship between the parameters of DNA and RNA molecules and fractal geometric mosaics, and it reveals the ordering and symmetry of polynucleotides as well as their noise immunity. The results obtained justified the introduction of new terms: “genometry” as a methodology of computational strategies and “genometrica” as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
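    The parameterization stage can be illustrated with a minimal sketch that maps each nucleotide to standard binary structural/physicochemical traits (purine vs. pyrimidine, strong vs. weak pairing, amino vs. keto); the paper's actual parameter set, scaling, and visualization are not reproduced here:

```python
# Standard biochemical traits of the four DNA nucleotides, encoded as 0/1.
TRAITS = {
    #        purine  strong (3 H-bonds)  amino
    "A": (1, 0, 1),
    "G": (1, 1, 0),
    "C": (0, 1, 1),
    "T": (0, 0, 0),
}

def parameterize(seq):
    """Turn a DNA string into a list of trait tuples, skipping unknown symbols."""
    return [TRAITS[n] for n in seq.upper() if n in TRAITS]

def trait_fractions(seq):
    """Fractions of purine, strong (G/C), and amino (A/C) bases in a sequence."""
    vecs = parameterize(seq)
    return tuple(sum(col) / len(vecs) for col in zip(*vecs))

print(trait_fractions("ATGCC"))  # (0.4, 0.6, 0.6)
```

    Computed over sliding windows at different window sizes, such fractions give a crude analogue of the “scaling” step; plotting them against position would correspond to the visualization step.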

  7. Okhapkina E.P., Okhapkin V.P.
    Approaches to a social network groups clustering
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1127-1139

    The research is devoted to the problem of the use of social networks as a tool of illegal activity and as a source of information that could be dangerous to society. The article presents the structure of a multi-agent system with which social network groups can be clustered according to criteria that uniquely identify a group as destructive. The clustering algorithm of the system's agents is described.
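    The clustering step can be illustrated with a plain k-means sketch (the paper's multi-agent architecture and its destructiveness criteria are not reproduced; the feature names and data below are invented for illustration):

```python
from math import dist

def kmeans(points, k, iters=50):
    """Plain k-means on tuples of floats; naive seeding is fine for a sketch."""
    centroids = points[:k]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # Recompute centroids as per-coordinate means (keep old one if empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Each group as (posting_rate, share_of_flagged_keywords) -- hypothetical features.
groups = [(0.1, 0.0), (0.2, 0.1), (5.0, 0.9), (4.5, 0.8)]
centroids, clusters = kmeans(groups, k=2)
print(sorted(len(c) for c in clusters))  # [2, 2]
```

    In the paper's setting, the feature vectors would be produced by the system's agents from group content rather than supplied by hand.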

  8. Lopatin N.V., Kydrjavtsev E.A., Panin P.V., Vidumkina S.V.
    Simulation of forming of UFG Ti-6-4 alloy at low temperature of superplasticity
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 127-133

    Superplastic forming of Ni- and Ti-based alloys is widely used in the aerospace industry. The main advantage of using the superplasticity effect in sheet metal forming is the feasibility of forming materials to a high amount of plastic strain under conditions of prevailing tensile stresses. This article studies the application of the commercial FEM software SFTC DEFORM to predicting thickness deviation during low-temperature superplastic forming of UFG Ti-6-4 alloy. Experimentally, thickness deviation during superplastic forming is observed in the local area of plastic deformation; the process is aggravated by local softening of the metal caused by microstructure coarsening. A theoretical model was prepared to analyze the experimentally observed metal flow, using two approaches. The first uses the creep rheology model integrated in DEFORM. Since the superplastic effect is observed only in materials with fine and ultrafine grain sizes, the second approach uses custom user procedures for a rheology model based on microstructure evolution equations. These equations were implemented in DEFORM via user Fortran solver subroutines. Using FEM simulation for this type of forming allows the strain rate to be tracked in different parts of a workpiece during the process, which is crucial for maintaining superplastic conditions. Comparison of these approaches allows conclusions to be drawn about the effect of microstructure evolution on metal flow during superplastic deformation. The results of the FEM analysis and the theoretical conclusions were confirmed by the results of the conducted Erichsen test. The main conclusions of this study are as follows: a) the DEFORM software allows an engineer to predict the forming of the metal shape under conditions of low-temperature superplasticity; b) to improve the accuracy of the prediction of local deformations, the effect of the microstructure state of an alloy with a sub-microcrystalline structure should be taken into account in the course of calculations in the DEFORM software.

  9. Kovalenko I.B., Dreval V.D., Fedorov V.A., Kholina E.G., Gudimchuk N.B.
    Microtubule protofilament bending characterization
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 435-443

    This work is devoted to the analysis of conformational changes in tubulin dimers and tetramers, in particular, the assessment of the bending of microtubule protofilaments. Three recently used approaches for estimating the bend of tubulin protofilaments are reviewed: (1) measurement of the angle between the vector passing through the H7 helices in the $\alpha$ and $\beta$ tubulin monomers in the straight structure and the same vector in the curved structure of tubulin; (2) measurement of the angle between the vector connecting the centers of mass of the subunit and the associated GTP nucleotide, and the vector connecting the centers of mass of the same nucleotide and the adjacent tubulin subunit; (3) measurement of the three rotation angles of the bent tubulin subunit relative to the straight subunit. Quantitative estimates of the angles at the intra- and inter-dimer interfaces of tubulin in published crystal structures, calculated in accordance with the three metrics, are presented. The intra-dimer angles of tubulin in one structure measured by method (3), as well as the intra-dimer angles in different structures measured by this method, were more similar to each other, which indicates a lower sensitivity of the method to local changes in tubulin conformation and characterizes it as more robust. Measuring the angle of curvature between the H7 helices (method 1) produces somewhat underestimated values of the curvature per dimer. Method (2), while at first glance generating bending angle values consistent with the estimates of curved protofilaments from cryo-electron microscopy, significantly overestimates the angles in the straight structures.

    For the structures of tubulin tetramers in complex with the stathmin protein, the bending angles calculated with all three metrics varied quite significantly between the first and second dimers (by up to 20% or more), which indicates the sensitivity of all the metrics to slight variations in the conformation of tubulin dimers within these complexes. A detailed description of the procedures for measuring the bending of tubulin protofilaments, together with an identification of the advantages and disadvantages of the various metrics, will increase the reproducibility and clarity of the analysis of tubulin structures in the future and will, hopefully, make it easier to compare the results obtained by different scientific groups.
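    All three metrics ultimately reduce to angles between vectors fixed in the tubulin structures. A minimal sketch of that common core (the coordinates here are invented, not taken from real PDB structures):

```python
import math

def angle_deg(u, v):
    """Angle between two 3D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp the cosine to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# E.g., a vector through the H7 helix in a straight vs. a curved structure.
straight = (1.0, 0.0, 0.0)
curved = (1.0, 1.0, 0.0)
print(round(angle_deg(straight, curved), 1))  # 45.0
```

    In practice the input vectors would be built from atomic coordinates (helix axes, centers of mass), which is exactly where the three metrics differ.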

  10. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise during natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in NL symbol sequences.
Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
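    One building block mentioned above, scoring word forms so that content-bearing terms stand out, can be sketched with a plain TF-IDF computation (toy corpus; real pipelines add tokenization, lemmatization, and stop-word handling):

```python
import math
from collections import Counter

docs = [
    "stock prices rise on strong earnings",
    "earnings report sends stock higher",
    "weather forecast predicts rain",
]

def tf_idf(docs):
    """Per-document word scores: term frequency times inverse document frequency."""
    tokenized = [d.split() for d in docs]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append({w: (c / len(toks)) * math.log(n / df[w]) for w, c in tf.items()})
    return scores

scores = tf_idf(docs)
# "earnings" occurs in 2 of 3 documents, "weather" in only 1,
# so "weather" gets the higher content score.
print(scores[2]["weather"] > scores[0]["earnings"])  # True
```

    In the stock-trading example from the survey, such scores could feed the document classifiers or the sentiment model that produce the analytical recommendations.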


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index
