-
Synthesis of the structure of organised systems as central problem of evolutionary cybernetics
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124
The article presents approaches to the evolutionary modelling of the synthesis of organised systems and analyses the methodological problems of evolutionary computations of this kind. Based on an analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is provided by the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for the machine synthesis of problem-solving algorithms. The modelling results lead us to conclude that a number of conditions fundamentally limit the applicability of genetic programming methods to the synthesis of functional structures. The main limitations are the need for the fitness function to track a step-by-step approach to the solution of the problem and the inapplicability of this approach to the synthesis of hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling over the whole time of its existence confirm the conclusion that the possibilities of genetic programming are fundamentally limited in solving problems of synthesising the structure of organised systems. As sources of fundamental difficulties for the machine synthesis of system structures, the article points to the absence of gradient-descent directions in structural synthesis and the absence of regularity in the random appearance of new organised structures. The problems considered are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of practically possible ways of synthesising the structure of organised systems. As a theoretical interpretation of the problem, we propose to consider the system-evolutionary concept of P. K. Anokhin. The synthesis of functional structures in this context is an adaptive response of organisms to external conditions based on their capacity for the integrative synthesis of memory, needs and information about current conditions. The results of recent studies support this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.
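A minimal mutation-selection sketch (not the authors' code; the target, fitness functions and parameters are illustrative assumptions) of the limitation discussed above: evolutionary search only works when the fitness function rewards every intermediate step, and degenerates into random sampling when it rewards only the complete structure.

```python
# Illustrative sketch: evolutionary search with a stepwise fitness vs. an all-or-nothing fitness.
import random

TARGET = [1] * 32  # the "structure" to be synthesised, encoded as a bit string

def stepwise_fitness(genome):
    # rewards every partial match: gives the search a gradient to follow
    return sum(g == t for g, t in zip(genome, TARGET))

def all_or_nothing_fitness(genome):
    # rewards only the complete solution: no gradient, the search reduces to random sampling
    return 1 if genome == TARGET else 0

def evolve(fitness, generations=200, pop_size=50, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the better half (elitism)
        children = []
        for _ in range(pop_size - len(parents)):
            p = random.choice(parents)
            children.append([1 - g if random.random() < mutation_rate else g for g in p])
        population = parents + children
    return fitness(max(population, key=fitness))

if __name__ == "__main__":
    random.seed(0)
    print("stepwise fitness, best score:", evolve(stepwise_fitness))            # typically reaches 32
    print("all-or-nothing fitness, best score:", evolve(all_or_nothing_fitness))  # almost surely 0
```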
-
Conversion of the initial indices of the technological process of the smelting of steel for the subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
The efficiency of production depends directly on the quality of process control, which in turn relies on the accuracy and efficiency of processing control and measurement information. Developing mathematical methods for studying system relations and regularities of functioning, building mathematical models that take the structural features of the object of study into account, and writing software that implements these methods are therefore relevant tasks. Practice shows that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is initially unclear. Proceeding directly to the determination of a model under these circumstances is impossible: the amount of required information may be too large, and most of the work on collecting it would be wasted, because the influence of the majority of factors in the original list on the optimization would turn out to be negligible. A necessary step in determining a model of a complex object is therefore to reduce the dimension of the factor space. Most industrial plants involve hierarchical group processes of mass and large-scale production characterized by hundreds of factors. (Data from the Moldavian Steel Works were used to implement the mathematical methods and to test the constructed models.) To investigate the system relations and patterns of functioning of such complex objects, several informative parameters are usually chosen and sampled. This article describes the sequence of transformations that bring the initial indices of the steel-smelting process to a form suitable for building a mathematical model for prediction purposes. The introduction of new product types also laid the basis for developing an automated product quality management system. The following stages are distinguished: collection and analysis of the basic data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight factors. The results obtained make it possible to optimize the process of building a model of a multifactor process.
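A minimal sketch (assumed names and thresholds, not the authors' code) of the factor-space reduction step via correlation pleiades: strongly correlated parameters are grouped together and one representative is kept per group.

```python
import numpy as np

def correlation_pleiades(X, names, threshold=0.8):
    """X: (n_observations, n_factors) array; names: factor names."""
    corr = np.corrcoef(X, rowvar=False)              # pairwise correlation matrix of the factors
    unassigned = set(range(len(names)))
    pleiades = []
    while unassigned:
        seed = unassigned.pop()
        group = {seed}
        # collect every still-unassigned factor strongly correlated with the seed
        for j in list(unassigned):
            if abs(corr[seed, j]) >= threshold:
                group.add(j)
                unassigned.discard(j)
        pleiades.append(sorted(group))
    # representative of each pleiad: the factor with the largest mean |corr| inside the group
    representatives = []
    for group in pleiades:
        sub = np.abs(corr[np.ix_(group, group)])
        representatives.append(names[group[int(np.argmax(sub.mean(axis=1)))]])
    return pleiades, representatives

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=(200, 3))
    # six measured factors, some of which duplicate the same underlying quantity
    X = np.column_stack([base[:, 0], base[:, 0] + 0.05 * rng.normal(size=200),
                         base[:, 1], base[:, 1] + 0.05 * rng.normal(size=200),
                         base[:, 2], rng.normal(size=200)])
    names = ["temp", "temp_dup", "carbon", "carbon_dup", "oxygen", "noise"]
    groups, reps = correlation_pleiades(X, names)
    print(groups)   # e.g. [[0, 1], [2, 3], [4], [5]]
    print(reps)
```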
-
Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630
A hierarchical method of mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The developed concept of hierarchical structuring reflects both the constructive hierarchy of a complex electronic system and the hierarchy of mathematical models of heat exchange processes. Thermal processes that take into account various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady, nonlinear partial differential equations, and their computer simulation therefore encounters considerable computational difficulties even on supercomputers. The hierarchical method avoids these difficulties. The hierarchical structure of the electronic system design is in general characterized by five levels: level 1 — the active elements of the electronic system (microcircuits, electro-radio elements); level 2 — the electronic module; level 3 — a panel combining a set of electronic modules; level 4 — a block of panels; level 5 — a stand installed in a stationary or mobile room. The hierarchy of models and the modeling of stochastic thermal processes are constructed in the reverse order of the hierarchical structure of the electronic system design, and the interval-stochastic thermal processes are modeled by deriving equations for statistical measures. The hierarchical method developed in the article makes it possible to take into account the principal features of thermal processes, such as the stochastic nature of thermal, electrical and design factors in the production, assembly and installation of electronic systems, the stochastic scatter of operating conditions and the environment, nonlinear temperature dependences of heat exchange factors, and the unsteady nature of thermal processes. The equations obtained in the article for the statistical measures of stochastic thermal processes form a system of 14 non-stationary nonlinear first-order ordinary differential equations, whose solution is easily implemented on modern computers by existing numerical methods. The results of applying the method to the computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method is applied in practice for the thermal design of real electronic systems and the creation of modern competitive devices.
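A minimal sketch of the final computational step described above: integrating a small system of non-stationary nonlinear ODEs for statistical measures (here, the mean and variance of two node temperatures). The right-hand side is a placeholder assumption; the article's actual 14 equations are not reproduced in the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp

T_AMB, SIGMA_Q = 300.0, 4.0   # assumed ambient temperature [K] and heat-source scatter

def statistical_measures_rhs(t, y):
    m1, m2, v1, v2 = y                        # means and variances of two node temperatures
    k12 = 0.8 + 1e-3 * (m1 - T_AMB)           # nonlinear (temperature-dependent) conductance
    dm1 = 10.0 - k12 * (m1 - m2) - 0.5 * (m1 - T_AMB)
    dm2 = k12 * (m1 - m2) - 0.7 * (m2 - T_AMB)
    dv1 = SIGMA_Q**2 - 2.0 * (k12 + 0.5) * v1     # variance driven by stochastic heat sources
    dv2 = 2.0 * k12 * v1 - 2.0 * 0.7 * v2
    return [dm1, dm2, dv1, dv2]

sol = solve_ivp(statistical_measures_rhs, (0.0, 100.0), [T_AMB, T_AMB, 0.0, 0.0],
                dense_output=True)
print("final mean temperatures:", sol.y[0, -1], sol.y[1, -1])
print("final temperature variances:", sol.y[2, -1], sol.y[3, -1])
```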
-
Game-theoretic model of coordination of interests in the innovative development of corporations
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 673-684
Dynamic game-theoretic models of corporate innovative development are investigated. The proposed models are based on the concordance of private and public interests of agents. It is supposed that the structure of interests of each agent includes both private (personal) and public (interests of the whole company, connected first of all with its innovative development) components. The agents allocate their personal resources between these two directions. The system dynamics is described by a difference (not differential) equation. The proposed model of innovative development is studied by simulation and by enumeration of the domains of feasible controls with a constant step. The main contribution of the paper is a comparative analysis of the efficiency of the methods of hierarchical control (compulsion or impulsion) for the information structures of Stackelberg or Germeier (four structures) by means of indices of system compatibility. The proposed model is universal and can be used for scientifically grounded support of the programs of innovative development of any economic firm. The features of a specific company are taken into account during model identification (determination of the specific classes of model functions and the numerical values of their parameters), which forms a separate complex problem and requires an analysis of statistical data and expert estimates. The following assumptions about the information rules of the hierarchical game are accepted: all players use open-loop strategies; the leader chooses and reports to the followers values of administrative (compulsion) or economic (impulsion) control variables, which can be functions of time only (Stackelberg games) or can also depend on the followers' controls (Germeier games); given the leader's strategies, all followers simultaneously and independently choose their strategies, which gives a Nash equilibrium in the followers' game. In a finite number of iterations the proposed simulation algorithm builds an approximate solution of the model or concludes that it does not exist. The reliability and efficiency of the proposed algorithm follow from the properties of the scenario method and the method of direct ordered enumeration with a constant step. Comprehensive conclusions about the comparative efficiency of the methods of hierarchical control of innovations are obtained.
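A minimal sketch (illustrative payoff functions and parameters, not the authors' model) of a Stackelberg-type computation of the kind described: the leader's economic control (impulsion) is enumerated over a grid with a constant step, and for each value the followers' Nash equilibrium is approximated by best-response iteration.

```python
import numpy as np

def follower_payoff(x_i, x_other, q):
    # a follower splits effort x_i in [0, 1] between private and public (innovation) activity;
    # q is the leader's economic control (impulsion), e.g. a bonus rate for public effort
    private = (1.0 - x_i) * 1.0
    public = q * x_i * (1.0 + 0.3 * x_other)      # complementarity with the other follower
    return private + public

def followers_nash(q, iters=100):
    x = np.array([0.5, 0.5])
    grid = np.linspace(0.0, 1.0, 101)
    for _ in range(iters):                         # best-response iteration
        for i in range(2):
            payoffs = [follower_payoff(g, x[1 - i], q) for g in grid]
            x[i] = grid[int(np.argmax(payoffs))]
    return x

def leader_payoff(q, x):
    innovation = x.sum()                           # total public (innovation) effort
    return 2.0 * innovation - q * innovation       # value of innovation minus the cost of impulsion

best = max(((q, followers_nash(q)) for q in np.arange(0.0, 2.0001, 0.05)),
           key=lambda item: leader_payoff(*item))
print("optimal impulsion q = %.2f, followers' efforts = %s" % (best[0], best[1]))
```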
-
Relation between performance of organization and its structure during sudden and smoldering crises
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 685-706
The article describes a mathematical model that simulates the performance of a hierarchical organization during the early stage of a crisis. A distinguishing feature of this stage is the presence of so-called early warning signals containing information on the approaching event. Employees are capable of catching the early warnings and of preparing the organization for the crisis based on the signals' meaning. The efficiency of the preparation depends on both the parameters of the organization and the parameters of the crisis. The proposed agent-based simulation model is implemented in the Java programming language and is used for conducting experiments by the Monte Carlo method. The goal of the experiments is to compare how centralized and decentralized organizational structures perform during sudden and smoldering crises. By centralized organizations we mean structures with a high number of hierarchy levels and a low number of direct reports per manager, while decentralized organizations are structures with a low number of hierarchy levels and a high number of direct reports per manager. Sudden crises are distinguished by a short early stage and a low number of warning signals, while smoldering crises have a long-lasting early stage and a high number of warning signals that do not necessarily contain important information. The efficiency of organizational performance during the early stage of a crisis is measured by two parameters: the percentage of early warnings acted upon in order to prepare the organization for the crisis, and the time spent by the top manager on working with early warnings. As a result, we show that during the early stage of smoldering crises centralized organizations process signals more efficiently than decentralized ones, while decentralized organizations handle early warning signals more efficiently during the early stage of sudden crises. However, the workload of top managers during sudden crises is higher in decentralized organizations, and during smoldering crises it is higher in centralized organizations. Thus, neither of the two classes of organizational structures is more efficient by both parameters simultaneously. Finally, we conduct a sensitivity analysis to verify the obtained results.
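A toy sketch (illustrative rules and parameters; the article's own model is implemented in Java and is richer) of how the structural parameters enter such a simulation: each warning signal needs one period per hierarchy level to reach the top manager, and only signals arriving before the crisis breaks out count as acted upon. It does not reproduce the article's results; it only shows how crisis type and structure interact.

```python
import random

def simulate(levels, early_stage_length, n_signals, useful_share=0.5, seed=0):
    random.seed(seed)
    acted_upon, top_manager_time = 0, 0
    for _ in range(n_signals):
        appears_at = random.randrange(early_stage_length)   # period when the signal appears
        arrives_at = appears_at + levels                     # escalation takes one period per level
        if arrives_at < early_stage_length:                  # arrived before the crisis broke out
            top_manager_time += 1                            # the top manager spends time on it
            if random.random() < useful_share:               # not every signal is informative
                acted_upon += 1
    return acted_upon / n_signals, top_manager_time

# sudden crisis: short early stage, few signals; smoldering crisis: long early stage, many signals
for crisis, length, signals in [("sudden", 6, 10), ("smoldering", 60, 200)]:
    for structure, levels in [("centralized", 5), ("decentralized", 2)]:
        share, load = simulate(levels, length, signals)
        print(f"{crisis:10s} {structure:13s}: share acted upon = {share:.2f}, top-manager load = {load}")
```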
-
Modeling the behavior preceding a market crash in a hierarchically organized financial market
Computer Research and Modeling, 2011, v. 3, no. 2, pp. 215-222
We consider the hierarchical model of financial crashes introduced by A. Johansen and D. Sornette, which reproduces the log-periodic power-law behavior of the price before the critical point. To generalize this model, we introduce a dependence of the influence exponent on the ultrametric distance between agents. Much attention is paid to the problem of the universality of the critical point, which is investigated by comparing the probability density functions of the crash times for systems with various total numbers of agents.
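For reference, the log-periodic power law mentioned above, in the parametrization commonly cited for the Johansen–Sornette model (standard literature form, not necessarily the exact notation of this article):

```latex
% Pre-crash (log-)price before the critical time t_c;
% A, B, C, \beta, \omega, \phi are fitted parameters.
\ln p(t) \approx A + B\,(t_c - t)^{\beta}
          \left[ 1 + C \cos\bigl(\omega \ln(t_c - t) + \phi\bigr) \right],
\qquad t < t_c,\quad 0 < \beta < 1 .
```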
-
Biomathematical system of the nucleic acids description
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434
The article is devoted to the application of various methods of mathematical analysis, the search for patterns, and the study of the nucleotide composition of DNA sequences at the genomic level. New methods of mathematical biology that made it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research was based on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of the new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the determination of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of “focusing” and allows genetic structures to be explored at various scales. Visualization includes the selection of the axes of the coordinate system and the method of visual display. The algorithms presented in this work are put forward as a new toolkit for the development of research software for the analysis of long nucleotide sequences with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms and identifying interspecific relationships. The new concept allows the variability of the physicochemical parameters of nucleotide sequences to be assessed visually and numerically. It also makes it possible to substantiate the relationship between the parameters of DNA and RNA molecules and fractal geometric mosaics, and reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: “genometry” as a methodology of computational strategies and “genometrica” as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
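A minimal sketch (illustrative parameters, not the article's algorithms) of the parameterization and scaling steps: each nucleotide is mapped to a numeric physicochemical feature, and the resulting signal is averaged over windows of increasing size.

```python
import numpy as np

# assumed parameterizations: hydrogen-bond count and purine/pyrimidine indicator
HYDROGEN_BONDS = {"A": 2, "T": 2, "U": 2, "G": 3, "C": 3}
PURINE = {"A": 1, "G": 1, "C": 0, "T": 0, "U": 0}

def parameterize(sequence, table):
    return np.array([table[n] for n in sequence.upper() if n in table], dtype=float)

def rescale(signal, window):
    """Average the signal over non-overlapping windows ("focusing" at a coarser scale)."""
    usable = len(signal) - len(signal) % window
    return signal[:usable].reshape(-1, window).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    sequence = "".join(rng.choice(list("ACGT"), size=4096))   # stand-in for a real genome fragment
    signal = parameterize(sequence, HYDROGEN_BONDS)
    for window in (1, 16, 256):
        coarse = rescale(signal, window)
        print(f"window {window:4d}: {len(coarse)} points, mean = {coarse.mean():.3f}")
```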
-
Software complex for numerical modeling of multibody system dynamics
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 161-174
This work deals with the numerical modeling of the motion of multibody systems consisting of rigid bodies with arbitrary masses and inertial properties. We consider both planar and spatial systems, which may contain kinematic loops.
The numerical modeling is fully automatic, and its computational algorithm consists of three principal steps. In step one, a graph of the considered mechanical system is formed from the user's input data. This graph represents the hierarchical structure of the mechanical system. In step two, the differential-algebraic equations of motion of the system are derived using the so-called Joint Coordinate Method. This method makes it possible to minimize redundancy, reduce the number of equations of motion, and thus optimize the calculations. In step three, the equations of motion are integrated numerically and the resulting laws of motion are presented through the user interface or as files.
The aforementioned algorithm is implemented in the software complex that contains a computer algebra system, a graph library, a mechanical solver, a library of numerical methods and a user interface.
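A minimal sketch (in Python, with an assumed input format; the software complex's actual data structures are not described in the abstract) of step one of the algorithm: building the graph of the mechanical system from user input and detecting kinematic loops as the joints that do not fit into a spanning tree rooted at the ground body.

```python
from collections import defaultdict

def build_graph(joints):
    """joints: list of (joint_name, body_a, body_b) tuples from the user's input."""
    adjacency = defaultdict(list)
    for name, a, b in joints:
        adjacency[a].append((name, b))
        adjacency[b].append((name, a))
    return adjacency

def spanning_tree_and_loops(adjacency, root="ground"):
    visited, tree_joints, loop_joints = {root}, [], []
    seen_joints, stack = set(), [root]
    while stack:
        body = stack.pop()
        for joint, neighbour in adjacency[body]:
            if joint in seen_joints:
                continue
            seen_joints.add(joint)
            if neighbour in visited:
                loop_joints.append(joint)        # closes a kinematic loop -> constraint equations
            else:
                visited.add(neighbour)
                tree_joints.append(joint)        # tree joint -> a joint coordinate
                stack.append(neighbour)
    return tree_joints, loop_joints

if __name__ == "__main__":
    # a four-bar linkage: three moving links plus the ground, four revolute joints
    joints = [("j1", "ground", "crank"), ("j2", "crank", "coupler"),
              ("j3", "coupler", "rocker"), ("j4", "rocker", "ground")]
    tree, loops = spanning_tree_and_loops(build_graph(joints))
    print("tree joints (joint coordinates):", tree)
    print("loop-closing joints (constraints):", loops)
```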
-
Semantic structuring of text documents based on patterns of natural language entities
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197
The technology of creating patterns from natural language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space in the description of documents and to search for words semantically related by topic. The dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.
It is assumed that the given set of documents (objects) is divided into two non-overlapping classes, the analysis of which requires a common dictionary. Which words belong to the common dictionary is not known in advance. The objects of the two classes are treated as being in opposition to each other. Quantitative parameters of this oppositionality are determined through the stability values of each feature and generalized assessments of the objects over non-overlapping sets of features.
To calculate the stability, the feature values are divided into non-intersecting intervals whose optimal boundaries are determined by a special criterion. Maximum stability is achieved when each interval contains values of only one of the two classes.
The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. The process of formation of patterns and latent features based on them is implemented according to the rules of hierarchical agglomerative grouping.
The set of latent features is used for cluster analysis of the documents by metric grouping algorithms. The analysis uses the coefficient of content authenticity, based on data on the membership of documents in classes. This coefficient is a numerical characteristic of the dominance of class representatives in the groups.
To divide documents into topics, it is proposed to use the union of groups in relation to their centers. As patterns for each topic, a sequence of words ordered by frequency of occurrence from a common dictionary is considered.
The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the general dictionary were formed for four topics.
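A minimal sketch (the criteria and thresholds are illustrative, not the article's exact definitions): a stability-like score for each word feature is estimated from how well a single split of its frequency values separates the two classes, and the most stable words are then grouped into latent "patterns" by hierarchical agglomerative grouping.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def stability(values, labels):
    """Best achievable two-interval purity for one word's frequency values."""
    best = 0.0
    for threshold in np.unique(values):
        left, right = labels[values <= threshold], labels[values > threshold]
        purity = 0.0
        for part in (left, right):
            if len(part):
                purity += max(np.mean(part == 0), np.mean(part == 1)) * len(part)
        best = max(best, purity / len(labels))
    return best

rng = np.random.default_rng(0)
n_docs, n_words = 60, 20
labels = np.array([0] * 30 + [1] * 30)
X = rng.poisson(1.0, size=(n_docs, n_words)).astype(float)   # bag-of-words counts
X[labels == 1, :5] += 3.0                                     # first five words are class-specific

scores = np.array([stability(X[:, j], labels) for j in range(n_words)])
top = np.argsort(scores)[::-1][:8]                            # words ordered by stability

# agglomerative grouping of the selected words by the similarity of their document profiles
Z = linkage(X[:, top].T, method="average", metric="correlation")
patterns = fcluster(Z, t=3, criterion="maxclust")
for k in range(1, patterns.max() + 1):
    print("pattern", k, "-> word indices:", [int(top[i]) for i in np.where(patterns == k)[0]])
```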
-
Assessing the validity of clustering of panel data by Monte Carlo methods (using as example the data of the Russian regional economy)
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513
The paper considers a method for studying panel data based on agglomerative hierarchical clustering — grouping objects, based on the similarities and differences in their features, into a hierarchy of nested clusters. We used two alternative methods for calculating Euclidean distances between objects — the distance between values averaged over the observation interval, and the distance using the data for all considered years. Three alternative methods for calculating the distances between clusters were compared. In the first case, the distance between clusters is taken to be the distance between their nearest elements, in the second — the average over pairs of elements, in the third — the distance between the most distant elements. The efficiency of two clustering quality indices, the Dunn index and the Silhouette index, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The method of assessing the statistical reliability of a cluster structure consisted in comparing the quality of clustering on the real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features and lengths of time series. Generation was made from a fixed probability distribution; simulation methods imitating Gaussian white noise and random walk were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample turned out to be greater than the value of the 95% quantile for artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the sample of real data. For these data, only the Silhouette index shows reliable clustering at the level p < 0.05. The calculations also showed that the index values for the real data are generally closer to the values for random walks than for white noise, but differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually. Visually, one can distinguish clusters of points located close to each other, which are also distinguished as clusters by the applied hierarchical clustering algorithm.
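A minimal sketch (synthetic stand-in data; standard scipy/scikit-learn calls rather than the authors' code) of the validation procedure described above: the Silhouette index of the clustering of the real panel is compared with its 95% quantile over random-walk surrogates of the same shape.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

def silhouette_of_hierarchical(X, n_clusters, method="complete"):
    labels = fcluster(linkage(X, method=method, metric="euclidean"),
                      t=n_clusters, criterion="maxclust")
    return silhouette_score(X, labels)

def random_walk_panel(n_objects, n_years, rng):
    return np.cumsum(rng.normal(size=(n_objects, n_years)), axis=1)

rng = np.random.default_rng(0)
n_objects, n_years, n_clusters = 80, 15, 3

# synthetic "real" panel: three groups of objects with different trends plus noise
trends = rng.choice([-0.5, 0.0, 0.5], size=n_objects)
real = trends[:, None] * np.arange(n_years) + rng.normal(scale=0.5, size=(n_objects, n_years))

real_index = silhouette_of_hierarchical(real, n_clusters)
surrogate_indices = [silhouette_of_hierarchical(random_walk_panel(n_objects, n_years, rng), n_clusters)
                     for _ in range(200)]
q95 = np.quantile(surrogate_indices, 0.95)
print(f"Silhouette on data: {real_index:.3f}, 95% quantile for random walks: {q95:.3f}")
print("clustering is reliable at p < 0.05" if real_index > q95 else "clustering is not reliable")
```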