All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Experimental study of the dynamics of single complex-valued mappings and of mappings coupled in a lattice: the architecture and interface of the author’s software for modeling
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1101-1124.
The paper describes free software for research in the field of holomorphic dynamics based on the computational capabilities of the MATLAB environment. The software allows constructing not only single complex-valued mappings but also collectives of such mappings, linearly coupled on a square or hexagonal lattice. In the first case, analogs of the Julia set (as escaping points, with the escape velocity indicated by color), the Fatou set (with chaotic dynamics highlighted), and the Mandelbrot set generated by one of two free parameters are constructed. In the second case, only the dynamics of a cellular automaton with complex-valued cell states and complex-valued coefficients in the local transition function is considered. The abstract nature of object-oriented programming makes it possible to combine both types of calculations within a single program that describes the iterated dynamics of one object.
The presented software provides a set of options for the field shape, initial conditions, neighborhood template, and the neighborhood features of boundary cells. The form of the mapping can be specified by a regular expression for the MATLAB interpreter. The paper provides some UML diagrams, a short introduction to the user interface, and several examples.
The following cases are considered as illustrative examples containing new scientific knowledge:
1) a linear fractional mapping of the form $Az^{n} + B/z^{n}$, for which the cases $n=2$, $4$, and $n>1$ are known. In the portrait of the Fatou set, attention is drawn to the figures of <> characteristic of the classical quadratic mapping, showing short-period regimes, components of conventionally chaotic dynamics in the sea (a minimal escape-time sketch for this mapping is given after this list);
2) for the Mandelbrot set with a non-standard position of the parameter, in the exponent of $z(t+1)\Leftarrow z(t)^{\mu}$, sketch calculations reveal jagged structures and point clouds resembling Cantor dust, which are not the Cantor bouquets characteristic of the exponential mapping. Further detailing of these objects with complex topology is required.
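As a toy illustration of the escape-time coloring mentioned above (not the authors' MATLAB software; the parameter values, escape radius, and grid are arbitrary assumptions), a minimal Python sketch for the mapping $z \mapsto Az^{n} + B/z^{n}$ might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical illustration: escape-time portrait for z -> A*z**n + B/z**n
# on a grid of initial conditions. A, B, n and the escape radius are assumptions.
A, B, n = 1.0, 0.1, 2
R_escape, max_iter = 1e6, 100

x = np.linspace(-2, 2, 800)
y = np.linspace(-2, 2, 800)
Z = x[None, :] + 1j * y[:, None]
steps = np.full(Z.shape, max_iter)          # iterations until escape

z = Z.copy()
alive = np.ones(Z.shape, dtype=bool)        # points that have not escaped yet
for k in range(max_iter):
    with np.errstate(divide="ignore", invalid="ignore"):
        z[alive] = A * z[alive] ** n + B / z[alive] ** n
    escaped = alive & (np.abs(z) > R_escape)
    steps[escaped] = k
    alive &= ~escaped

plt.imshow(steps, extent=(-2, 2, -2, 2), cmap="viridis", origin="lower")
plt.title(r"Escape velocity for $z \mapsto Az^n + B/z^n$")
plt.colorbar(label="iterations to escape")
plt.show()
```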
- Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238.
Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at its first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage the optimal thresholds are selected, so that objects on one side of the threshold are assigned to the current class and objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In extreme multi-label classification problems, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of finding a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity in the behavior of both algorithms was found for problems whose domain of $\boldsymbol V$ contains large linear boundaries. When the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease with an increase in the number of classes. In this case, the linearization method quite accurately determines the argument of the optimal point, while the method of $\boldsymbol V$ domain analysis quite accurately determines the polar radius.
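As a rough illustration of why the $F$-measure couples the classes (this is not the authors' linearization or $\boldsymbol V$ domain-analysis algorithm; the data and the shared-threshold scan are hypothetical), a minimal sketch of the second, threshold-selection stage could look as follows:

```python
import numpy as np

# Toy sketch: micro-averaged F couples all classes through the shared totals,
# so per-class thresholds cannot be tuned independently. Here a single shared
# threshold is scanned as a naive baseline on synthetic data.
rng = np.random.default_rng(0)
n_objects, n_classes = 1000, 50
Y = rng.random((n_objects, n_classes)) < 0.05                  # true labels
scores = 0.7 * Y + 0.3 * rng.random((n_objects, n_classes))    # ranking functions

def micro_f(threshold, beta=1.0):
    pred = scores >= threshold
    tp = np.sum(pred & Y)
    p = tp / max(np.sum(pred), 1)       # average precision P
    r = tp / max(np.sum(Y), 1)          # average recall R
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

grid = np.linspace(0, 1, 201)
best = max(grid, key=micro_f)
print(f"best shared threshold: {best:.3f}, micro-F: {micro_f(best):.3f}")
```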
- Simulation of turbulent compressible flows in the FlowVision software
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 805-825.
Simulation of turbulent compressible gas flows using the standard $k-\varepsilon$ turbulence model (KES), the $k-\varepsilon$ FlowVision model (KEFV), and the SST $k-\omega$ model is discussed in this article. A new version of the KEFV turbulence model is presented, and the results of its testing are shown. A numerical investigation of the discharge of an over-expanded jet from a conical nozzle into unbounded space is performed, and the results are compared against experimental data. The dependence of the results on the computational mesh and on the turbulence specified at the nozzle inlet is demonstrated. The conclusion is drawn that compressibility must be accounted for in two-parameter turbulence models; the simple method proposed by Wilcox in 1994 is well suited for this purpose. As a result, the range of applicability of the three aforementioned two-parameter turbulence models is substantially extended. Particular values of the constants responsible for accounting for compressibility in the Wilcox approach are proposed, and it is recommended to use these values in simulations of compressible flows with the KES, KEFV, and SST models.
In addition, the question of how to obtain correct characteristics of supersonic turbulent flows using two-parameter turbulence models is considered. Calculations on different grids have shown that, when a laminar flow is specified at the nozzle inlet and wall functions at its surfaces, one obtains a laminar core of the flow up to the fifth Mach disk. In order to obtain correct flow characteristics, it is necessary either to specify two parameters characterizing the turbulence of the inflowing gas, or to set a “starting” turbulence in a limited volume enveloping the region of the presumed laminar-turbulent transition near the nozzle exit. The latter possibility is implemented in the KEFV model.
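For reference, a commonly cited form of the Wilcox dilatation-dissipation (compressibility) correction is the following; the constants given here are typical textbook values, and the particular values proposed in the article may differ. The dissipation in the $k$-equation is taken as $\varepsilon = \varepsilon_s \left[1 + \xi^* F(M_t)\right]$, where $\varepsilon_s$ is the solenoidal dissipation, $M_t = \sqrt{2k}/a$ is the turbulent Mach number, $a$ is the speed of sound, $F(M_t) = \left(M_t^2 - M_{t0}^2\right) H(M_t - M_{t0})$, $H$ is the Heaviside step function, and typically $\xi^* \approx 1.5$ and $M_{t0} \approx 0.25$.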
- Synthesis of the structure of organised systems as a central problem of evolutionary cybernetics
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124.
The article provides approaches to evolutionary modelling of the synthesis of organised systems and analyses methodological problems of evolutionary computations of this kind. Based on the analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for the machine synthesis of problem-solving algorithms. The results obtained in the process of modelling lead us to conclude that a number of conditions fundamentally limit the applicability of genetic programming methods to the synthesis of functional structures. The main limitations are the need for the fitness function to track the step-by-step approach to the solution of the problem and the inapplicability of this approach to the synthesis of hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling over the whole time of its existence confirm the conclusion that the possibilities of genetic programming in solving problems of synthesising the structure of organised systems are fundamentally limited. As sources of fundamental difficulty for the machine synthesis of system structures, the article points out the absence of directions for gradient descent in structural synthesis and the absence of regularity in the random appearance of new organised structures. The problems considered are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of practically possible ways of synthesising the structure of organised systems. As a theoretical interpretation of the problem discussed, we propose to consider the system-evolutionary concept of P. K. Anokhin. The process of synthesis of functional structures in this context is an adaptive response of organisms to external conditions based on their capacity for integrative synthesis of memory, needs and information about current conditions. The results of recent studies are in favour of this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.
- Modeling time series trajectories using the Liouville equation
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 585-598.
This paper presents an algorithm for modeling a set of trajectories of a non-stationary time series. It is based on a numerical scheme for approximating the sample density of the distribution function in a problem with fixed ends, in which the initial distribution is transformed into a given final distribution over a given number of steps, so that at each step the semigroup property of the solution of the Liouville equation is satisfied. The model makes it possible to numerically construct evolving densities of distribution functions under random switching of the states of the system generating the original time series.
The main difficulty is that with a numerical implementation of the left-hand difference derivative in time the solution becomes unstable, although this is the approach that corresponds to modeling evolution. Choosing stable implicit schemes that “look into the future” gives an integrative approach, but it does not match the semigroup property at each step. If, on the other hand, a real process is being modeled in which goal-setting presumably takes place, then it is desirable to use schemes that generate a model of the transition process. Such a model is subsequently used to build a predictor of the disorder, which makes it possible to determine exactly which state the process under study is entering before the process has actually entered it. The model described in the article can be used as a tool for modeling real non-stationary time series.
The steps of the modeling scheme are as follows (a toy sketch of this pipeline is given below). Fragments corresponding to certain states, for example trends with specified slope angles and variances, are selected from a given time series. Reference distributions of the states are compiled from these fragments. Then the empirical distributions of the duration of the system’s stay in the specified states and of the transition time from state to state are determined. In accordance with these empirical distributions, a probabilistic model of the disorder is constructed and the corresponding trajectories of the time series are modeled.
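A toy sketch of the fragment-selection and state-duration step described above (illustrative assumptions throughout; not the authors' implementation, and without the Liouville-equation density evolution itself):

```python
import numpy as np

# Toy pipeline: split a series into "up-trend"/"down-trend" states by the sign of
# a windowed slope, estimate empirical state-duration distributions, then simulate
# a new state sequence by resampling the observed durations.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 1, 2000))            # synthetic non-stationary series

window = 50
slopes = np.array([np.polyfit(np.arange(window), x[i:i + window], 1)[0]
                   for i in range(0, len(x) - window, window)])
states = (slopes > 0).astype(int)                 # 0 = down-trend, 1 = up-trend

# Empirical distributions of how long the system stays in each state (in windows).
durations = {0: [], 1: []}
run_state, run_len = states[0], 1
for s in states[1:]:
    if s == run_state:
        run_len += 1
    else:
        durations[run_state].append(run_len)
        run_state, run_len = s, 1
durations[run_state].append(run_len)

# Simulate a state sequence by resampling the observed durations.
sim_states, current = [], rng.integers(2)
while len(sim_states) < len(states):
    d = rng.choice(durations[current]) if durations[current] else 1
    sim_states.extend([current] * int(d))
    current = 1 - current
print("simulated state sequence (first 30 windows):", sim_states[:30])
```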
- Quantile shape measures for heavy-tailed distributions
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077.
Journal papers currently contain numerous examples of the use of heavy-tailed distributions in applied research on various complex systems. Models of extreme data are usually limited to a small set of distribution shapes that have historically been used in this field of applied research. The set of usable distribution shapes can be enlarged by comparing measures of distribution shape and choosing the most suitable realizations. Using the beta distribution of the second kind as an example, it is shown that the non-existence of moments for heavy-tailed realizations of the beta family of distributions limits the applicability of the classical method of moments to studying the shapes of distributions characterized by heavy tails. For this reason, the development of new methods for comparing distributions based on quantile shape measures, free from restrictions on the shape parameters, remains relevant, as does the study of the possibility of constructing a space of quantile shape measures for comparing distributions with heavy tails. The purpose of this work is a computer study of the possibility of constructing a space of quantile measures for comparing the properties of heavy-tailed distributions. On the basis of computer simulation, realizations of the distributions were mapped into the space of shape measures. Mapping the distributions into the space of only the parametric shape measures showed that the overlap of the regions for heavy-tailed distributions makes it impossible to compare the shapes of distributions of different types in the space of the quantile measures of skewness and kurtosis. It is well known that information measures of shape, such as the entropy and the entropy uncertainty interval, contain additional information about the shape of heavy-tailed distributions. In this paper, a quantile entropy coefficient based on the ratio of the entropy and quantile uncertainty intervals is proposed as an additional independent measure of shape. Estimates of the quantile entropy coefficient are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing distribution shapes with realizations of the beta distribution of the second kind is illustrated by the example of the lognormal distribution and the Pareto distribution. Mapping the positions of stable distributions in the three-dimensional space of quantile shape measures made it possible to estimate the shape parameters of the beta distribution of the second kind whose shape is closest to the Lévy shape. It follows from the material of the paper that mapping distributions into the three-dimensional space of the quantile measures of skewness, kurtosis and the entropy coefficient significantly expands the possibility of comparing the shapes of distributions with heavy tails.
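For orientation, the classical quantile shape measures that remain finite even when moments do not exist are the Bowley (quartile) skewness and the Moors (octile) kurtosis; the specific quantile measures and the quantile entropy coefficient used in the paper may be defined differently. A minimal sketch:

```python
import numpy as np

# Standard quantile-based shape measures computed on heavy-tailed samples.
def bowley_skewness(sample):
    q1, q2, q3 = np.quantile(sample, [0.25, 0.5, 0.75])
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def moors_kurtosis(sample):
    o = np.quantile(sample, np.arange(1, 8) / 8.0)   # octiles E1..E7
    return ((o[6] - o[4]) + (o[2] - o[0])) / (o[5] - o[1])

rng = np.random.default_rng(2)
pareto = rng.pareto(a=1.5, size=100_000) + 1.0       # heavy tail, infinite variance
lognorm = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
for name, s in [("Pareto(1.5)", pareto), ("Lognormal(0,1)", lognorm)]:
    print(f"{name}: skewness={bowley_skewness(s):.3f}, kurtosis={moors_kurtosis(s):.3f}")
```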
- Mathematical model and computer analysis of tests for homogeneity of “dose–effect” dependence
Computer Research and Modeling, 2012, v. 4, no. 2, pp. 267-273.
This work is devoted to the comparison of two homogeneity tests: the chi-square test based on 2 × 2 contingency tables and a homogeneity test based on the asymptotic distribution of the summed squared error of distribution function estimators in the “dose–effect” dependence model. The power of the tests is evaluated by means of computer simulation. The efficiency functions are constructed using kernel regression based on the Nadaraya–Watson estimator.
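A minimal sketch of the Nadaraya–Watson kernel regression estimator mentioned above (Gaussian kernel; the bandwidth and the synthetic dose–effect data are illustrative assumptions, not the authors' setup):

```python
import numpy as np

# Nadaraya-Watson estimate m(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h).
def nadaraya_watson(x_query, x_data, y_data, bandwidth=0.1):
    u = (x_query[:, None] - x_data[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                 # Gaussian kernel weights
    return (w @ y_data) / w.sum(axis=1)

rng = np.random.default_rng(3)
dose = np.sort(rng.uniform(0, 1, 200))
# Synthetic binary "effect" response with a logistic dose-effect curve.
effect = (rng.random(200) < 1 / (1 + np.exp(-10 * (dose - 0.5)))).astype(float)
grid = np.linspace(0, 1, 101)
print(nadaraya_watson(grid, dose, effect)[:5])   # estimated efficiency function
```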
- Mathematical modeling of oscillator hereditarity
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1001-1021.
The paper considers a hereditary oscillator characterized by an oscillation equation with derivatives of fractional orders $\beta$ and $\gamma$, defined in the Gerasimov–Caputo sense. Analytical solutions and the Green’s function were obtained using the Laplace transform; they are expressed through the Mittag-Leffler special function and the generalized Wright function. It is proved that for the fixed values $\beta = 2$ and $\gamma = 1$ the solution found reduces to the classical solution for a harmonic oscillator. Calculated curves and phase trajectories of the hereditary oscillatory process were constructed from the solutions obtained. It was found that under an external periodic forcing the hereditary oscillator can exhibit effects inherent in classical nonlinear oscillators.
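As a point of reference, a typical form of such a hereditary (fractional) oscillator equation, consistent with the reduction to the classical harmonic oscillator at $\beta = 2$, $\gamma = 1$, is $\partial^{\beta}_{0t} x(t) + \lambda\, \partial^{\gamma}_{0t} x(t) + \omega^{2} x(t) = f(t)$ with $1 < \beta \le 2$ and $0 < \gamma \le 1$, where $\partial^{\alpha}_{0t}$ denotes the Gerasimov–Caputo derivative of order $\alpha$; the exact form of the equation studied in the paper may differ.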
- Comparative analysis of finite difference method and finite volume method for unsteady natural convection and thermal radiation in a cubical cavity filled with a diathermic medium
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 567-578.
A comparative analysis of two numerical methods for the simulation of unsteady natural convection and surface thermal radiation within a differentially heated cubical cavity has been carried out. The domain of interest had two opposite isothermal vertical faces, while the other walls were adiabatic. The wall surfaces were diffuse and gray; that is, their directional spectral emissivity and absorptance do not depend on direction or wavelength but can depend on surface temperature. For the reflected radiation two approaches were used: 1) the reflected radiation is diffuse, that is, the intensity of the reflected radiation at any point of the surface is uniform over all directions; 2) the reflected radiation is uniform over each surface of the considered enclosure. Mathematical models formulated in the primitive variables “velocity–pressure” and in the transformed variables “vector potential functions – vorticity vector” were solved numerically using the finite volume method and the finite difference method, respectively. It should be noted that radiative heat transfer was analyzed using the net-radiation method in the Poljak approach.
For the formulation in primitive variables with the finite volume method, the power-law scheme was applied for the approximation of convective terms and central differences for the diffusive terms. The difference momentum and energy equations were solved using the iterative alternating direction method. The pressure field associated with the velocity field was determined using the SIMPLE procedure.
For the formulation in transformed variables with the finite difference method, the monotonic Samarsky scheme was applied for convective terms and central differences for diffusive terms. The parabolic equations were solved using the locally one-dimensional Samarsky scheme. The elliptic equations for the vector potential functions were discretized using a symmetric approximation of the second-order derivatives. The resulting difference equations were solved by the successive over-relaxation method, with the optimal value of the relaxation parameter found on the basis of computational experiments.
As a result, similar distributions of velocity and temperature were found for the two approaches at different values of the Rayleigh number, which illustrates the operability of the techniques used. The efficiency of the transformed variables with the finite difference method for unsteady problems has been shown.
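A minimal sketch of the successive over-relaxation iteration mentioned above, for a model 2D Poisson-type equation on a five-point stencil (grid size, source term, and relaxation parameter are illustrative assumptions, not the values used in the paper):

```python
import numpy as np

# SOR for u_xx + u_yy = rhs on the unit square with zero Dirichlet boundaries.
def sor_poisson(rhs, omega=1.8, tol=1e-8, max_iter=20_000):
    n, m = rhs.shape
    u = np.zeros((n, m))                      # boundary values stay zero
    h2 = 1.0 / (n - 1) ** 2                   # uniform step squared
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                             - h2 * rhs[i, j])        # Gauss-Seidel value
                change = omega * (gs - u[i, j])       # over-relaxed update
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return u

rhs = np.ones((33, 33))                        # toy source term
psi = sor_poisson(rhs)
print("max |psi| =", np.abs(psi).max())
```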
- CABARET scheme implementation for free shear layer modeling
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 881-903.
In the present paper we reexamine the properties of the CABARET numerical scheme formulated for a weakly compressible fluid flow, based on the results of free shear layer modeling. The Kelvin–Helmholtz instability and the successive generation of two-dimensional turbulence provide a wide field for scheme analysis, including the temporal evolution of the integral energy and enstrophy curves, the vorticity patterns and energy spectra, as well as the dispersion relation for the instability increment. Most of the calculations are performed for Reynolds number $\text{Re} = 4 \times 10^5$ on square grids sequentially refined in the range of $128^2-2048^2$ nodes. Attention is paid to the problem of underresolved layers generating a spurious vortex during the roll-up of the vorticity layers. This phenomenon takes place only on the coarse grid with $128^2$ nodes, while a fully regularized evolution pattern of the vorticity appears only when approaching the $1024^2$-node grid. We also discuss the vorticity resolution properties of the grids used with respect to dimensional estimates for the eddies at the borders of the inertial interval, showing that the available range of grids appears to be sufficient for good resolution of small-scale vorticity patches. Nevertheless, we claim that convergence is achieved for the domains occupied by large-scale structures.
The evolution of the generated turbulence is consistent with theoretical concepts predicting the emergence of large vortices, which collect all the kinetic energy of motion, and of solitary small-scale eddies. The latter resemble the coherent structures that survive the filamentation process and almost do not interact with other scales. The dissipative characteristics of the numerical method employed are discussed in terms of the kinetic energy dissipation rate, calculated both directly and from theoretical laws for incompressible (via the enstrophy curves) and compressible (via the strain rate tensor and dilatation) fluid models. The asymptotic behavior of the kinetic energy and enstrophy cascades complies with the two-dimensional turbulence laws $E(k) \propto k^{-3}$ and $\omega^2(k) \propto k^{-1}$. Considering the instability increment as a function of the dimensionless wave number shows good agreement with other papers; however, the commonly used method of calculating the instability growth rate is not always accurate, so a modification is proposed. Thus, the implemented CABARET scheme, possessing remarkably small numerical dissipation and good vorticity resolution, is quite a competitive approach compared to other high-order accuracy methods.
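As an aside, a radially binned energy spectrum of the kind used to check the $E(k) \propto k^{-3}$ slope can be computed from a doubly periodic 2D velocity field roughly as follows (a hedged sketch on synthetic data; not the authors' diagnostics):

```python
import numpy as np

# Radially binned kinetic energy spectrum E(k) of a periodic 2D velocity field.
def energy_spectrum(u, v):
    n = u.shape[0]                                    # assume a square n x n grid
    uk, vk = np.fft.fft2(u) / n**2, np.fft.fft2(v) / n**2
    e2d = 0.5 * (np.abs(uk) ** 2 + np.abs(vk) ** 2)   # spectral energy density
    kx = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
    kmag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    k_centers = np.arange(1, n // 2)
    e_k = np.array([e2d[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum()
                    for k in k_centers])              # sum energy in annular bins
    return k_centers, e_k

rng = np.random.default_rng(4)
u, v = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))  # synthetic field
k, E = energy_spectrum(u, v)
print(k[:5], E[:5])
```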