-
CABARET scheme implementation for free shear layer modeling
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 881-903
In the present paper we reexamine the properties of the CABARET numerical scheme formulated for a weakly compressible fluid flow, based on the results of free shear layer modeling. The Kelvin–Helmholtz instability and the successive generation of two-dimensional turbulence provide a wide field for scheme analysis, including the temporal evolution of the integral energy and enstrophy curves, the vorticity patterns and energy spectra, as well as the dispersion relation for the instability increment. Most of the calculations are performed for Reynolds number $\text{Re} = 4 \times 10^5$ on square grids sequentially refined in the range of $128^2-2048^2$ nodes. Attention is paid to the problem of underresolved layers generating a spurious vortex during the roll-up of the vorticity layers. This phenomenon takes place only on the coarse grid with $128^2$ nodes, while a fully regularized evolution pattern of vorticity appears only when approaching the $1024^2$-node grid. We also discuss the vorticity resolution properties of the grids used with respect to dimensional estimates for the eddies at the borders of the inertial interval, showing that the available range of grids appears to be sufficient for a good resolution of small-scale vorticity patches. Nevertheless, convergence is claimed only for the domains occupied by large-scale structures.
The evolution of the generated turbulence is consistent with theoretical concepts: large vortices emerge and collect all the kinetic energy of the motion, together with solitary small-scale eddies. The latter resemble coherent structures that survive the filamentation process and hardly interact with other scales. The dissipative characteristics of the numerical method employed are discussed in terms of the kinetic energy dissipation rate calculated both directly and from theoretical laws for incompressible (via the enstrophy curves) and compressible (via the strain rate tensor and dilatation) fluid models. The asymptotic behavior of the kinetic energy and enstrophy cascades complies with the two-dimensional turbulence laws $E(k) \propto k^{-3}$, $\omega^2(k) \propto k^{-1}$. The instability increment considered as a function of the dimensionless wave number shows good agreement with other papers; however, the commonly used method of calculating the instability growth rate is not always accurate, so a modification is proposed. Thus, the implemented CABARET scheme, possessing remarkably small numerical dissipation and good vorticity resolution, is quite a competitive approach compared to other high-order accuracy methods.
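As a minimal illustration of the spectral diagnostics mentioned above, the sketch below computes an isotropic kinetic energy spectrum E(k) from a two-dimensional velocity field on a square periodic grid via FFT and radial shell binning; the grid size, normalization and binning choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def energy_spectrum_2d(u, v, L=2 * np.pi):
    """Isotropic kinetic energy spectrum E(k) for a 2D velocity field
    (u, v) given on an N x N periodic grid of physical size L x L."""
    n = u.shape[0]
    # 2D Fourier transforms, normalized so that the spectral energy sums to the mean KE
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    ke = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)   # spectral kinetic energy density

    k1d = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)

    dk = 2 * np.pi / L
    kbins = np.arange(0.5, n // 2 + 1) * dk          # radial shell boundaries
    E = np.zeros(len(kbins) - 1)
    for i in range(len(E)):
        shell = (kmag >= kbins[i]) & (kmag < kbins[i + 1])
        E[i] = ke[shell].sum() / dk                  # shell-summed E(k)
    kcenters = 0.5 * (kbins[:-1] + kbins[1:])
    return kcenters, E

# Usage: fit the slope of log E(k) versus log k over the inertial range
# to check the expected E(k) ~ k^-3 scaling.
```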
-
Neural network methods for optimal control problems
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 539-557
In this study we discuss methods for solving optimal control problems based on neural network techniques. We study a hierarchical two-level dynamical system for surface water quality control. The system consists of a supervisor (government) and several agents (enterprises). We consider this problem from the agents' point of view. In this case we solve an optimal control problem with constraints. To solve it, we use Pontryagin's maximum principle, from which we obtain optimality conditions. To solve the resulting ODEs, we use a feedforward neural network. We provide a review of existing techniques for studying such problems and of neural network training methods. To estimate the error of the numerical solution, we propose to use the defect analysis method adapted for neural networks, which yields quantitative error estimates of the numerical solution. We provide examples of using our method to solve a synthetic problem and a surface water quality control model. We compare the results of these examples with the known solution (when available) and with the results of the shooting method. In all cases the errors estimated by our method are of the same order as the errors relative to the known solution. Moreover, we study the surface water quality control problem when no solution is provided by other methods; this happens because of a relatively large time interval and/or the presence of several agents. In the latter case we seek a Nash equilibrium between the agents. Thus, in this study we show the ability of neural networks to solve various problems, including optimal control problems and differential games, and the ability to quantitatively estimate the error. From the numerical results we conclude that the presence of the supervisor is necessary for achieving sustainable development.
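To make the Pontryagin-plus-neural-network pipeline concrete, the following hedged sketch solves a toy problem, minimize $\int_0^1 (x^2 + u^2)\,dt$ with $dx/dt = u$ and $x(0) = 1$, whose optimality conditions reduce to the two-point boundary value problem $dx/dt = -p/2$, $dp/dt = -2x$, $x(0) = 1$, $p(1) = 0$; the problem, the PyTorch network architecture and the optimizer settings are illustrative assumptions rather than the authors' setup.

```python
import torch

# Toy optimal control problem: minimize int_0^1 (x^2 + u^2) dt, dx/dt = u, x(0) = 1.
# Pontryagin's maximum principle gives u = -p/2 and the two-point BVP
#   dx/dt = -p/2,  dp/dt = -2x,  x(0) = 1,  p(1) = 0,
# approximated here with a small feedforward network (illustrative sketch only).
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)

def trial(t):
    """Trial solutions that satisfy the boundary conditions exactly."""
    out = net(t)
    x = 1.0 + t * out[:, :1]          # enforces x(0) = 1
    p = (1.0 - t) * out[:, 1:]        # enforces p(1) = 0
    return x, p

t = torch.linspace(0.0, 1.0, 101).reshape(-1, 1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x, p = trial(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    dp = torch.autograd.grad(p, t, torch.ones_like(p), create_graph=True)[0]
    # residuals of the Pontryagin ODE system
    loss = ((dx + p / 2) ** 2).mean() + ((dp + 2 * x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The optimal control is then reconstructed from the costate: u(t) = -p(t)/2.
```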
-
Situational resource allocation: review of technologies for solving problems based on knowledge systems
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 543-566
The article presents updated technologies for solving two classes of linear resource allocation problems with dynamically changing characteristics of situational management systems and of the awareness of experts (and/or trained robots). The search for solutions is carried out in an interactive computational experiment using updatable knowledge systems about problems considered as constructive objects (in accordance with the methodology for formalizing knowledge about programmable problems created in the theory of S-symbols). The technologies are intended for implementation as Internet services. The first class includes resource allocation problems solved by the method of targeted solution movement. The second consists of problems of allocating a single resource in hierarchical systems, taking into account the priorities of expense items; these can be solved (depending on the specified mandatory and orienting requirements for the solution) either by the interval method of allocation (with input data and results represented by numerical segments) or by the targeted solution movement method. The problem statements are determined by requirements for solutions and specifications of their applicability, which are set by an expert based on the results of analyzing the portraits of the target and achieved situations. Unlike well-known methods that treat resource allocation as linear programming problems, the method of targeted solution movement is insensitive to small data changes and allows finding feasible solutions even when the constraint system is inconsistent. In the single-resource allocation technologies, the segmented representation of data and results reflects the state of the system's resource space more adequately than a point representation and increases the practical applicability of solutions. The technologies discussed in the article are implemented in software and used to solve the problems of resource substantiation of decisions, budget design taking into account the priorities of expense items, etc. The technology of allocating a single resource is implemented as an operational online cost planning service. The methodological consistency of the technologies is confirmed by comparison with known technologies for solving the problems under consideration.
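For contrast with the targeted solution movement method, the sketch below shows the "well-known" linear programming formulation of a single-resource allocation problem with priority weights, solved with SciPy; the data, weights and bounds are illustrative assumptions and do not come from the article.

```python
import numpy as np
from scipy.optimize import linprog

# Allocate a single resource R = 100 among four expense items with priority
# weights and per-item bounds; maximize the priority-weighted allocation
# subject to sum(x) <= R (illustrative data, not from the article).
weights = np.array([4.0, 3.0, 2.0, 1.0])        # priorities of expense items
bounds = [(5, 40), (10, 50), (0, 30), (0, 20)]  # lower/upper limits per item
R = 100.0

res = linprog(
    c=-weights,              # linprog minimizes, so negate to maximize
    A_ub=np.ones((1, 4)),    # total allocation must not exceed R
    b_ub=[R],
    bounds=bounds,
    method="highs",
)
print(res.x, -res.fun)       # allocations and the achieved weighted total
```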
Keywords: linear resource allocation problems, technologies for solving situational resource allocation problems, states of the system's resource space, profiles of situations, mandatory and orienting requirements for solutions, method of targeted solution movement, interval method of allocation, theory of S-symbols.
-
Methodology and program for the storage and statistical analysis of the results of computer experiment
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 589-595
The problems of accumulating and statistically analyzing the results of computer experiments are solved. The main experiment program is considered as the data source. The results of the main experiment are collected on a specially prepared Excel sheet with a pre-organized structure for accumulation, statistical processing and visualization of the data. The created method and program are used to study the efficiency of the scientific research carried out by the authors.
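A hedged sketch of the same workflow with current tools: writing the results of repeated runs of the main experiment program to an Excel sheet together with summary statistics; the column names, values and file name are illustrative assumptions, not the authors' Excel template.

```python
import pandas as pd

# Collect results of repeated runs of the main experiment program and store
# them, together with summary statistics, on an Excel workbook
# (column names, values and file name are illustrative assumptions).
results = pd.DataFrame({
    "run": [1, 2, 3, 4],
    "time_s": [12.1, 11.8, 12.6, 12.0],
    "error": [1.0e-3, 9.0e-4, 1.2e-3, 1.1e-3],
})
summary = results[["time_s", "error"]].agg(["mean", "std", "min", "max"])

with pd.ExcelWriter("experiment_results.xlsx") as writer:
    results.to_excel(writer, sheet_name="raw", index=False)
    summary.to_excel(writer, sheet_name="statistics")
```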
-
Analysis of additive and parametric noise effects on Morris – Lecar neuron model
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 449-468
This paper is devoted to the analysis of the effect of additive and parametric noise on the processes occurring in a nerve cell. The study is carried out on the example of the well-known Morris–Lecar model described by a two-dimensional system of ordinary differential equations. One of the main properties of a neuron is excitability, i.e., the ability to respond to external stimuli with an abrupt change of the electric potential on the cell membrane. This article considers a set of parameters for which the model exhibits class 2 excitability. The dynamics of the system is studied under variation of the external current parameter. We consider two parametric zones: the monostability zone, where a stable equilibrium is the only attractor of the deterministic system, and the bistability zone, characterized by the coexistence of a stable equilibrium and a limit cycle. We show that in both cases random disturbances result in the stochastic generation of mixed-mode oscillations (i.e., alternating oscillations of small and large amplitudes). In the monostability zone this phenomenon is associated with the high excitability of the system, while in the bistability zone it occurs due to noise-induced transitions between attractors. The phenomenon is confirmed by changes of the probability density functions of random trajectories, power spectral densities and interspike interval statistics. The actions of additive and parametric noise are compared. We show that under parametric noise the stochastic generation of mixed-mode oscillations is observed at lower intensities than under additive noise. For the quantitative analysis of these stochastic phenomena we propose and apply an approach based on the stochastic sensitivity function technique and the method of confidence domains. In the case of a stable equilibrium, the confidence domain is an ellipse; for the stable limit cycle, it is a confidence band. The study of the mutual location of the confidence bands and the boundary separating the basins of attraction for different noise intensities allows us to predict the emergence of noise-induced transitions. The effectiveness of this analytical approach is confirmed by the good agreement of theoretical estimates with the results of direct numerical simulations.
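A minimal Euler–Maruyama sketch of the Morris–Lecar system driven by additive current noise is given below; the parameter set is a commonly cited class 2 (Hopf) configuration and, together with the noise intensity and integration step, is an assumption for illustration rather than the values used in the paper.

```python
import numpy as np

# Morris-Lecar model with additive noise in the current balance equation,
# integrated by the Euler-Maruyama scheme. The parameters below are a commonly
# used class 2 (Hopf) set; they, the noise intensity and the step size are
# illustrative assumptions, not the values from the paper.
C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
VL, VCa, VK = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext, sigma = 88.0, 1.0            # external current and additive noise intensity

def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

dt, n_steps = 0.05, 200_000
rng = np.random.default_rng(0)
V, w = -40.0, 0.0
trace = np.empty(n_steps)
for k in range(n_steps):
    dV = (I_ext - gL * (V - VL) - gCa * m_inf(V) * (V - VCa) - gK * w * (V - VK)) / C
    dw = phi * (w_inf(V) - w) / tau_w(V)
    V += dV * dt + sigma / C * np.sqrt(dt) * rng.standard_normal()  # additive noise term
    w += dw * dt
    trace[k] = V
# `trace` can then be used for interspike-interval statistics or power spectra.
```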
-
Synchronous components of financial time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655
The article proposes a method for the joint analysis of multidimensional financial time series based on evaluating a set of properties of stock quotes in a sliding time window and then averaging the property values over all analyzed companies. The main purpose of the analysis is to construct measures of joint behavior of the time series that react to the occurrence of a synchronous or coherent component. Coherence in the behavior of the characteristics of a complex system is an important feature that makes it possible to assess how close the system is to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea that random fluctuations of the system parameters become increasingly correlated as the system approaches a critical state. The increments of stock-value time series have a pronounced chaotic character and a large amplitude of individual noise, against which a weak common signal can be detected only on the basis of its correlation in different scalar components of the multidimensional time series. It is known that classical methods of analysis based on correlations between neighboring samples are ineffective for financial time series, since from the point of view of the correlation theory of random processes the increments of share values formally have all the attributes of white noise (in particular, a "flat spectrum" and a "delta-shaped" autocorrelation function). It is therefore proposed to pass from analyzing the original signals to examining sequences of their nonlinear properties calculated in short time fragments. As such properties, the entropy of the wavelet coefficients in the decomposition over the Daubechies basis, the multifractal parameters and an autoregressive measure of signal nonstationarity are used. Measures of synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the absolute values of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
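One of the simplest synchronization measures of the kind described above, the mean absolute pairwise correlation of a chosen nonlinear property in a sliding window, can be sketched as follows; the window length and data layout are illustrative assumptions.

```python
import numpy as np

def sliding_sync_measure(props, window=60):
    """props: array of shape (T, M) holding a nonlinear property (e.g. wavelet
    entropy) of each of M stocks on T days. Returns, for each window end, the
    mean absolute off-diagonal pairwise correlation within the window."""
    T, M = props.shape
    out = np.full(T, np.nan)
    iu = np.triu_indices(M, k=1)                  # indices of the pairs i < j
    for t in range(window, T + 1):
        c = np.corrcoef(props[t - window:t].T)    # M x M correlation matrix
        out[t - 1] = np.abs(c[iu]).mean()         # average of |r_ij| over all pairs
    return out
```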
Keywords: financial time series, wavelets, entropy, multifractals, predictability, synchronization.
-
Analysis of the physics-informed neural network approach to solving ordinary differential equations
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1621-1636
The application of physics-informed neural networks based on multilayer perceptrons to solving Cauchy initial value problems whose right-hand sides are continuous monotonically increasing, decreasing or oscillating functions is considered. Computational experiments are used to study the influence of the construction of the approximate neural network solution, the neural network structure, the optimization algorithm and the software implementation on the learning process and the accuracy of the obtained solution. An analysis of the efficiency of the most frequently used machine learning frameworks for software development in the Python and C# programming languages is carried out. It is shown that using the C# language reduces the neural network training time by 20–40%. The choice of activation function affects the learning process and the accuracy of the approximate solution; in the considered problems the most effective functions are the sigmoid and the hyperbolic tangent. For a fixed training time of the neural network model, the minimum of the loss function is achieved at a certain number of neurons in the hidden layer of a single-layer neural network, and complicating the network structure by increasing the number of neurons does not improve the training results. At the same time, the grid step between the points of the training sample that provides a minimum of the loss function is almost the same for the considered Cauchy problems. For training single-layer neural networks, the Adam method and its modifications are the most effective for solving the optimization problem. The application of two- and three-layer neural networks is also considered; in these cases it is reasonable to use the LBFGS algorithm, which, compared with the Adam method, in some cases requires a much shorter training time to achieve the same solution accuracy. The specifics of neural network training for Cauchy problems whose solution is an oscillating function with monotonically decreasing amplitude are also investigated. For such problems, the neural network solution should be constructed with a variable weight coefficient rather than a constant one, which improves the solution in the grid cells located near the end point of the solution interval.
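A minimal PyTorch sketch of such a physics-informed construction for the Cauchy problem $y' = -y$, $y(0) = 1$ on $[0, 2]$ is given below, with the trial solution $\hat{y}(t) = y_0 + t\,N(t)$ enforcing the initial condition exactly and LBFGS used as the optimizer; the problem, network size and optimizer settings are illustrative assumptions, not the configurations benchmarked in the paper.

```python
import torch

# Physics-informed trial solution for the Cauchy problem y' = -y, y(0) = 1 on [0, 2],
# written as y_hat(t) = 1 + t * N(t) so the initial condition holds exactly, and
# trained with LBFGS (illustrative sketch under the assumptions stated above).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
t = torch.linspace(0.0, 2.0, 41).reshape(-1, 1).requires_grad_(True)

def residual_loss():
    y = 1.0 + t * net(t)                                        # trial solution
    dy = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    return ((dy + y) ** 2).mean()                               # ODE residual y' + y = 0

opt = torch.optim.LBFGS(net.parameters(), max_iter=200)

def closure():
    opt.zero_grad()
    loss = residual_loss()
    loss.backward()
    return loss

opt.step(closure)
y_exact = torch.exp(-t)                                         # reference solution exp(-t)
print(torch.max(torch.abs(1.0 + t * net(t) - y_exact)).item())  # max pointwise error
```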
-
Some relationships between thermodynamic characteristics and water vapor and carbon dioxide fluxes in a recently clear-cut area
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 965-980
The temporal variability of the exergy of short-wave and long-wave radiation and its relationships with the fluxes of sensible heat, water vapor (H2O) and carbon dioxide (CO2) in a recently clear-cut area of a mixed coniferous and small-leaved forest in the Tver region is discussed. On the basis of the analysis of the radiation and exergy efficiency coefficients suggested by Yu.M. Svirezhev, it is shown that during the first eight months after clear-cutting the forest ecosystem functioned as a "heat engine", i.e. the processes of energy dissipation dominated over the processes of biomass production. To validate the findings, a statistical analysis of the temporal variability of meteorological parameters, as well as of the daily fluxes of sensible heat, H2O and CO2, was performed using trigonometric polynomials. Statistical models that depend linearly on the exergy of short-wave and long-wave radiation were obtained for the mean daily values of CO2 fluxes, the gross primary production of the regenerating vegetation and the sensible heat fluxes. The analysis of these dependences also confirms the results obtained from the radiation and exergy efficiency coefficients. Splitting the time series into separate intervals, e.g. "spring–summer" and "summer–autumn", revealed that the statistically significant relationships between the atmospheric fluxes and exergy became stronger in the summer months as the clear-cut area was overgrown by grassy and young woody vegetation. The linear relationships between the time series of latent heat fluxes and exergy turned out to be statistically insignificant, whereas the linear relationships between latent heat fluxes and temperature were statistically significant. The air temperature was the key factor improving the accuracy of the models, whereas the effect of exergy was insignificant. The results indicate that during active vegetation regeneration within the clear-cut area the seasonal variability of surface evaporation is mainly governed by temperature variation.
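A hedged sketch of the harmonic (trigonometric polynomial) regression used for such daily flux series is shown below; the synthetic data, the annual period and the number of harmonics are assumptions for illustration.

```python
import numpy as np

def fit_trig_polynomial(t, y, n_harmonics=3, period=365.0):
    """Least-squares fit of y(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)]."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef                 # coefficients and fitted values

# Illustrative synthetic daily CO2 flux series with an annual cycle plus noise.
t = np.arange(365.0)
y = 2.0 + 1.5 * np.sin(2 * np.pi * t / 365.0) \
    + 0.3 * np.random.default_rng(0).standard_normal(t.size)
coef, y_fit = fit_trig_polynomial(t, y)
```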
-
Simulation of spin wave amplification using the method of characteristics to the transport equation
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 795-803
The paper presents an analysis of the nonlinear spin wave transport equation by the method of characteristics. A derivation of a new mathematical model of spin wave propagation is presented, and the method of characteristics is applied to its solution. The behavior of the real and imaginary parts of the wave and of its amplitude is analyzed. The phase portraits demonstrate the dependence of the sought function on the nonlinearity coefficient. By studying the evolution of the initial wave profile with the phase plane method, it is established that the real and imaginary parts of the wave oscillate. The transition of the trajectories from an unstable focus to a limit cycle, which corresponds to the oscillation of the real and imaginary parts, is shown. For the amplitude of the wave, such a transition corresponds to its amplification or attenuation (depending on the nonlinearity coefficient and the chosen initial conditions) up to a certain threshold value. It is shown that the duration of the transient process from amplification (attenuation) to stabilization of the amplitude also depends on the nonlinearity parameter. It was found that in the regime of spin wave amplitude amplification the duration of the transient process decreases, and lower amplitude values correspond to higher nonlinearity parameters.
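The abstract does not reproduce the transport equation itself, so the sketch below uses the Hopf normal form purely as a qualitative stand-in for the described behavior, trajectories leaving an unstable focus and saturating on a limit cycle at an amplitude set by the nonlinearity coefficient; it is not the spin wave model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Qualitative stand-in for the described phase-plane behavior: a trajectory
# spirals away from an unstable focus and settles on a limit cycle whose
# saturation amplitude is controlled by the nonlinearity coefficient beta.
# This is NOT the spin-wave transport model of the paper.
mu, omega, beta = 0.5, 2.0, 1.0          # growth rate, frequency, nonlinearity

def rhs(t, z):
    x, y = z
    r2 = x * x + y * y
    return [mu * x - omega * y - beta * r2 * x,
            omega * x + mu * y - beta * r2 * y]

sol = solve_ivp(rhs, (0.0, 40.0), [0.01, 0.0], max_step=0.01)
amplitude = np.hypot(sol.y[0], sol.y[1])  # saturates near sqrt(mu / beta)
print(amplitude[-1], np.sqrt(mu / beta))
```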




