-
CABARET scheme implementation for free shear layer modeling
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 881-903
In the present paper we reexamine the properties of the CABARET numerical scheme formulated for weakly compressible fluid flow, based on the results of free shear layer modeling. The Kelvin–Helmholtz instability and the successive generation of two-dimensional turbulence provide a wide field for scheme analysis, including the temporal evolution of the integral energy and enstrophy curves, the vorticity patterns and energy spectra, as well as the dispersion relation for the instability increment. Most of the calculations are performed for Reynolds number $\text{Re} = 4 \times 10^5$ on square grids sequentially refined in the range of $128^2$–$2048^2$ nodes. Attention is paid to the problem of underresolved layers generating a spurious vortex during the roll-up of the vorticity layers. This phenomenon takes place only on the coarse grid with $128^2$ nodes, while a fully regularized evolution pattern of the vorticity appears only when approaching the $1024^2$-node grid. We also discuss the vorticity resolution properties of the grids used with respect to dimensional estimates for the eddies at the borders of the inertial interval, showing that the available range of grids is sufficient for good resolution of small-scale vorticity patches. Nevertheless, we claim convergence only for the domains occupied by large-scale structures.
The evolution of the generated turbulence is consistent with theoretical concepts, with the emergence of large vortices, which collect all the kinetic energy of the motion, and of solitary small-scale eddies. The latter resemble coherent structures that survive the filamentation process and almost do not interact with the other scales. The dissipative characteristics of the numerical method employed are discussed in terms of the kinetic energy dissipation rate calculated directly and based on theoretical laws for the incompressible (via the enstrophy curves) and compressible (via the strain rate tensor and dilatation) fluid models. The asymptotic behavior of the kinetic energy and enstrophy cascades complies with the two-dimensional turbulence laws $E(k) \propto k^{-3}$, $\omega^2(k) \propto k^{-1}$. The instability increment considered as a function of dimensionless wave number shows good agreement with other papers; however, the commonly used method of calculating the instability growth rate is not always accurate, so a modification is proposed. Thus, the implemented CABARET scheme, possessing remarkably small numerical dissipation and good vorticity resolution, is quite a competitive approach compared to other high-order accuracy methods.
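As a reminder of how such spectra are typically extracted from computed fields, below is a minimal sketch (not taken from the paper) of estimating the isotropic kinetic energy spectrum $E(k)$ of a doubly periodic two-dimensional velocity field with an FFT; the grid is assumed square and the normalization conventions are ours.

```python
import numpy as np

def energy_spectrum_2d(u, v):
    """Isotropic kinetic energy spectrum E(k) of a doubly periodic 2D field.

    u, v -- velocity components on an N x N grid (assumed square, periodic).
    Returns integer wavenumbers k = 1..N//2 and the shell-summed spectrum E(k).
    """
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2            # normalized Fourier coefficients
    vh = np.fft.fft2(v) / n**2
    e_density = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)   # energy per mode

    kx = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    ky = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(kx[:, None]**2 + ky[None, :]**2)

    kbins = np.arange(1, n // 2 + 1)
    spectrum = np.array([e_density[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum()
                         for k in kbins])
    return kbins, spectrum

# In the inertial range of 2D turbulence one expects E(k) ~ k**(-3),
# which appears as a slope of -3 on a log-log plot of the returned spectrum.
```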
-
Modeling of deformation processes in structure of flexible woven composites
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 547-557
Flexible woven composites are classified as high-tech innovative materials. Due to the combination of various filler components and reinforcement elements, such materials are used in construction, the defense industry, shipbuilding, aircraft construction, etc. In the domestic literature, insufficient attention is paid to woven composites whose geometric structure of the reinforcing layer changes during deformation. This paper presents an analysis of a previously proposed comprehensive approach to modeling the behavior of flexible woven composites under static uniaxial tension, for further generalization of the approach to biaxial tension. The work aims at a qualitative and quantitative description of the mechanical deformation processes occurring in the structure of the studied materials under tension, which include straightening of the strands of the reinforcing layer and an increase in the mutual pressure of the cross-lying reinforcement strands. At the beginning of the deformation process, the straightening of the threads and the increase in their mutual pressure are most intense. As the load level increases, the change of these parameters slows down: the bending of the reinforcement strands transitions into central tension, and the load due to mutual pressure no longer increases (tends to a constant). To simulate the described processes, the basic geometrical and mechanical parameters of the material affecting the deformation process are introduced, and the necessary terminology and description of the characteristics are given. Because of the high geometric nonlinearity, all processes are described in increments, since significant deformation of the reinforcing layer occurs even at the initial load values. For the quantitative and qualitative description of the mechanical deformation processes occurring in the reinforcing layer, analytical dependences are derived to determine the increment of the straightening angle of the reinforcement filaments and the load caused by the mutual pressure of the cross-lying filaments at each step of the load increment. To test the obtained dependences, an example of their application to flexible woven composites of grades VP4126, VP6131 and VP6545 is given. The simulation results confirmed the assumptions about the thread-straightening process and the slowing growth of the mutual pressure of the threads. The results and dependences presented in this paper are directly related to the further generalization of the previously proposed analytical models to biaxial tension, since stretching in two directions will significantly reduce the straightening of the threads and increase the mutual pressure under similar loads.
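To illustrate the incremental description only, a load-stepping loop of the kind implied above can be sketched as follows; the saturating update laws are purely illustrative placeholders, not the analytical dependences derived in the paper, and the parameters theta0, k_theta, k_q are hypothetical.

```python
import numpy as np

def uniaxial_tension_increments(p_max, n_steps, theta0=0.35, k_theta=8.0, k_q=5.0):
    """Schematic load stepping: accumulate increments of the strand
    straightening angle and of the mutual-pressure load (illustrative only)."""
    dp = p_max / n_steps
    theta = theta0          # current misalignment (straightening) angle of the strands
    q = 0.0                 # load carried by mutual pressure of cross-lying strands
    history = []
    for step in range(n_steps):
        p = (step + 1) * dp
        # straightening is most intense at small loads and slows down later
        dtheta = -k_theta * theta * dp / (1.0 + k_theta * p)
        # mutual-pressure load grows quickly at first and tends to a constant
        dq = k_q * np.exp(-k_q * p) * dp
        theta += dtheta
        q += dq
        history.append((p, theta, q))
    return history
```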
-
Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56
This paper presents the results of experimental validation of some structural issues concerning the practical use of methods for overcoming catastrophic forgetting in neural networks. A comparison of currently effective methods such as EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation) is made and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of learned skills is required on all the tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task is required rather than exact retention of skills. The attenuation in the WVA method must be applied to the optimization step, i.e. to the increments of the neural network weights, rather than to the loss function gradient itself, and this holds for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function, between a hyperbolic function and an exponential one, is considered. It is shown that hyperbolic attenuation is preferable because, despite comparable quality at the optimal value of the WVA hyperparameter, it is more robust to deviations of the hyperparameter from the optimal value (this hyperparameter provides a balance between preservation of old skills and learning of a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue; consequently, the hyperparameter can be tuned on a small number of tasks and used on longer sequences.
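A minimal sketch of the point about where the attenuation is applied (our illustration, not the authors' code): the attenuation factor multiplies the weight increment produced by the optimizer rather than the raw gradient. The hyperbolic form 1/(1 + lam * importance) and the per-parameter importance accumulators are assumptions made for illustration.

```python
import torch

def wva_step(params, importances, optimizer, lam=1.0):
    """One optimization step with hyperbolic weight-velocity attenuation.

    The attenuation scales the weight increment produced by the optimizer
    (Adam, SGD with momentum, etc.), not the gradient itself.  `importances`
    holds one tensor per parameter accumulating how important that weight was
    for previous tasks; its exact definition and the factor 1/(1 + lam*imp)
    are our assumptions for illustration.
    """
    params = list(params)
    old = [p.detach().clone() for p in params]
    optimizer.step()                              # ordinary optimizer update
    with torch.no_grad():
        for p, p_old, imp in zip(params, old, importances):
            delta = p - p_old                     # increment chosen by the optimizer
            p.copy_(p_old + delta / (1.0 + lam * imp))   # hyperbolic attenuation
    return params
```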
-
Identification of an object model in the presence of unknown disturbances with a wide frequency range based on the transition to signal increments and data sampling
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 315-337
The work is devoted to the problem of creating a model with stationary parameters from historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling. The applicability of various data sampling options as a tool for reducing the level of uncertainty is assessed. We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is aimed at creating a model with stationary parameters that does not require periodic retuning to new conditions. It is based on the combined use of sampling and the representation of data from individual time periods as increments relative to the initial point in time of the period. This makes it possible to reduce the number of parameters that characterize the unknown disturbances with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs of model fitting are minimized. Both linear and, in some cases, nonlinear models can be configured. The method was used to develop a model of closed cooling of steel in a unit for continuous hot-dip galvanizing of steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes in the closed cooling section under conditions of unknown disturbances, including low-frequency components.
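A hedged sketch of the data-preparation idea described above: within each sampled time period the data are re-expressed as increments relative to the period's first sample, so that low-frequency and trend disturbances, which are roughly constant within a period, largely cancel before an ordinary least-squares fit. The linear model form and the variable layout are assumptions, not the paper's implementation.

```python
import numpy as np

def to_increments(segments):
    """Re-express each time period as increments relative to its first sample.

    segments -- list of (X, y) pairs, where X has shape (n_t, n_features) and y
    has shape (n_t,) for one period of historical data.  Subtracting the first
    sample removes the (approximately constant within a period) contribution of
    low-frequency and trend disturbances.
    """
    dX, dy = [], []
    for X, y in segments:
        dX.append(X[1:] - X[0])
        dy.append(y[1:] - y[0])
    return np.vstack(dX), np.concatenate(dy)

# Example (hypothetical data): fit a linear model with stationary parameters
# on the pooled increments from all sampled periods.
# dX, dy = to_increments(segments)
# theta, *_ = np.linalg.lstsq(dX, dy, rcond=None)
```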
-
Optimization of a hull form for decreasing ship resistance to movement
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 57-65
Optimization of hull lines for minimum resistance to movement is a problem of current interest in ship hydrodynamics. In practice, lines design is still to some extent an art. The usual approaches to decreasing ship resistance are based on model experiments and/or CFD simulation, following the trial and error method. The paper presents a new method of detailed hull form design based on a wave-based optimization approach. The method provides systematic variation of the hull geometry corresponding to alteration of the longitudinal distribution of the hull volume, while its vertical volume distribution is fixed or tightly controlled. It is well known from theoretical studies that the vertical distribution cannot be optimized by the condition of minimum wave resistance, so it can be neglected in the optimization procedure. The efficiency of the method was investigated by application to the forebody of the KCS, the well-known test object from the Gothenburg-2000 workshop. Variations of the longitudinal distribution of the volume were set on the sectional area curve as finite volume increments and then transferred to the lines plan with the help of special frame transformation methods. CFD towing simulations were carried out for the initial hull form and six modified variants. According to the simulation results, the examined modifications caused resistance increments in the range of 1.3–6.5 %. The optimization process was underpinned by data analysis based on a new hypothesis, according to which the resistance increments caused by separate longitudinal segments of the hull form obey the principle of superposition. The achieved results are presented as an optimum volume distribution embodied in the optimized hull form, whose resistance is decreased by 8.9 % with respect to the initial KCS hull form. Visualization of the wave patterns showed an attenuation of the transverse wave components and an intensification of the diverging wave components.
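A short sketch of the superposition hypothesis used in the data analysis, as we read it: the resistance increment of a combined modification is approximated by the sum of the increments attributed to the individual longitudinal segments. The per-segment values in the usage comment are placeholders, not the paper's data.

```python
import numpy as np

def predicted_resistance_increment(segment_increments, mask):
    """Superposition estimate of the total resistance increment.

    segment_increments -- per-segment resistance increments (e.g. in %),
                          each estimated from a single-segment CFD variant.
    mask -- 0/1 vector selecting which segment modifications a candidate
            hull variant combines.
    """
    return float(np.dot(segment_increments, np.asarray(mask, dtype=float)))

# Hypothetical illustration with three varied segments:
# dR = np.array([1.3, -2.0, 0.7])
# predicted_resistance_increment(dR, [1, 1, 0])  # -> -0.7
```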
-
Analysis of simplifications of numerical schemes for the Langevin equation: the effect of variations in the correlation of increments
Computer Research and Modeling, 2012, v. 4, no. 2, pp. 325-338
The possibility of simplifying the integration of the Langevin equation by varying the correlation between increments was investigated. Analytical expressions for a set of numerical schemes are presented. It is shown that the asymptotic limits of the squared velocity depend on the step size. The region of convergence and the convergence orders were estimated. It turned out that an incorrect correlation between increments decreases the accuracy down to the level of first-order methods even for schemes based on the exact solution.
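For context, a minimal sketch of an exact-propagator step for the free Langevin equation $dv = -\gamma v\,dt + \sigma\,dW$, $dx = v\,dt$, written to show where the correlation between the velocity and position noise increments enters; the paper's actual schemes and simplifications are not reproduced here.

```python
import numpy as np

def ou_langevin_step(x, v, gamma, sigma, h, rng):
    """One step of the exact propagator for dv = -gamma*v dt + sigma dW, dx = v dt.

    The velocity and position noise increments over a step are *correlated*
    Gaussians; using their exact joint covariance keeps <v^2> at its correct
    asymptotic value regardless of the step size h.
    """
    e1 = np.exp(-gamma * h)
    var_v = sigma**2 * (1.0 - e1**2) / (2.0 * gamma)
    var_x = sigma**2 / gamma**2 * (h - 2.0 * (1.0 - e1) / gamma
                                   + (1.0 - e1**2) / (2.0 * gamma))
    cov_vx = sigma**2 * (1.0 - e1)**2 / (2.0 * gamma**2)

    n1, n2 = rng.standard_normal(2)
    xi_v = np.sqrt(var_v) * n1
    xi_x = (cov_vx / np.sqrt(var_v)) * n1 \
         + np.sqrt(max(var_x - cov_vx**2 / var_v, 0.0)) * n2

    v_new = v * e1 + xi_v
    x_new = x + v * (1.0 - e1) / gamma + xi_x
    return x_new, v_new

# A common simplification is to draw xi_v and xi_x as independent normals;
# mishandling this correlation of increments is exactly the kind of change
# that pushes exact-propagator schemes back toward first-order accuracy.
```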
-
Synchronous components of financial time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655
The article proposes a method of joint analysis of multidimensional financial time series based on the evaluation of a set of properties of stock quotes in a sliding time window and the subsequent averaging of the property values over all analyzed companies. The main purpose of the analysis is to construct measures of the joint behavior of the time series reacting to the occurrence of a synchronous or coherent component. Coherence of the behavior of the characteristics of a complex system is an important feature that makes it possible to evaluate the approach of the system to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea of an increasing correlation of random fluctuations of the system parameters as it approaches the critical state. The increments of stock-value time series have a pronounced chaotic character with a large amplitude of individual noises, against which a weak common signal can be detected only on the basis of its correlation in different scalar components of the multidimensional time series. It is known that classical methods of analysis based on correlations between neighboring samples are ineffective for financial time series, since from the point of view of the correlation theory of random processes, increments of stock values formally have all the attributes of white noise (in particular, a flat spectrum and a delta-shaped autocorrelation function). In this connection, it is proposed to pass from analyzing the initial signals to examining the sequences of their nonlinear properties calculated in time fragments of small length. As such properties, the entropy of the wavelet coefficients in the decomposition over the Daubechies basis, multifractal parameters, and an autoregressive measure of signal nonstationarity are used. Measures of the synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. Shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization time intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
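One of the listed coherence measures, based on the moduli of pairwise correlation coefficients in a sliding window, can be sketched as follows; this is an illustration of the general idea rather than the authors' exact pipeline, and averaging over pairs and the window handling are our choices.

```python
import numpy as np

def sliding_mean_abs_correlation(props, window):
    """Mean |pairwise correlation| of property series in a sliding time window.

    props  -- array of shape (n_series, n_time): a nonlinear property (e.g. wavelet
              entropy) computed for each company on successive short fragments.
    window -- window length in samples.
    Returns one synchronization value per window position; values close to 1
    indicate strongly coherent behavior of the series.
    """
    n_series, n_time = props.shape
    out = []
    for t in range(n_time - window + 1):
        c = np.corrcoef(props[:, t:t + window])   # (n_series, n_series)
        iu = np.triu_indices(n_series, k=1)       # off-diagonal pairs
        out.append(np.abs(c[iu]).mean())
    return np.array(out)
```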
Keywords: financial time series, wavelets, entropy, multi-fractals, predictability, synchronization.
-
Simulation of equatorial plasma bubbles started from plasma clouds
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 463-476
Experimental, theoretical and numerical investigations of equatorial spread F, equatorial plasma bubbles (EPBs), plasma depletion shells, and plasma clouds continue to appear in a variety of new articles. Nonlinear growth, bifurcation, pinching, and atomic and molecular ion dynamics are considered in those articles. However, the authors of this article believe that not all reported parameters of EPB development are correct. For example, EPB bifurcation is highly questionable.
The maximum speed inside EPBs and the EPB development time are defined and studied for EPBs starting from one, two or three zones of increased density (initial plasma clouds). The development mechanism of EPBs is the Rayleigh–Taylor instability (RTI). The initial stage of EPB development falls within the time interval favorable for EPBs (when the linear growth increment is positive) and lasts 3000–7000 s for the Earth's equatorial ionosphere.
Numerous computational experiments were conducted using the original two-dimensional mathematical and numerical model MI2, which is similar to the US standard model SAMI2. The MI2 model is described in detail. The results obtained can be used both in other theoretical works and for planning and carrying out field experiments on the generation of F-spread in the Earth's ionosphere.
Numerical simulation was carried out for geophysical conditions favorable for EPB development. The numerical studies confirmed that the development time of EPBs from initial irregularities with increased density is significantly longer than the development time from zones of lowered density. It is shown that the developed irregularities interact with each other strongly and nonlinearly even when the initial plasma clouds are far apart. Moreover, this interaction is stronger than the interaction of EPBs starting from initial irregularities with decreased density. The results of the numerical experiments show good agreement of the developed EPB parameters with experimental data and with the theoretical studies of other authors.
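For reference, the linear growth rate of the collisional Rayleigh–Taylor instability that drives EPB development is commonly estimated by the textbook expression (not a formula quoted from the paper)

$$\gamma_{RT} \simeq \frac{g}{\nu_{in}}\,\frac{1}{n}\frac{\partial n}{\partial h} \sim \frac{g}{\nu_{in} L},$$

where $g$ is the gravitational acceleration, $\nu_{in}$ the ion–neutral collision frequency, and $L$ the vertical plasma density gradient scale length; the favorable time interval mentioned above is the period when this increment is positive.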
-
On some properties of short-wave statistics of FOREX time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669
Financial mathematics is one of the most natural applications for the statistical analysis of time series. Financial time series reflect the simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.
In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the behavior of the time series depending on the way time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009, and British pound–dollar for 2007) differ radically depending on the choice of the method of time measurement. When time is measured in ticks, the distribution of price increments is well described by the normal distribution already on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, their distribution continues to differ radically from the normal one up to scales of the order of minutes and even hours.
To explain this phenomenon, we investigate the statistical properties of the elementary increments in price and time. In particular, we show that the distribution of time between ticks for all three time series has long (1–2 orders of magnitude) power-law tails with an exponential cutoff at large times. We obtained approximate expressions for the distributions of waiting times for all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of waiting times that plays the main role in the deviation of the distribution of increments from the normal one. In conclusion, we discuss the possibility of applying a continuous time random walk (CTRW) model to describe the FOREX time series.
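The reported waiting-time statistics can be summarized by a truncated power law (our notation, with generic exponent and cutoff parameters)

$$\psi(\tau) \propto \tau^{-\alpha}\, e^{-\tau/\tau_0},$$

where the power-law part spans one to two orders of magnitude in $\tau$ and the exponential factor cuts off the tail at large waiting times. In a continuous time random walk built on such a $\psi(\tau)$, the broad waiting-time distribution alone keeps the physical-time increment distribution far from Gaussian even when the tick-time increments are already nearly normal, consistent with the observations above.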
-
Numerical solution of a two-dimensional quasi-static problem of thermoplasticity: residual thermal stress calculation for a multipass welding of heterogeneous steels
Computer Research and Modeling, 2012, v. 4, no. 2, pp. 345-356
A two-dimensional mathematical model was developed for estimating the stresses in welded joints formed during multipass welding of multilayer steels. The basis of the model is a system of equations that includes the Lagrange variational equation of incremental plasticity theory and the variational equation of heat conduction, which expresses the principle of M. Biot. The variational-difference method was used to solve the heat conduction problem and calculate the transient temperature field, and then, at each time step, the quasi-static problem of thermoplasticity. The numerical scheme is based on triangular meshes, which gives higher accuracy in describing the boundaries of structural elements compared to rectangular grids.