Search results for 'predictability':
Articles found: 75
  1. Grachev V.A., Nayshtut Yu.S.
    Buckling problems of thin elastic shells
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 775-787

    The article covers several mathematical problems relating to the elastic stability of thin shells, in view of inconsistencies recently identified between experimental data and predictions based on the shallow-shell theory. It is highlighted that the contradictions were caused by new algorithms that enabled updating the values of the so-called “low critical stresses” calculated in the 20th century and adopted as a buckling criterion for thin shallow shells by technical standards. The new calculations often find the low critical stress to be close to zero. Therefore, the low critical stress cannot be used as a safety factor in the buckling analysis of thin-walled structures, and the equations of the shallow-shell theory need to be replaced with other differential equations. The new theory also requires a buckling criterion ensuring a match between calculations and experimental data.

    The article demonstrates that the contradiction with the new experiments can be resolved within the dynamic nonlinear three-dimensional theory of elasticity. The stress at which bifurcation of dynamic modes occurs should be taken as the buckling criterion. The nonlinear form of the original equations gives rise to solitary (solitonic) waves that correspond to non-smooth displacements (patterns, dents) of the shells. It is essential that the solitons are present at all stages of loading and grow significantly as bifurcation approaches. The solitonic solutions are illustrated for a thin cylindrical momentless shell whose three-dimensional volume is simulated by a two-dimensional surface of the given thickness. It is noted that the pattern-generating waves can be detected (and their amplitudes identified) with acoustic or electromagnetic devices.

    Thus, it is technically possible to reduce the risk of failure of thin shells by monitoring the shape of the surface with acoustic devices. The article concludes by stating the mathematical problems that must be solved for a reliable numerical assessment of the buckling criterion for thin elastic shells.

    Views (last year): 23.
  2. Ivankov A.A., Finchenko V.S.
    Numerical study of thermal destruction of the “Chelyabinsk” meteorite when entering the Earth’s atmosphere
    Computer Research and Modeling, 2013, v. 5, no. 6, pp. 941-956

    A mathematical model for the numerical study of the thermal destruction of the “Chelyabinsk” meteorite during its entry into the Earth’s atmosphere is presented in the article. The study was conducted within an integrated approach in which the meteorite trajectory is computed together with the physical processes accompanying its motion: the flow field and radiation-convective heat transfer were determined, as well as the heating and destruction of the meteorite under the calculated heat load. The integrated approach makes it possible to determine the trajectories of space objects more precisely and to predict the area of their fall and destruction.
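
    To make the coupled trajectory-ablation idea concrete, here is a minimal single-body sketch of the kind of system such studies integrate: deceleration by drag, mass loss by ablation, and altitude decrease along an inclined path. This is not the authors' integrated model (which also resolves the flow field and radiation-convective heating); all coefficients and the Chelyabinsk-like initial state are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (assumptions, not values from the paper)
CD, CH = 1.0, 0.1              # drag and heat-transfer coefficients
Q = 8.0e6                      # effective heat of ablation, J/kg
RHO0, H_SCALE = 1.225, 7160.0  # exponential atmosphere: kg/m^3, scale height in m
RHO_M = 3300.0                 # meteoroid bulk density, kg/m^3
G = 9.81                       # gravity, m/s^2
GAMMA = np.radians(18.0)       # entry angle below the horizontal

def rhs(t, y):
    v, m, h = y
    rho_a = RHO0 * np.exp(-h / H_SCALE)
    # midsection area of a sphere with the current mass
    area = np.pi * (3.0 * m / (4.0 * np.pi * RHO_M)) ** (2.0 / 3.0)
    dv = -CD * rho_a * area * v**2 / (2.0 * m) + G * np.sin(GAMMA)  # drag + gravity
    dm = -CH * rho_a * area * v**3 / (2.0 * Q)                      # ablation
    dh = -v * np.sin(GAMMA)                                         # descent
    return [dv, dm, dh]

def burnt_out(t, y):           # stop when almost all mass is ablated
    return y[1] - 1.0
burnt_out.terminal = True

# Chelyabinsk-like entry state: ~19 km/s, ~1.2e7 kg, from 95 km altitude
sol = solve_ivp(rhs, (0.0, 60.0), [19000.0, 1.2e7, 95000.0],
                max_step=0.05, events=burnt_out)
```

    The peak of the heat load dm/dt * Q along the computed trajectory then indicates the altitude band where most of the destruction occurs.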

    Citations: 4 (RSCI).
  3. Dudarov S.P., Diev A.N., Fedosova N.A., Koltsova E.M.
    Simulation of properties of composite materials reinforced by carbon nanotubes using perceptron complexes
    Computer Research and Modeling, 2015, v. 7, no. 2, pp. 253-262

    Algorithms based on neural networks can be inefficient for small amounts of experimental data. The authors address this problem in the context of modelling the properties of ceramic composite materials reinforced with carbon nanotubes using a perceptron complex. This approach allowed a mathematical description of the object of study to be obtained with a minimal amount of input data (the number of required experimental samples decreased by a factor of 2–3.3). The authors considered different versions of perceptron complex structures and found that the most suitable structure is a perceptron complex with a breakthrough of two input variables; its relative error was only 6%. The selected perceptron complex was shown to be effective for predicting the properties of ceramic composites: the relative errors for the output components were 0.3%, 4.2%, 0.4%, 2.9%, and 11.8%.

    Views (last year): 2. Citations: 1 (RSCI).
  4. Scherbakov A.V.
    Economy of Chernavskii
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 397-417

    The present article sets out the scientific approach of Dmitry Sergeevich Chernavskii to the modelling of economic processes. It recounts the history of his work in economics, its milestones and achievements. One of the most important achievements in economic analysis was the prediction, by a team of scientists headed by D. S. Chernavskii, of the major crises that occurred in our country over the last 20 years: the default of 1998, the crisis of industrial production in the second half of the 2000s, and the 2008 crisis with the ensuing recession. As an example of dynamic analysis of global macroeconomic processes, a model of the functioning of the dollar as the world currency is presented. This example shows the possibility of seigniorage from issuing the dollar and gives a calculated “window of opportunity” within which dollars can be issued as the global currency without harm to the issuer's own economy.

    A model of the development of a closed society (without external economic relations) in the one-product approximation is considered as an example of dynamic analysis of the economy of a single state. The model is based on the principles of a market economy, i.e., the dynamics of prices is determined by the balance of supply and demand. It is shown that in the general case the state of market equilibrium is not unique: several steady states with different levels of production and consumption are possible. The effect of a targeted emission of money in the low-production state is considered, as illustrated by the sketch below. It is shown that, depending on its size, the emission can either bring about a transition to the high-production state or merely cause inflation without such a transition. The relationship of these results to the “Keynesian” and “monetarist” approaches is discussed.
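
    As a toy illustration of non-unique market equilibrium and the emission effect (emphatically not the model from the paper), consider a one-dimensional relaxation of output q toward the roots of a cubic excess-demand curve: two stable states separated by an unstable threshold, where a one-time injection large enough to cross the threshold switches the system to the high-production state. All numbers are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy excess-demand curve with roots q = 1, 2, 3: the states q = 1 (low
# production) and q = 3 (high production) are stable; q = 2 is the
# unstable threshold between them.
def rhs(t, q):
    return -(q - 1.0) * (q - 2.0) * (q - 3.0)

base = solve_ivp(rhs, (0.0, 50.0), [1.8]).y[0, -1]           # relaxes to q = 1
kicked = solve_ivp(rhs, (0.0, 50.0), [1.8 + 0.5]).y[0, -1]   # crosses 2, goes to q = 3
print(f"without emission: q = {base:.2f}, with emission: q = {kicked:.2f}")
```

    An injection smaller than the 0.2 gap to the threshold would leave the system in the low state, mirroring the dichotomy between a genuine transition and mere inflation.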

    Views (last year): 5. Citations: 2 (RSCI).
  5. Popov D.I., Klimchik A.S.
    Stiffness modeling for anthropomorphic robots
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 631-651

    In this work, a method for modeling anthropomorphic platforms is presented. An elastostatic stiffness model is used to determine positioning errors in the robot's lower limbs. One of the main obstacles to achieving a fast and stable gait is the deflections caused by the flexibility of the robot's elements. This problem is addressed using virtual-joint modeling to predict the stiffness and the deformations caused by the robot's weight and by external forces.

    To simulate the robot in the single-support phase, it is represented as a serial kinematic chain with its base at the contact point of the supporting leg and its end effector at the foot of the swing leg. In the double-support phase, the robot is modeled as a parallel manipulator with the end effector at the pelvis. Two cases of stiffness modeling are considered: one taking into account the compliance of both links and joints, and one taking into account the compliance of the joints only; in the latter case, the joint compliances also absorb part of the link compliances. The joint stiffness parameters have been identified for two anthropomorphic robots: a small platform and the full-sized AR-601M.

    Deflection maps, showing the errors as a function of the position of the robot's end effector in the workspace, were calculated using the identified stiffness parameters. The errors in the Z direction have the largest amplitude, owing to the influence of the robot's mass on its structure.
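
    The joints-only compliance case admits a compact sketch via the standard virtual-joint mapping from joint stiffnesses to Cartesian deflections, δx = J K_θ⁻¹ Jᵀ F. The Jacobian and stiffness values below are placeholders, not the identified AR-601M parameters.

```python
import numpy as np

# Hypothetical 6-joint serial leg in the single-support phase.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))                       # placeholder kinematic Jacobian
k_theta = np.array([2e4, 2e4, 1.5e4, 1e4, 8e3, 8e3])  # joint stiffnesses, N*m/rad

C_cart = J @ np.diag(1.0 / k_theta) @ J.T             # Cartesian compliance matrix
F = np.array([0.0, 0.0, -400.0, 0.0, 0.0, 0.0])       # external wrench, e.g. weight
dx = C_cart @ F                                       # end-effector deflection (m, rad)
print(dx)
```

    Evaluating dx over a grid of configurations (each with its own Jacobian) yields exactly the kind of deflection map the paper reports, with the Z component dominating under gravity loads.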

    Views (last year): 3.
  6. Emaletdinova L.Y., Mukhametzyanov Z.I., Kataseva D.V., Kabirova A.N.
    A method of constructing a predictive neural network model of a time series
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 737-756

    This article studies a method for constructing a predictive neural network model of a time series, covering the selection of input variables, the construction of a training sample, and training itself by the backpropagation method. The traditional approaches — the autoregressive model, the moving-average model, and the autoregressive moving-average model — approximate the time series by a linear dependence of the current value of the output variable on a number of its previous values. This restriction to linear dependence leads to significant forecasting errors.

    Data mining technologies based on neural network modeling make it possible to approximate the time series by a nonlinear dependence. Moreover, the process of constructing a neural network model (selecting the input variables, the number of layers and of neurons per layer, the activation functions of the neurons, and the optimal values of the connection weights) yields a predictive model in the form of an analytical nonlinear dependence.

    Determining the composition of input variables is one of the key steps in building neural network models in various application areas, as it affects their adequacy. The input variables are traditionally selected from physical considerations or by trial selection. In this work it is proposed to use the behavior of the autocorrelation and partial autocorrelation functions to determine the input variables of the predictive neural network model of a time series.

    This work proposes a method for determining the input variables of neural network models for stationary and non-stationary time series, based on constructing and analyzing autocorrelation functions. Based on the proposed method, an algorithm and a program were developed in Python that determine the input variables of the predictive neural network model — a perceptron — and build the model itself. The method was tested experimentally by constructing a predictive neural network model of a time series reflecting energy consumption in different regions of the United States, openly published by PJM Interconnection LLC (PJM), a regional transmission organization in the United States. This time series is non-stationary and exhibits both a trend and seasonality. Prediction of subsequent values of the time series from previous values with the constructed neural network model showed high approximation accuracy, which demonstrates the effectiveness of the proposed method.
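
    A minimal sketch of the lag-selection step, assuming statsmodels is used for the correlation functions; the synthetic hourly series and the approximate 95% significance band are illustrative, not the PJM data.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# Synthetic non-stationary hourly series: trend + daily seasonality + noise
rng = np.random.default_rng(1)
t = np.arange(2000)
series = 10 + 0.002 * t + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

# For a non-stationary series, analyze first differences
diffed = np.diff(series)

nlags = 48
band = 1.96 / np.sqrt(diffed.size)        # approximate 95% significance band
p = pacf(diffed, nlags=nlags)
# Lags with significant partial autocorrelation become perceptron inputs
input_lags = [k for k in range(1, nlags + 1) if abs(p[k]) > band]
print(input_lags)                         # dominated by lags near 1 and 24
```

    The selected lags define both the columns of the training sample and the size of the perceptron's input layer.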

  7. Klenov S.L., Wegerle D., Kerner B.S., Schreckenberg M.
    Prediction of moving and unexpected motionless bottlenecks based on three-phase traffic theory
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 319-363

    We present a simulation methodology for the prediction of “unexpected” bottlenecks, i.e., bottlenecks that occur suddenly and unexpectedly for drivers on a highway. Such unexpected bottlenecks can be either a moving bottleneck (MB) caused by a slow-moving vehicle or a motionless bottleneck caused by a stopped vehicle (SV). Based on simulations of a stochastic microscopic traffic flow model in the framework of Kerner's three-phase traffic theory, we show that reliable prediction of “unexpected” bottlenecks is possible through the use of a small share of probe vehicles (FCD) randomly distributed in traffic flow. We have found that the time dependence of the probability of MB and SV prediction, as well as the accuracy of the estimation of MB and SV location, depend considerably on the sequences of phase transitions from free flow (F) to synchronized flow (S) (F→S transitions) and back from synchronized flow to free flow (S→F transitions), as well as on speed oscillations in synchronized flow at the bottleneck. In the simulation approach, the identification of F→S and S→F transitions at an unexpected bottleneck is made in accordance with Kerner's three-phase traffic theory. The presented methodology allows both the prediction of an unexpected bottleneck that suddenly occurs on a highway and the determination of its origin, i.e., whether the bottleneck arose from an MB or an SV.

  8. The work is devoted to the problem of creating a model with stationary parameters from historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that the unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling, and assesses the applicability of various sampling options as a tool for reducing the level of uncertainty.

    We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic reconfiguration to new conditions. It is based on the combined use of sampling and the representation of the data of each time period as increments relative to the initial point of that period (see the sketch below). This makes it possible to reduce the number of parameters that characterize the unknown disturbances, with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs of fitting the model are minimized. Both linear and, in some cases, nonlinear models can be configured.

    The method was used to develop a model of the closed cooling of steel on a continuous hot-dip galvanizing line for steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes in a closed cooling section under conditions of unknown disturbances, including low-frequency components.
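
    A minimal sketch of the increments representation described above, with hypothetical names throughout: each period's samples are re-expressed relative to that period's first sample, so period-specific offsets from slow or trend disturbances drop out before an ordinary least-squares fit.

```python
import numpy as np

def to_increments(X, y, period_id):
    """Re-express each period's data as increments from its first sample."""
    dX, dy = [], []
    for p in np.unique(period_id):
        idx = np.flatnonzero(period_id == p)
        dX.append(X[idx[1:]] - X[idx[0]])   # inputs relative to the period start
        dy.append(y[idx[1:]] - y[idx[0]])   # target relative to the period start
    return np.vstack(dX), np.concatenate(dy)

# Usage on hypothetical arrays: X (n_samples, n_features), y (n_samples,),
# period_id (n_samples,) labeling the time period of each sample.
# dX, dy = to_increments(X, y, period_id)
# coef, *_ = np.linalg.lstsq(dX, dy, rcond=None)   # stationary linear model
```

    The transform removes one unknown offset per period instead of estimating it, which is where the reduction in the dimensionality of the search problem comes from.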

  9. Succi G., Ivanov V.V.
    Comparison of mobile operating systems based on models of growth reliability of the software
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334

    Evaluation of software reliability is an important part of the process of developing modern software. Many studies aim to improve models for measuring and predicting the reliability of software products, but little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its great practical importance (including for managing software development), a complete and proven comparison methodology does not exist. In this article we propose a software reliability comparison methodology that makes extensive use of software reliability growth models (SRGMs). The proposed methodology provides a certain level of flexibility and abstraction while keeping objectivity, i.e., it provides measurable comparison criteria. Moreover, given the comparison methodology together with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on three open-source mobile operating systems: Sailfish, Tizen, and CyanogenMod.

    A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is the strongest in terms of reliability. To this end we performed a GQM analysis and identified 3 questions and 8 metrics. Comparing the metrics, Sailfish is most often the best-performing OS; however, it is also the OS that performs the worst in most cases. By contrast, Tizen scores best in 3 cases out of 8, but worst in only one case out of 8.
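
    The abstract does not name the specific SRGMs used; as one common example, a Goel–Okumoto model can be fitted to a cumulative defect curve, and its projected total provides one measurable comparison criterion. The defect data below are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto SRGM: expected cumulative defects m(t) = a * (1 - exp(-b*t))
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Illustrative data: weeks since release and cumulative defect counts
t = np.array([1, 2, 4, 6, 9, 12, 16, 20, 25, 30], dtype=float)
cum = np.array([5, 9, 16, 21, 27, 31, 35, 37, 39, 40], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, cum, p0=(cum[-1] * 1.5, 0.1))
print(f"projected total defects a = {a:.1f}, detection rate b = {b:.3f}")
# a - cum[-1] estimates the residual defects: one possible metric for
# ranking systems such as Sailfish, Tizen, and CyanogenMod.
```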

    Views (last year): 29.
  10. Shabanov A.E., Petrov M.N., Chikitkin A.V.
    A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273

    Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. An experiment yields an intensity curve; the experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine from these data the relative concentrations of the particles of each size class present in the solution. The article presents a method for constructing and using a neural network, trained on synthetic data, to determine the PSD in a solution in the range of 1–500 nm. The network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm and 400 nm and for a solution containing both sizes. The results of the neural network are compared with those of classical linear methods, whose disadvantage is the difficulty of choosing the degree of regularization: too much regularization overly smooths the particle size distribution curves, while weak regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for large particles; for small sizes the prediction is worse, but the error decreases quickly as the particle size increases.
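
    The layer list in the abstract translates directly into a small dense network; a Keras sketch follows. The input size (number of spectrum samples) and the dropout rate are assumptions, since the abstract does not state them.

```python
import tensorflow as tf

N_SPECTRUM = 100   # assumed number of samples of the measured intensity spectrum

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_SPECTRUM,)),
    tf.keras.layers.Dense(60, activation="relu"),
    tf.keras.layers.Dense(45, activation="relu"),
    tf.keras.layers.Dropout(0.2),            # assumed dropout rate
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dense(1),                # network output, per the abstract
])
model.compile(optimizer="adam", loss="mse")  # trained on synthetic spectra
model.summary()
```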

    Views (last year): 16.
