Search results for 'probability':
Articles found: 70
  1. Ryashko L.B., Slepukhina E.S.
    Analysis of additive and parametric noise effects on Morris – Lecar neuron model
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 449-468

    This paper is devoted to the analysis of the effect of additive and parametric noise on the processes occurring in a nerve cell. The study is carried out on the example of the well-known Morris – Lecar model described by a two-dimensional system of ordinary differential equations. One of the main properties of a neuron is excitability, i.e., the ability to respond to external stimuli with an abrupt change of the electric potential on the cell membrane. This article considers a set of parameters for which the model exhibits class 2 excitability. The dynamics of the system is studied under variation of the external current parameter. We consider two parametric zones: the monostability zone, where a stable equilibrium is the only attractor of the deterministic system, and the bistability zone, characterized by the coexistence of a stable equilibrium and a limit cycle. We show that in both cases random disturbances result in the phenomenon of stochastic generation of mixed-mode oscillations (i.e., alternating oscillations of small and large amplitudes). In the monostability zone this phenomenon is associated with a high excitability of the system, while in the bistability zone it occurs due to noise-induced transitions between attractors. The phenomenon is confirmed by changes in the probability density functions of random trajectories, power spectral densities, and interspike interval statistics. The action of additive and parametric noise is compared: under parametric noise, the stochastic generation of mixed-mode oscillations is observed at lower intensities than under additive noise. For the quantitative analysis of these stochastic phenomena we propose and apply an approach based on the stochastic sensitivity function technique and the method of confidence domains. In the case of a stable equilibrium, the confidence domain is an ellipse; for a stable limit cycle, it is a confidence band. The study of the mutual location of the confidence bands and the boundary separating the basins of attraction for different noise intensities allows us to predict the emergence of noise-induced transitions. The effectiveness of this analytical approach is confirmed by the good agreement of the theoretical estimates with the results of direct numerical simulations.

    Views (last year): 11.
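For a sense of the machinery involved in such simulations, here is a minimal Euler–Maruyama sketch for a two-dimensional excitable system under additive noise. The FitzHugh–Nagumo right-hand side is used as a generic stand-in for the paper's Morris – Lecar model, and all parameter values are illustrative:

```python
import numpy as np

def euler_maruyama(f, x0, eps, dt, n_steps, rng):
    """Integrate dx = f(x) dt + eps dW by the Euler-Maruyama scheme."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for i in range(n_steps):
        x = x + f(x) * dt + eps * np.sqrt(dt) * rng.standard_normal(x.size)
        traj[i + 1] = x
    return traj

def fhn(x, a=0.7, b=0.8, c=3.0):
    # FitzHugh-Nagumo: a stand-in 2D excitable system (illustrative only)
    v, w = x
    return np.array([c * (v - v**3 / 3 + w), -(v - a + b * w) / c])

rng = np.random.default_rng(0)
traj = euler_maruyama(fhn, [0.0, 0.0], eps=0.1, dt=0.01, n_steps=5000, rng=rng)
```

Plotting such noisy trajectories against the deterministic attractors is the usual first step before applying confidence-domain estimates.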
  2. Stepin Y.P., Leonov D.G., Papilina T.M., Stepankina O.A.
    System modeling, risks evaluation and optimization of a distributed computer system
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359

    The article deals with the problem of the operational reliability of a distributed system. The system core is an open integration platform that provides interaction between various gas transportation modeling software packages. Some of them provide access through thin clients using the cloud “software as a service” technology. Mathematical models of operation, transmission and computing are intended to ensure the operation of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements present in large numbers – thin clients and computing modules – and individual ones – a server and a network manager (message broker). Together, they form interacting Markov random processes. The interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements in other groups.

    The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of an estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client, the following are calculated: the lost profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.

    Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
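The stationary mode of such a continuous-time Markov chain reduces to a linear system of balance (Chapman–Kolmogorov) equations. A minimal sketch with a hypothetical 3-state generator matrix (states and rates are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical 3-state generator matrix Q (rows sum to zero), e.g.
# 0 = element idle, 1 = computing, 2 = failed/recovering.
Q = np.array([
    [-2.0,  2.0,  0.0],
    [ 3.0, -3.5,  0.5],
    [ 1.0,  0.0, -1.0],
])

# The stationary distribution pi solves pi @ Q = 0 with sum(pi) = 1.
# Replace one (redundant) balance equation by the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
```

The same construction scales to the many-element case by writing balance equations for the average numbers of elements in each state.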

  3. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586

    We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems. A series of problems of increasing complexity, determined by the volume of initial data, is presented; the tasks require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of necessary operations, we can conclude whether the subject solves the problems in a purely sequential way or connects additional brain resources to work in parallel mode. In a previously published experimental work, the person solving the associative problem recognized color images with meaningful content. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type contains associative calculations and admits a parallel solution algorithm. The other type is a control one, containing problems whose calculations are not associative, so that parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative, and a parallel strategy significantly speeds up the solution with relatively small additional resources. As a control series of problems (to separate parallel work from the acceleration of a sequential algorithm) we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient. Therefore, the transition of a person to a parallel algorithm for solving this problem is almost impossible, and the processing of input information can be accelerated only by increasing its speed. Comparing the dependence of the solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to the increasing complexity of the problem: uniform sequential, accelerated sequential, parallel computing (where possible), or a strategy undefined for this method. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke subject associations: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon. It is based on the appearance of a second set of initial data, which arises in a person as a result of recognizing the depicted objects.

  4. Pham C.T., Phan M.N., Tran T.T.
    Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938

    Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and Automatic Relevance Determination (ARD) with Bayesian Deep Neural Networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.

    To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) for image classification and a dataset of Macroscopic Images of Wood, which is compiled from multiple macroscopic images of wood datasets. Our method is applied to established architectures like Visual Geometry Group (VGG) and Residual Network (ResNet). The results demonstrate significant improvements. The model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
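As a rough illustration of how ARD-style pruning can act on a variational posterior, the sketch below scores each weight by its signal-to-noise ratio |mu|/sigma and drops low-SNR weights. The posterior parameters and the threshold are synthetic placeholders, not the paper's actual procedure:

```python
import numpy as np

# Hypothetical variational posterior over 1000 weights: mean mu and std sigma
# per weight, as a BDNN might produce. ARD-style pruning removes weights whose
# signal-to-noise ratio |mu| / sigma falls below a threshold.
rng = np.random.default_rng(42)
mu = rng.normal(0.0, 1.0, size=1000)
sigma = rng.uniform(0.1, 2.0, size=1000)

snr = np.abs(mu) / sigma
keep = snr > 1.0                    # threshold is an illustrative choice
pruned_fraction = 1.0 - keep.mean()
```

In practice the threshold (or a sparsity-inducing prior) is tuned so that accuracy is preserved while the model shrinks.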

  5. Pivovarova A.S., Steryakov A.A.
    Modeling the behavior preceding a market crash in a hierarchically organized financial market
    Computer Research and Modeling, 2011, v. 3, no. 2, pp. 215-222

    We consider the hierarchical model of financial crashes introduced by A. Johansen and D. Sornette, which reproduces the log-periodic power law behavior of the price before the critical point. In order to build a generalization of this model, we introduce a dependence of the influence exponent on the ultrametric distance between agents. Much attention is paid to the problem of the universality of the critical point, which is investigated by comparing the probability density functions of the crash times for systems with various total numbers of agents.

    Views (last year): 1.
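The log-periodic power law mentioned above has a compact closed form for the (log-)price approaching the critical time tc; a sketch with illustrative parameter values:

```python
import numpy as np

def lppl(t, tc, A, B, C, m, omega, phi):
    """Log-periodic power law for the log-price before a critical time tc:
    A + B*(tc-t)^m * (1 + C*cos(omega*log(tc-t) + phi))."""
    dt = tc - t
    return A + B * dt**m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

t = np.linspace(0.0, 0.99, 200)   # times approaching tc = 1
p = lppl(t, tc=1.0, A=10.0, B=-1.0, C=0.1, m=0.5, omega=8.0, phi=0.0)
```

With B < 0 and 0 < m < 1, the price rises toward A with accelerating log-periodic oscillations as t approaches tc, which is the signature the hierarchical model reproduces.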
  6. Kolchev A.A., Nedopekin A.E.
    On one particular model of a mixture of the probability distributions in the radio measurements
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 563-568

    This paper presents a mixture model of the probability distributions of signal and noise. Typically, when analyzing data under conditions of uncertainty, one has to use nonparametric tests. However, such an analysis of nonstationary data, under uncertainty about the mean of the distribution and its parameters, may be ineffective. The model makes it feasible to handle a case of a priori nonparametric uncertainty in signal processing in which the signal and the noise belong to different general populations.

    Views (last year): 3. Citations: 7 (RSCI).
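A two-component mixture of signal and noise distributions of the kind discussed can be sampled as follows (Gaussian components with illustrative weights and parameters; the paper's actual model is more specific):

```python
import numpy as np

# Mixture: noise ~ N(0, 1) with weight w, signal ~ N(5, 0.5) with weight 1 - w.
# All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
w = 0.7
n = 100_000
is_noise = rng.random(n) < w
x = np.where(is_noise,
             rng.normal(0.0, 1.0, n),
             rng.normal(5.0, 0.5, n))
```

Separating such a sample back into its components is exactly the situation where signal and noise come from different general populations.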
  7. Shumov V.V.
    Consideration of psychological factors in models of the battle (conflict)
    Computer Research and Modeling, 2016, v. 8, no. 6, pp. 951-964

    The course and outcome of a battle largely depend on the morale of the troops, characterized by the percentage of losses in killed and wounded at which the troops still continue to fight. Every battle ends as a psychological act: one of the parties refuses to continue it. Typically, battle models take the psychological factor into account through the termination condition of the Lanchester equations (the battle ends when the strength of one of the parties becomes zero). It is emphasized that models of the Lanchester type satisfactorily describe the dynamics of a battle only in its initial stages. To resolve this contradiction, it is proposed to use a modification of the Lanchester equations which takes into account the fact that at any moment of the battle the enemy fires only at fighters who have not been hit and have not abandoned the battle. The obtained differential equations are solved by a numerical method and make it possible to take into account the influence of the psychological factor on the dynamics and to estimate the completion time of the conflict. Computational experiments confirm the fact, well known in military theory, that a battle usually ends with the soldiers of one of the parties refusing to continue it (avoidance of combat in various forms). Along with the models of temporal and spatial dynamics, it is proposed to use a modification of the contest success function of S. Skaperdas, based on the principles of combat. The estimate of the probability of victory of one side in the battle takes into account the parties' tolerance of bloody casualties and the growth of military superiority.

    Views (last year): 7. Citations: 4 (RSCI).
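For context, the classic Lanchester square-law dynamics with a crude morale cutoff can be integrated in a few lines. The break-point rule here (a side quits after losing a fixed fraction of its strength) is only a stand-in for the paper's modification, and all numbers are illustrative:

```python
def lanchester(x0, y0, a, b, dt=0.01, break_frac=0.3):
    """Classic Lanchester square law: dx/dt = -a*y, dy/dt = -b*x,
    integrated by the Euler method. A side quits the battle after losing
    break_frac of its initial strength (illustrative morale rule)."""
    x, y, t = x0, y0, 0.0
    while x > (1 - break_frac) * x0 and y > (1 - break_frac) * y0:
        x, y = x - a * y * dt, y - b * x * dt
        t += dt
    return x, y, t

x, y, t = lanchester(1000.0, 800.0, a=0.05, b=0.05)
```

With equal attrition rates, the smaller side reaches its break point first, so the model terminates before either strength reaches zero, unlike the classic formulation.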
  8. Priadein R.B., Stepantsov M.Y.
    On a possible approach to a sport game with discrete time simulation
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 271-279

    The paper proposes an approach to the simulation of a sports game consisting of a discrete set of separate competitions. According to this approach, such a competition is considered as a random process, in general a non-Markovian one. At first we treat the flow of the game as a Markov process, obtaining recursive relationships between the probabilities of reaching particular score states in a tennis match, as well as secondary indicators of the game, such as the expectation and variance of the number of serves needed to finish the game. Then we use a simulation system modeling the match, which allows an arbitrary change of the probabilities of the outcomes of the competitions that compose the match; for instance, the probabilities may depend on the results of previous competitions. This paper thus deals with a modification of the model previously proposed by the authors for sports games with continuous time.

    The proposed approach allows us to evaluate not only the probability of the final outcome of the match, but also the probabilities of reaching each of the possible intermediate results, as well as secondary indicators of the game, such as the number of separate competitions it takes to finish the match. The paper includes a detailed description of the construction of a simulation system for a single game of a tennis match; simulating a set and the whole match is then considered by analogy. We establish some statements concerning the fairness of tennis serving rules, understood as the independence of the outcome of a competition from the right to serve first. We perform a simulation of a cancelled ATP series match, obtaining its most probable intermediate and final outcomes for three different possible variants of the course of the match.

    The main result of this paper is the developed method of match simulation, applicable not only to tennis but also to other sports games with discrete time.

    Views (last year): 9.
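The Markov recursion over score states can be sketched for a single service game. Here the per-point win probability p is held constant, whereas the paper's simulation system lets it vary with previous outcomes (assumed simplification):

```python
from functools import lru_cache

def game_win_prob(p):
    """Probability that the server wins a tennis game, winning each point
    independently with probability p (Markov recursion over the score)."""
    @lru_cache(maxsize=None)
    def w(a, b):
        # a, b = points won by server / receiver
        if a >= 4 and a - b >= 2:
            return 1.0
        if b >= 4 and b - a >= 2:
            return 0.0
        if a == b == 3:
            # deuce: closed form for getting two points ahead first
            return p * p / (p * p + (1 - p) * (1 - p))
        return p * w(a + 1, b) + (1 - p) * w(a, b + 1)
    return w(0, 0)
```

The recursion illustrates the amplification effect: a modest per-point edge (p = 0.6) turns into roughly a three-to-one advantage at the game level.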
  9. Beloborodova E.I., Tamm M.V.
    On some properties of short-wave statistics of FOREX time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669

    Financial mathematics is one of the most natural applications for the statistical analysis of time series. Financial time series reflect simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.

    In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the time series behavior depending on the way time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009 and British pound – dollar for 2007) radically differs depending on the choice of the method of time measurement. When measuring time in ticks, the distribution of price increments can be well described by the normal distribution already on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, the distribution of increments continues to differ radically from the normal up to scales of the order of minutes and even hours.

    To explain this phenomenon, we investigate the statistical properties of the elementary increments in price and time. In particular, we show that the distribution of the time between ticks for all three time series has a long (1–2 orders of magnitude) power-law tail with an exponential cutoff at large times. We obtain approximate expressions for the distributions of waiting times in all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of waiting times that plays the most important role in the deviation of the distribution of increments from the normal. Finally, we discuss the possibility of applying a continuous-time random walk (CTRW) model to describe the FOREX time series.

    Views (last year): 10.
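A continuous-time random walk of the type discussed can be sketched by pairing Gaussian per-tick price increments with heavy-tailed waiting times; the Pareto tail exponent and all other parameters are illustrative:

```python
import numpy as np

# CTRW sketch: i.i.d. normal price increments per tick, heavy-tailed
# (Pareto) waiting times between ticks.
rng = np.random.default_rng(7)
n_ticks = 50_000
increments = rng.normal(0.0, 1.0, n_ticks)   # per-tick price changes
waits = rng.pareto(1.5, n_ticks) + 1.0       # waiting times, all >= 1

price = np.cumsum(increments)   # price as a function of tick number
times = np.cumsum(waits)        # physical time of each tick
```

Sampling `price` at equal intervals of `times` (physical time) then yields heavy-tailed increments even though the per-tick increments are Gaussian, which is the qualitative effect reported in the paper.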
  10. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

    The work continues research on a person's ability to improve the productivity of information processing by using parallel work or by increasing the performance of analyzers. A person receives a series of tasks whose solution requires the processing of a certain amount of information. The time and the validity of each decision are recorded. The dependence of the average solution time on the amount of information in the problem is determined over the correctly solved problems. In accordance with the proposed method, the problems contain calculations of expressions in two algebras, one of which is associative and the other non-associative. To facilitate the work of the subjects, figurative graphic images of the algebra elements were used in the experiment. Non-associative calculations were implemented in the form of the game “rock-paper-scissors”: it was necessary to determine the winning symbol in a long line of these figures, considering that they appear sequentially from left to right and each plays against the previous winning symbol. Associative calculations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present. In each problem, no more than one picture was missing. Computation in an associative algebra allows parallel counting, while in the absence of associativity only sequential computations are possible. Therefore, the analysis of the time for solving a series of problems distinguishes uniform sequential, accelerated sequential, and parallel computing strategies. In the experiments it was found that all subjects used a uniform sequential strategy to solve the non-associative problems. For the associative task, all subjects used parallel computing, and some used parallel computing with acceleration as the complexity of the task grew. A small proportion of the subjects, judging by the evolution of the solution time, supplemented the parallel count at high complexity with a sequential stage of calculations (possibly to control the solution). We developed a special method for assessing the rate at which a person processes input information. It allowed us to estimate the level of parallelism of the calculation in the associative task; parallelism of level two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition. Apparently the difference is the processing time actually spent on the calculation. For the associative problem with the minimum amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably due to the fact that for a small number of characters, recognition almost exhausts the calculations in the non-associative problem used.

    Views (last year): 16.
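The parallelism estimate described above amounts to comparing fitted per-symbol time slopes between the sequential (non-associative) and associative tasks; a sketch with invented (hypothetical) timing data:

```python
import numpy as np

# Hypothetical solution times (seconds) vs number of symbols n for the
# non-associative (sequential) and associative tasks. The parallelism level
# is estimated as the ratio of the fitted per-symbol time slopes.
n = np.array([4, 8, 12, 16, 20], dtype=float)
t_seq = np.array([2.8, 5.5, 8.1, 10.9, 13.6])   # ~0.67 s per symbol
t_assoc = np.array([1.6, 2.9, 4.2, 5.4, 6.8])   # ~0.32 s per symbol

slope_seq = np.polyfit(n, t_seq, 1)[0]
slope_assoc = np.polyfit(n, t_assoc, 1)[0]
parallelism = slope_seq / slope_assoc            # ~2 parallel channels
```

With these synthetic numbers the estimate comes out near two, consistent with the level-two-to-three range the experiments registered.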

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

