On the kinetics of entropy of a system with discrete microscopic states
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1207-1236

An isolated system possessing a discrete set of microscopic states is considered. The system performs spontaneous random transitions between the microstates. Kinetic equations for the probabilities of the system residing in the various microstates are formulated, along with a general dimensionless expression for the entropy of such a system as a function of the probability distribution. Two problems are stated: 1) to study how possible unequal probabilities of different microstates, in particular when the system is in internal equilibrium, affect the system entropy, and 2) to study the kinetics of the microstate probability distribution and the entropy evolution of the system in nonequilibrium states. The kinetics of the transitions between microstates is assumed to be first-order. Two variants of possible nonequiprobability of the microstates are considered: i) the microstates form two subgroups whose probabilities are similar within each subgroup but differ between the subgroups, and ii) the microstate probabilities vary arbitrarily around the point at which they are all equal. It is found that, for a fixed total number of microstates, the deviations of the entropy from the value corresponding to the equiprobable distribution are extremely small. This rigorously substantiates the known hypothesis of the equiprobability of microstates at thermodynamic equilibrium. On the other hand, several characteristic examples show that the structure of the random transitions between microstates exerts a considerable effect on the rate and mode of establishment of the system's internal equilibrium, on the time dependence of the entropy, and on the expression for the entropy production rate.
Under certain transition schemes, the transients can contain fast and slow components or take the form of damped oscillations. The condition for universality and stability of the equilibrium microstate distribution is that for any pair of microstates a sequence of transitions should exist that provides passage from one microstate to the other; consequently, microstate traps must be absent.
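The first-order kinetics described above can be illustrated with a minimal sketch: a master equation for the microstate probabilities is integrated numerically, and the dimensionless entropy is tracked until the equiprobable distribution is reached. The four-state chain, the rate matrix, and the time step below are all illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical 4-microstate system; K[i, j] is the first-order rate of the
# transition j -> i (the values are illustrative, not from the paper).
# The chain 0 - 1 - 2 - 3 is connected, so no microstate traps exist.
K = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

def entropy(p, eps=1e-15):
    """Dimensionless entropy S = -sum_i p_i ln p_i."""
    return -np.sum(p * np.log(p + eps))

def step(p, dt):
    """One Euler step of the master equation dp_i/dt = sum_j (K_ij p_j - K_ji p_i)."""
    gain = K @ p                  # inflow into each microstate
    loss = K.sum(axis=0) * p      # outflow from each microstate
    return p + dt * (gain - loss)

p = np.array([1.0, 0.0, 0.0, 0.0])   # start entirely in microstate 0
for _ in range(20000):
    p = step(p, 1e-3)

# With a connected transition graph the distribution relaxes to
# equiprobability and the entropy approaches ln(4).
```

Because every pair of microstates is linked by a sequence of transitions, the stationary distribution is uniform and the entropy tends monotonically to its maximum, ln N, in agreement with the condition stated above.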
-
Mixed algorithm for modeling of charge transfer in DNA on long time intervals
Computer Research and Modeling, 2010, v. 2, no. 1, pp. 63-72

Charge transfer in DNA is simulated by a discrete Holstein model «quantum particle + classical site chain + interaction». The thermostat temperature is taken into account as a stochastic force acting on the classical sites (Langevin equation). Thus, the dynamics of charge migration along the chain is described by an ODE system with a stochastic right-hand side. To integrate such a system numerically, algorithms of order 1 or 2 are usually applied. We developed a «mixed» algorithm having 4th order of accuracy for the fast «quantum» variables (note that in the quantum subsystem the sum of the probabilities of the charge being on each site must remain constant in time) and 2nd order for the slow classical variables, which are affected by the stochastic force. The algorithm allows us to calculate trajectories on longer time intervals than standard algorithms. As an example, we present model calculations of polaron disruption in a homogeneous chain caused by temperature fluctuations.
-
Mathematical modeling of neutron transfers in nuclear reactions considering spin-orbit interaction
Computer Research and Modeling, 2010, v. 2, no. 4, pp. 393-401

A difference scheme for the numerical solution of a time-dependent system of two Schrödinger equations with a spin-orbit interaction operator for a two-component spinor wave function is proposed on the basis of the splitting method for the time-dependent Schrödinger equation. Computer simulations of the evolution of the external neutrons' wave functions with different values of the total angular momentum projection onto the internuclear axis, and of the probabilities of their transfer, are carried out for head-on collisions of 18O and 58Ni nuclei.
-
Analysis of additive and parametric noise effects on Morris – Lecar neuron model
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 449-468

This paper analyzes the effect of additive and parametric noise on the processes occurring in a nerve cell, using the well-known Morris – Lecar model described by a two-dimensional system of ordinary differential equations. One of the main properties of a neuron is excitability, i.e., the ability to respond to external stimuli with an abrupt change of the electric potential on the cell membrane. This article considers a set of parameters for which the model exhibits class 2 excitability. The dynamics of the system is studied under variation of the external current parameter. We consider two parametric zones: the monostability zone, where a stable equilibrium is the only attractor of the deterministic system, and the bistability zone, characterized by the coexistence of a stable equilibrium and a limit cycle. We show that in both cases random disturbances result in the stochastic generation of mixed-mode oscillations (i.e., alternating oscillations of small and large amplitudes). In the monostability zone this phenomenon is associated with the high excitability of the system, while in the bistability zone it occurs due to noise-induced transitions between attractors. The phenomenon is confirmed by changes in the probability density functions of random trajectories, the power spectral densities, and the interspike interval statistics. The action of additive and parametric noise is compared: under parametric noise, the stochastic generation of mixed-mode oscillations is observed at lower intensities than under additive noise. For the quantitative analysis of these stochastic phenomena we propose and apply an approach based on the stochastic sensitivity function technique and the method of confidence domains.
In the case of a stable equilibrium, this confidence domain is an ellipse. For the stable limit cycle, this domain is a confidence band. The study of the mutual location of confidence bands and the boundary separating the basins of attraction for different noise intensities allows us to predict the emergence of noise-induced transitions. The effectiveness of this analytical approach is confirmed by the good agreement of theoretical estimations with results of direct numerical simulations.
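A sketch of the kind of simulation behind such results: the deterministic Morris – Lecar equations perturbed by additive noise, integrated with the Euler – Maruyama scheme. The parameter set below is a standard class 2 (Hopf-case) set from the literature; the external current, noise intensity, time step, and initial condition are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Standard Morris - Lecar parameters for the class 2 (Hopf) case.
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
V_L, V_Ca, V_K = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext = 88.0                      # external current (illustrative value)

def drift(V, w):
    """Deterministic right-hand side of the Morris - Lecar system."""
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf - w) / tau_w
    return dV, dw

def simulate(sigma, T=1000.0, dt=0.05, seed=0):
    """Euler - Maruyama integration with additive noise on the voltage equation."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    V, w = -40.0, 0.0
    Vs = np.empty(n)
    for k in range(n):
        dV, dw = drift(V, w)
        V += dV * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dw * dt
        Vs[k] = V
    return Vs

Vs = simulate(sigma=1.0)
```

Sweeping `sigma` and collecting the interspike intervals from `Vs` would reproduce the type of statistics discussed above; parametric noise would instead be injected into one of the conductance parameters rather than added to the voltage equation.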
-
System modeling, risk evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359

The article deals with the reliability of operation of a distributed system whose core is an open integration platform providing interaction of various software packages for modeling gas transportation. Some of them provide access through thin clients based on the cloud technology “software as a service”. Mathematical models of operation, transmission, and computing ensure the functioning of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman – Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e., a message broker). Together they form interacting Markov random processes; the interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements in the other groups.
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client the following are calculated: the risk of lost profit, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
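For a single continuous-time Markov chain, the stationary Chapman – Kolmogorov equations reduce to a linear system for the stationary distribution, which the following sketch solves directly. The three states and the rate values are hypothetical and are not taken from the article.

```python
import numpy as np

# Illustrative 3-state continuous-time Markov chain (e.g., a system element
# that is idle, busy, or failed); the rates are hypothetical.
# Q[i, j] (i != j) is the transition rate i -> j; rows sum to zero.
Q = np.array([[-0.5,  0.4,  0.1],
              [ 0.6, -0.7,  0.1],
              [ 0.9,  0.1, -1.0]])

def stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1, the stationary Chapman-Kolmogorov system."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # append the normalization row
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary(Q)   # stationary probabilities of the three states
```

Multiplying such stationary probabilities by the number of identical elements gives the average numbers (mathematical expectations) the article works with; the coupling between element groups described above would make the entries of `Q` depend on those averages, turning this linear solve into one step of a fixed-point iteration.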
-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586

We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems. The subject is presented with a series of problems of increasing complexity, determined by the volume of initial data, that require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of necessary operations, we can conclude whether the subject solves problems in a purely sequential way or connects additional brain resources to the solution in parallel mode. In a previously published experiment, a person solving an associative problem recognized color images depicting meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type contains associative calculations and allows a parallel solution algorithm. The other is a control type, containing problems in which the calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative; a parallel strategy significantly speeds up its solution at a relatively small cost in additional resources. As a control series of problems (to separate parallel work from the acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem the parallel algorithm requires a large number of processors with a small efficiency coefficient, so the transition of a person to a parallel algorithm for solving it is almost impossible, and the processing of input information can be accelerated only by increasing the speed. Comparing the dependence of the solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to the increasing complexity of the problem: uniform sequential, accelerated sequential, parallel computing (where possible), or undefined (for this method). The reduced number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke associations in the subject: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon, based on the appearance of a second set of initial data that arises when a person recognizes the depicted objects.
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938

Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: Truncated Log-Uniform Prior and Truncated Log-Normal Variational Approximation, and Automatic Relevance Determination (ARD) with Bayesian Deep Neural Networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) for image classification and a dataset of Macroscopic Images of Wood, which is compiled from multiple macroscopic images of wood datasets. Our method is applied to established architectures like Visual Geometry Group (VGG) and Residual Network (ResNet). The results demonstrate significant improvements. The model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
-
Impact of spatial resolution on mobile robot path optimality in two-dimensional lattice models
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1131-1148

This paper examines the impact of the spatial resolution of a discretized (lattice) representation of the environment on the efficiency and correctness of optimal pathfinding in complex environments. Scenarios are considered that may include bottlenecks, non-uniform obstacle distributions, and areas of increased safety requirements in the immediate vicinity of obstacles. Despite the widespread use of lattice representations of the environment in robotics, owing to their compatibility with sensor data and support for classical trajectory planning algorithms, the resolution of these lattices significantly affects both goal reachability and the quality of the optimal path. An algorithm is proposed that combines environmental connectivity analysis, trajectory optimization, and geometric safety refinement. In the first stage, the Leath algorithm is used to estimate the reachability of the target point by identifying the connected component containing the starting position. Once the target point’s reachability is confirmed, the A* algorithm is applied to the nodes of this component in the second stage to construct a path that simultaneously minimizes the path length and the risk of collision. In the third stage, a refined obstacle distance estimate is computed for nodes located in safety zones using a combination of the Gilbert – Johnson – Keerthi (GJK) and expanding polytope (EPA) algorithms. Experimental analysis revealed a nonlinear relationship between the lattice parameters and both the probability of existence and the quality of an optimal path. Specifically, reducing the spatial resolution of the lattice increases the likelihood of connectivity loss and target unreachability, while increasing the resolution raises the computational complexity without a proportional improvement in path quality.
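A minimal sketch of the first two stages described above: reachability on the lattice is checked first (a breadth-first search stands in here for the Leath-style cluster construction), and only then is A* run over the free cells. The grid, start, and goal are illustrative; the risk term in the cost and the safety-zone refinement with GJK/EPA are omitted.

```python
from collections import deque
import heapq

# Binary occupancy grid: 0 = free cell, 1 = obstacle (illustrative map).
GRID = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]

def neighbors(cell, grid):
    """4-connected free neighbors of a lattice cell."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
            yield nr, nc

def reachable(start, goal, grid):
    """Stage 1: grow the connected component of `start` (BFS) and test `goal`."""
    seen, queue = {start}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            return True
        for nxt in neighbors(cell, grid):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def astar(start, goal, grid):
    """Stage 2: A* with a Manhattan-distance heuristic over the free cells."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for nxt in neighbors(cell, grid):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

start, goal = (0, 0), (4, 4)
path = astar(start, goal, GRID) if reachable(start, goal, GRID) else None
```

Coarsening the lattice here would merge free and occupied cells, which is exactly how connectivity loss arises; adding a per-cell risk weight to the edge cost `g + 1` would give the combined length/collision-risk objective the paper minimizes.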
-
Modeling the behavior preceding a market crash in a hierarchically organized financial market
Computer Research and Modeling, 2011, v. 3, no. 2, pp. 215-222

We consider the hierarchical model of financial crashes introduced by A. Johansen and D. Sornette, which reproduces the log-periodic power-law behavior of the price before the critical point. To generalize this model, we introduce a dependence of the influence exponent on the ultrametric distance between agents. Much attention is paid to the problem of critical point universality, which is investigated by comparing the probability density functions of the crash times for systems with various total numbers of agents.
-
On a particular model of a mixture of probability distributions in radio measurements
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 563-568

This paper presents a model of a mixture of probability distributions of signal and noise. Typically, when analyzing data under conditions of uncertainty, it is necessary to use nonparametric tests. However, such an analysis of nonstationary data in the presence of uncertainty about the mean of the distribution and its parameters may be ineffective. The model makes it feasible to handle a case of a priori nonparametric uncertainty in signal processing in which the signal and the noise belong to different general populations.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




