All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Determining the characteristics of a random process by comparing them with values based on models of distribution laws
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1105-1118
The effectiveness of communication and data transmission systems (CDTS), which are an integral part of modern systems in almost any field of science and technology, largely depends on the stability of the frequency of the generated signals. The signals generated in a CDTS can be considered as processes whose frequency changes under a combination of external influences. Changes in signal frequency reduce the signal-to-noise ratio (SNR) and, consequently, degrade the characteristics of the CDTS, such as the bit error probability and throughput. Such frequency changes are most conveniently described as random processes, an apparatus widely used in constructing mathematical models of systems and devices in various fields of science and technology. Moreover, in many cases the characteristics of the random process, such as the distribution law, mathematical expectation, and variance, may be unknown or known with errors too large to obtain signal parameter estimates of acceptable accuracy. The article proposes an algorithm for determining the characteristics of a random process (the signal frequency) from a set of samples of its frequency; the algorithm yields the sample mean, the sample variance, and the distribution law of the frequency deviations in the general population. The algorithm is based on comparing the values of the observed random process measured over a certain time interval with sets of the same number of random values generated from model distribution laws. Distribution laws derived from mathematical models of these systems and devices, or corresponding to similar systems and devices, can serve as the model laws. When generating the set of random values for an accepted model distribution law, the sample mean and sample variance obtained from measurements of the observed random process are used as its mathematical expectation and variance. A distinctive feature of the algorithm is that the measured values of the observed random process, ordered in ascending or descending order, are compared with the sets of values generated according to the accepted model distribution laws. Results of mathematical modeling illustrating the application of this algorithm are presented.
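As a rough illustration of the comparison scheme sketched in this abstract, the snippet below orders the measured frequency samples, draws equally sized samples from several candidate model distribution laws rescaled to the measured sample mean and variance, and keeps the law with the smallest mean squared discrepancy between the ordered sets. The candidate laws, the discrepancy measure, and all identifiers are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy import stats

def fit_distribution_law(measured, candidates=("norm", "laplace", "uniform"),
                         n_trials=200, seed=0):
    """Choose the model distribution law whose ordered samples best match the
    ordered measurements (illustrative sketch; candidate names are assumptions)."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(measured, dtype=float))
    m, s = x.mean(), x.std(ddof=1)            # sample mean and standard deviation
    n = x.size
    scores = {}
    for name in candidates:
        dist = getattr(stats, name)
        d = 0.0
        for _ in range(n_trials):
            y = dist.rvs(size=n, random_state=rng)
            # Rescale the model sample so its mean and variance equal the measured ones.
            y = (y - y.mean()) / y.std(ddof=1) * s + m
            d += np.mean((x - np.sort(y)) ** 2)  # discrepancy of the ordered sets
        scores[name] = d / n_trials
    best = min(scores, key=scores.get)
    return best, m, s ** 2, scores

# Hypothetical usage: law, mean, var, scores = fit_distribution_law(frequency_samples)
```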
- Game-theoretic model of coordination of interests in the innovative development of corporations
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 673-684. Citations: 6 (RSCI).
Dynamic game-theoretic models of corporate innovative development are investigated. The proposed models are based on the concordance of private and public interests of agents. It is supposed that the structure of interests of each agent includes both private (personal) and public (interests of the whole company, connected first of all with its innovative development) components. The agents allocate their personal resources between these two directions. The system dynamics is described by a difference (not differential) equation. The proposed model of innovative development is studied by simulation and by enumeration of the domains of feasible controls with a constant step. The main contribution of the paper is a comparative analysis of the efficiency of the methods of hierarchical control (compulsion or impulsion) for the information structures of Stackelberg and Germeier (four structures in total) by means of indices of system compatibility. The proposed model is universal and can be used for scientifically grounded support of innovative development programs of any firm. The features of a specific company are taken into account during model identification (determining the specific classes of model functions and the numerical values of their parameters), which forms a separate complex problem and requires an analysis of statistical data and expert estimates. The following assumptions about the information rules of the hierarchical game are accepted: all players use open-loop strategies; the leader chooses and reports to the followers values of administrative (compulsion) or economic (impulsion) control variables, which can be either functions of time only (Stackelberg games) or depend also on the followers' controls (Germeier games); given the leader's strategies, all followers simultaneously and independently choose their strategies, which yields a Nash equilibrium in the followers' game. In a finite number of iterations the proposed simulation algorithm either builds an approximate solution of the model or concludes that no solution exists. The reliability and efficiency of the proposed algorithm follow from the properties of the scenario method and of direct ordered enumeration with a constant step. Comprehensive conclusions about the comparative efficiency of the methods of hierarchical control of innovations are obtained.
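A minimal sketch of the direct ordered enumeration described above, under strong simplifying assumptions: the leader's control is enumerated on a constant-step grid, and for each value the followers' Nash equilibrium is approximated by iterated best responses on their own grids. All function and variable names are hypothetical; the actual model functions and information structures are those specified in the paper.

```python
import numpy as np

def solve_by_enumeration(leader_payoff, follower_payoffs, leader_grid,
                         follower_grids, n_response_iters=50):
    """Direct ordered enumeration with a constant step (illustrative sketch).

    leader_payoff(u, x): leader's objective for control u and followers' actions x.
    follower_payoffs[i](u, x): payoff of follower i.
    leader_grid, follower_grids[i]: constant-step grids of admissible controls.
    """
    best_u, best_x, best_val = None, None, -np.inf
    for u in leader_grid:
        # Approximate a Nash equilibrium of the followers' game by iterated best responses.
        x = [grid[0] for grid in follower_grids]
        for _ in range(n_response_iters):
            changed = False
            for i, grid in enumerate(follower_grids):
                responses = [follower_payoffs[i](u, x[:i] + [a] + x[i + 1:]) for a in grid]
                a_star = grid[int(np.argmax(responses))]
                if a_star != x[i]:
                    x[i], changed = a_star, True
            if not changed:  # best responses stabilized: approximate equilibrium found
                break
        val = leader_payoff(u, x)
        if val > best_val:
            best_u, best_x, best_val = u, list(x), val
    return best_u, best_x, best_val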
- Estimation of natural frequencies of pure bending vibrations of composite nonlinearly elastic beams and circular plates
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 945-953
The paper presents a linearization method for the stress-strain curves of nonlinearly deformable beams and circular plates, intended to generalize the equations of pure bending vibrations. Composite, on average isotropic prismatic beams of constant rectangular cross-section and circular plates of constant thickness made of nonlinearly elastic materials are considered. The technique consists in determining approximate Young's moduli from the initial stress-strain state of the beam or plate subjected to a bending moment.
The paper proposes two linearization criteria: equality of the specific potential energy of deformation and minimization of the standard deviation in the approximation of the state equation. The method yields, in closed form, estimates of the natural frequencies of layered and structurally heterogeneous, on average isotropic nonlinearly elastic beams and circular plates. This makes it possible to significantly reduce the computational effort in the vibration analysis and modeling of these structural elements. In addition, the paper shows that the proposed linearization criteria estimate the natural frequencies with the same accuracy.
Since, in the general case, even isotropic materials resist tension and compression differently, piecewise-linear Prandtl diagrams with proportionality limits and tangent Young's moduli that differ under tension and compression are taken as the stress-strain curves of the composite components. As parameters of the stress-strain curve, the effective Voigt characteristics (under the hypothesis of strain homogeneity) are used for a longitudinally layered material structure, and the effective Reuss characteristics (under the hypothesis of stress homogeneity) for a transversely layered beam and an axially laminated plate. In addition, the effective Young's moduli and proportionality limits obtained by the author's homogenization method are given for a structurally heterogeneous, on average isotropic material. As an example, the natural frequencies of two-phase beams are calculated as functions of the component concentrations.
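For reference, the Voigt and Reuss mixture estimates mentioned above have the standard textbook form (here $c_i$ are the volume fractions of the phases and $E_i$ their Young's moduli; how the paper applies these relations to the tension and compression branches of the Prandtl diagrams is detailed in the article itself):

$$E_{\mathrm{Voigt}} = \sum_i c_i E_i, \qquad \frac{1}{E_{\mathrm{Reuss}}} = \sum_i \frac{c_i}{E_i}, \qquad \sum_i c_i = 1.$$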
- Estimation of anisotropy of seismic response from fractured geological objects
Computer Research and Modeling, 2018, v. 10, no. 2, pp. 231-240. Citations: 4 (RSCI).
Seismic surveying is the common method of prospecting for and exploring deposits of oil and natural gas. Invented at the beginning of the XX century, it has developed considerably and is currently used by almost all oil service companies. Its main advantages are the acceptable cost of fieldwork (in comparison with drilling wells) and the accuracy of estimating the characteristics of the subsurface. However, with the discovery of unconventional deposits (for example, the Arctic shelf and the Bazhenov Formation), improving existing seismic data processing technologies and creating new ones has become important. Significant progress in this direction is possible with numerical simulation of seismic wave propagation in realistic models of the geological medium, since an arbitrary internal structure of the medium can be specified and the synthetic response signal then evaluated.
The present work studies the spatial dynamic processes occurring in a geological medium containing fractured inclusions during seismic exploration. The authors constructed a three-dimensional model of a layered massif containing a layer of fluid-saturated cracks, which makes it possible to estimate the response signal as the structure of the inhomogeneous inclusion is varied. The physical processes are described by the second-order system of partial differential equations of a linearly elastic body, which is solved numerically by a grid-characteristic method on hexahedral meshes. The crack planes are identified at the stage of mesh construction, and an additional correction is then applied to ensure a correct seismic response for model parameters typical of geological media.
Three-component areal seismograms with a common shot point were obtained. On their basis, the effect of the structure of the fractured medium on the anisotropy of the seismic response recorded on the free surface at different distances from the source was estimated. It is established that the kinematic characteristics of the signal remain constant, while the dynamic characteristics of ordered and disordered models can differ by tens of percent.
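For completeness, the second-order linear elasticity system referred to above is conventionally written (in index notation, with summation over repeated indices) as

$$\rho\,\frac{\partial^2 u_i}{\partial t^2} = \frac{\partial \sigma_{ij}}{\partial x_j} + f_i, \qquad \sigma_{ij} = \lambda\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k} + \mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),$$

where $u$ is the displacement, $\sigma$ the stress tensor, $\rho$ the density, $\lambda$ and $\mu$ the Lamé parameters, and $f$ the body force. This is only the standard form; the exact formulation, boundary conditions and crack model used in the paper may differ.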
- Machine learning interpretation of inter-well radiowave survey data
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684
Traditional geological prospecting methods are becoming less effective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that reach the enclosing rocks. Because drilling is expensive, the role of inter-well survey methods has grown: they make it possible to increase the mean well spacing without significantly increasing the probability of missing a kimberlite or ore body. Inter-well radio wave surveying is effective for finding objects of highly contrasting conductivity. The physics of the method rests on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and the receiver of electromagnetic radiation are electric dipoles placed in adjacent wells, with a known distance between them, so the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases. Rocks of low electrical resistance correspond to high absorption of radio waves. The inter-well measurements therefore allow an effective electrical resistance (or conductivity) of the rock to be estimated. Typically, the source and the receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver gives an estimate of the average attenuation coefficient along the line connecting the source and the receiver. Measurements are taken during stops, approximately every 5 m. The distance between stops is much smaller than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and the goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution makes the standard geostatistical approach difficult to apply. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors: the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements, where the number $k$ must be determined from additional considerations. The effect of the spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$, we used the coefficient of determination. The procedure is demonstrated by constructing a three-dimensional image of the absorption coefficient from inter-well radio wave survey data obtained at one of the sites in Yakutia.
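A minimal sketch of the nearest-neighbour reconstruction with a horizontal scale factor, assuming the measurement midpoints and attenuation values are already available as arrays. The candidate grids for $k$ and $\lambda$, the cross-validated $R^2$ selection, and all identifiers are illustrative assumptions, not the exact procedure of the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def fit_attenuation_model(xyz, alpha,
                          k_values=(3, 5, 10, 20),
                          lambdas=(0.1, 0.2, 0.5, 1.0, 2.0)):
    """Select k and the horizontal scale factor lambda by the coefficient of determination.

    xyz: (n, 3) array of measurement midpoints (x, y horizontal, z depth);
    alpha: attenuation coefficients measured along the corresponding source-receiver lines.
    """
    best_score, best_model, best_k, best_lam = -np.inf, None, None, None
    for lam in lambdas:
        # Rescale horizontal coordinates to reduce the effect of spatial anisotropy
        # (dense sampling along wells, sparse sampling between wells).
        features = np.column_stack([lam * xyz[:, 0], lam * xyz[:, 1], xyz[:, 2]])
        for k in k_values:
            model = KNeighborsRegressor(n_neighbors=k)
            score = cross_val_score(model, features, alpha, cv=5, scoring="r2").mean()
            if score > best_score:
                best_score, best_k, best_lam = score, k, lam
                best_model = model.fit(features, alpha)
    return best_model, best_k, best_lam, best_score
```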
- On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart, the empirical risk minimization problem. The natural question is how to choose the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This is one of the main questions in modern machine learning and optimization. In the last decade, significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of balls in arbitrary $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
Both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) are generalized to arbitrary norms. Moreover, it is shown that the strong convexity condition can be weakened: the obtained results remain valid for functions satisfying the quadratic growth condition. When this condition does not hold, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size is obtained under a $\gamma$-growth condition on the objective function; for $\gamma = 1$ this is the sharp-minimum condition of convex problems. It is shown that in the case of a sharp minimum the sample size is almost independent of the desired accuracy of the solution of the original problem.
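In the spirit of the abstract, the offline (Sample Average Approximation) approach replaces the stochastic problem on a $p$-norm ball by its empirical counterpart:

$$\min_{\|x\|_p \le R} F(x) := \mathbb{E}_{\xi} f(x, \xi) \quad \longrightarrow \quad \min_{\|x\|_p \le R} \widehat{F}_N(x) := \frac{1}{N}\sum_{k=1}^{N} f(x, \xi_k),$$

and the question studied is how large the sample size $N$ must be for an approximate minimizer of $\widehat{F}_N$ to solve the original problem with a prescribed accuracy $\varepsilon$. The symbols $F$, $\widehat{F}_N$, $R$ and $\varepsilon$ are introduced here for illustration and may differ from the notation of the paper.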
- Transport modeling: averaging price matrices
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327
This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in a transportation network. A mode of travel is understood to mean both a mode of transport, for example, a car or public transport, and movement without transport, for example, on foot. The task of calculating trip matrices includes calculating the total matrices, in other words, estimating the total demand for trips by all modes, as well as splitting the matrices by mode, also called modal split. Trip matrices are calculated with gravity, entropy and other models, in which the probability of travel between zones is estimated from a certain measure of the distance between these zones. Usually, the generalized cost of travel along the optimal path between zones is used as the distance measure. However, the generalized cost of travel differs across modes, so when calculating the total trip matrices it becomes necessary to average the generalized costs over the modes of travel. The averaging procedure is subject to the natural requirement of monotonicity in all arguments, which is not met by some commonly used averaging methods, for example, averaging with weights. The modal split problem is solved by the methods of discrete choice theory, within which correct methods have been developed for averaging the utilities of alternatives that are monotonic in all arguments. The authors propose an adaptation of the methods of discrete choice theory to the calculation of average travel costs in gravity and entropy models. Transferring the averaging formulas from the context of the modal split model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their admissible values, which is done in this article. The issues of recalibrating the gravity function, which is necessary when switching to a new averaging method if the existing function was calibrated with the weighted average cost, are also considered. The proposed methods are illustrated on a small fragment of a transport network, and the presented calculation results demonstrate their advantage.
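A classical example of a monotone composite cost from discrete choice theory is the logsum average of the modal generalized costs $c_m$:

$$\bar{c} = -\frac{1}{\beta} \ln \sum_{m} \exp(-\beta c_m), \qquad \beta > 0,$$

which is increasing in every $c_m$ and tends to $\min_m c_m$ as $\beta \to \infty$. It is quoted here only to illustrate the kind of averaging the abstract refers to; the specific formulas, parameters and admissibility conditions proposed by the authors are given in the paper.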
- High-precision estimation of the spatial orientation of the video camera of the vision system of the mobile robotic complex
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 93-107
The efficiency of mobile robotic systems (MRS) that monitor the traffic situation, urban infrastructure, consequences of emergency situations, etc., directly depends on the quality of their vision systems, which are the most important part of an MRS. In turn, the accuracy of image processing in vision systems depends to a great extent on the accuracy of the spatial orientation of the video camera placed on the MRS. However, when video cameras are mounted on an MRS, the errors of their spatial orientation increase sharply because of wind and seismic vibrations, movement of the MRS over rough terrain, etc. The paper therefore considers a general solution to the problem of stochastic estimation of the spatial orientation parameters of video cameras under both random mast vibrations and an arbitrary character of MRS motion. Since methods based on satellite measurements cannot provide the required accuracy under high-intensity natural and artificial radio interference (the methods of generating which are constantly being improved), the proposed approach relies on autonomous measuring means, both inertial and non-inertial. Their use, however, raises the problem of constructing and stochastically estimating a general model of the video camera motion, whose complexity is determined by the arbitrary motion of the camera, random mast oscillations, measurement disturbances, etc. Because this problem has remained unsolved, the paper considers the synthesis of both the video camera motion model in the most general case and the stochastic estimation of its state parameters. The developed algorithm for joint estimation of the spatial orientation parameters of a video camera placed on the mast of an MRS is invariant to the nature of motion of the mast, the camera and the MRS itself, providing stability and the required accuracy of estimation under the most general assumptions about the disturbances of the sensitive elements of the autonomous measuring complex. The results of a numerical experiment suggest that the proposed approach can be applied in practice to determine the current spatial orientation of MRS and of the video cameras placed on them using inexpensive autonomous measuring devices.
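The abstract does not spell out the estimator used. Purely as a generic illustration of recursive stochastic estimation of state parameters from noisy autonomous measurements, the sketch below implements one predict/update cycle of a linear Kalman filter; the state layout, the matrices and the very choice of a Kalman filter are assumptions, not the algorithm of the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (generic sketch).

    x, P: current state estimate and its covariance; z: measurement vector;
    F, H, Q, R: transition, observation, process-noise and measurement-noise matrices.
    The camera/mast motion model itself is not specified here and is an assumption.
    """
    # Predict the state from the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the autonomous (inertial or non-inertial) measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```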
- Modelling of astrocyte morphology with space colonization algorithm
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 465-481
We examine a phenomenological algorithm for generating the morphology of astrocytes, a major class of glial brain cells, based on morphometric data of rat brain protoplasmic astrocytes and on general trends of cell development in vivo reported in the current literature. We adapted the Space Colonization Algorithm (SCA) for procedural generation of astrocytic morphology from scratch. The attractor points used in generation were spatially distributed in the model volume according to the synapse distribution density in rat hippocampal tissue during the first week of postnatal brain development. We analyzed and compared astrocytic morphology reconstructions at different brain development stages using morphometry estimation techniques such as Sholl analysis, the number of bifurcations, the number of terminals, the total tree length, and the maximum branching order. Using morphometric data from protoplasmic astrocytes of rats at different ages, we selected the generation parameters needed to obtain the most realistic three-dimensional cell morphology models. We demonstrate that the proposed algorithm makes it possible not only to obtain individual cell geometries but also to recreate the phenomenon of tiling domain organization in cell populations. In our algorithm, tiling emerges from the cells' competition for territory and the assignment of unique attractor points to their processes, which then become unavailable to other cells and their processes. We further extend the original algorithm by splitting morphology generation into two phases, thereby simulating astrocyte tree development during the first and the third-fourth weeks of rat postnatal brain development: rapid space exploration at the first stage and extensive branching at the second. To this end, we introduce two attractor types to separate the two growth strategies in time. We hypothesize that the extended algorithm with dynamic attractor generation can explain the formation of fine astrocyte structures and the maturation of astrocytic arborizations.
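A minimal sketch of one growth iteration of the space colonization algorithm in the form commonly described in the literature: attractors pull the nearest tree node within an influence radius, influenced nodes grow one step toward the mean attraction direction, and attractors that have been reached are removed. The radii, the step length and all identifiers are illustrative assumptions; the paper's calibrated parameters, its two attractor types and the synapse-density distribution are not reproduced here.

```python
import numpy as np

def space_colonization_step(nodes, parents, attractors,
                            influence_r=20.0, kill_r=4.0, step=2.0):
    """One growth iteration of the space colonization algorithm (simplified sketch).

    nodes: (n, 3) array of tree node positions; parents: list of parent indices;
    attractors: (m, 3) array of attractor points (e.g., sampled from synapse density).
    """
    nodes = [np.asarray(p, float) for p in nodes]
    # 1. Assign every attractor to the closest node within the influence radius.
    pull = {}
    for a in attractors:
        d = [np.linalg.norm(a - p) for p in nodes]
        i = int(np.argmin(d))
        if d[i] < influence_r:
            pull.setdefault(i, []).append(a - nodes[i])
    # 2. Grow each influenced node one step toward the mean attraction direction.
    for i, dirs in pull.items():
        v = np.mean([u / np.linalg.norm(u) for u in dirs], axis=0)
        norm = np.linalg.norm(v)
        if norm > 1e-9:
            nodes.append(nodes[i] + step * v / norm)
            parents.append(i)
    # 3. Remove attractors that have been reached (within the kill radius of any node).
    keep = [a for a in attractors
            if min(np.linalg.norm(a - p) for p in nodes) >= kill_r]
    return np.array(nodes), parents, np.array(keep)
```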
- Permeability of lipid membranes. A molecular dynamics study
Computer Research and Modeling, 2009, v. 1, no. 4, pp. 423-436. Citations: 2 (RSCI).
A correct model of a lipid molecule (distearoylphosphatidylcholine, DSPC) and of a lipid membrane in water was constructed. The model lipid membrane is stable and has a reliable energy distribution among its degrees of freedom. After equilibration, the model system also has spatial parameters very similar to those of a real DSPC membrane in the liquid-crystalline phase. This model was used to study the permeability of the lipid membrane to oxygen and water molecules and to the sodium ion. We obtained profiles of transmembrane mobility and diffusion coefficients, which we used to calculate effective permeability coefficients. We found that lipid membranes offer significant diffusional resistance to penetration not only by charged particles, such as ions, but also by nonpolar molecules, such as molecular oxygen. We propose a theoretical approach for calculating the particle flow across a membrane, as well as methods for estimating distribution coefficients between the bilayer and the water phase.
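One standard way to combine such profiles into an effective permeability is the inhomogeneous solubility-diffusion expression

$$\frac{1}{P} = \int_{-h/2}^{h/2} \frac{dz}{K(z)\,D(z)},$$

where $z$ is the coordinate along the membrane normal, $h$ the bilayer thickness, $K(z)$ the local distribution (partition) coefficient between the membrane and the water phase, and $D(z)$ the local diffusion coefficient. It is quoted here only as the conventional relation behind such calculations; the exact formulation used in the paper may differ.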
Indexed in Scopus
The full-text version of the journal is also available on the website of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




