Search results for 'measure':
Articles found: 106
  1. Aleshin I.M., Malygin I.V.
    Machine learning interpretation of inter-well radiowave survey data
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684

    Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells deep enough to reach the enclosing rocks. Because drilling is expensive, the role of inter-well survey methods has grown: they allow the mean well spacing to be increased without a significant increase in the probability of missing a kimberlite or ore body. The inter-well radio wave survey method is effective for finding objects with high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and the receiver of electromagnetic radiation are electric dipoles placed in adjacent wells, and the distance between them is known, so the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases. Rocks with low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow an effective electrical resistance (or conductivity) of the rock to be estimated. Typically, the source and receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver gives the average attenuation coefficient along the line connecting the source and receiver. Measurements are taken during stops, approximately every 5 m, so the distance between stops is much less than the distance between adjacent wells. This leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution makes it hard to use standard geostatistical approaches.
To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors. In this method, the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements. The number $k$ must be determined from additional considerations. The effect of spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is yet another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$, we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we applied the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
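The nearest-neighbor estimate with a horizontal scale factor can be sketched as follows. This is a minimal illustration only: the function names, the unweighted averaging, and the hold-out use of the coefficient of determination to pick $k$ and $\lambda$ are assumptions, not the authors' code.

```python
import numpy as np

def knn_attenuation(points, values, query, k=3, lam=1.0):
    """k-nearest-neighbors estimate of the attenuation coefficient.

    Horizontal coordinates (x, y) are rescaled by lam to compensate for
    the anisotropy of the measurement grid (dense along wells, sparse
    between them); the vertical coordinate z is left unchanged."""
    w = np.array([lam, lam, 1.0])
    d = np.linalg.norm(points * w - query * w, axis=1)
    idx = np.argsort(d)[:k]      # indices of the k nearest measurements
    return values[idx].mean()    # simple unweighted k-NN average

def r2(y_true, y_pred):
    """Coefficient of determination used to compare (k, lam) candidates."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

Candidate pairs $(k, \lambda)$ would then be scanned on a grid, predicting held-out measurements with each pair and keeping the one with the largest coefficient of determination.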

    Views (last year): 3.
  2. Bogomolov S.V.
    Stochastic formalization of the gas dynamic hierarchy
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779

    Mathematical models of gas dynamics and the computational industry built on them are, in our opinion, far from perfect. We look at this problem from the point of view of a transparent probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on classical kinetic theory in terms of distribution function densities in phase space: we first construct a system of nonlinear stochastic differential equations (SDE), and then a generalized random and non-random integro-differential Boltzmann equation that takes correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.

    We briefly recall the transition to increasingly coarse meso- and macro-approximations in accordance with a decrease in the dimensionless parameter, the Knudsen number. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov – Fokker – Planck equations), and then in coordinate space (macro-equations that differ from the Navier – Stokes system and from quasi-gas-dynamic systems). The main distinction of this derivation is a more accurate averaging over velocity, thanks to the analytical solution of the stochastic differential equations with respect to the Wiener measure, in whose form the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses not the random process itself but its distribution function. The emphasis is placed on the transparency of the assumptions made in passing from one level of detail to another, rather than on numerical experiments, which carry additional approximation errors.
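The meso-model is an SDE with respect to the Wiener measure. As a generic illustration only (the scalar drift and diffusion coefficients below are hypothetical, not the paper's equations), an SDE dX = a(X) dt + b(X) dW can be integrated numerically with the Euler – Maruyama scheme:

```python
import numpy as np

def euler_maruyama(a, b, x0, t_end, n, rng):
    """Integrate dX = a(X) dt + b(X) dW on [0, t_end] in n steps.

    a and b are drift and diffusion coefficients; dw are independent
    Wiener-process increments with variance dt."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + a(x) * dt + b(x) * dw
    return x
```

Averaging many such trajectories yields the moments of the distribution, mirroring the averaging over velocity used in the transition to macro-equations.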

    The theoretical power of the microscopic representation of macroscopic phenomena is also important as conceptual support for particle methods, an alternative to finite difference and finite element methods.

  3. Podlipnova I.V., Persiianov M.I., Shvetsov V.I., Gasnikova E.V.
    Transport modeling: averaging price matrices
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327

    This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in the transportation network. The mode of transportation is understood to mean both the mode of transport, for example, a car or public transport, and movement without the use of transport, for example, on foot. The task of calculating the trip matrices includes the task of calculating the total matrices, in other words, estimating the total demand for movements by all modes, as well as the task of splitting the matrices according to the mode, also called modal splitting. To calculate trip matrices, gravitational, entropy and other models are used, in which the probability of movement between zones is estimated based on a certain measure of the distance of these zones from each other. Usually, the generalized cost of moving along the optimal path between zones is used as a distance measure. However, the generalized cost of movement differs for different modes of movement. When calculating the total trip matrices, it becomes necessary to average the generalized costs by modes of movement. The averaging procedure is subject to the natural requirement of monotonicity in all arguments. This requirement is not met by some commonly used averaging methods, for example, averaging with weights. The problem of modal splitting is solved by applying the methods of discrete choice theory. In particular, within the framework of the theory of discrete choice, correct methods have been developed for averaging the utility of alternatives that are monotonic in all arguments. The authors propose some adaptation of the methods of the theory of discrete choice for application to the calculation of the average cost of movements in the gravitational and entropy models. 
Transferring the averaging formulas from the context of the modal splitting model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their admissible values, which is done in this article. We also consider the recalibration of the gravitational function, which is necessary when switching to a new averaging method if the existing function was calibrated for the weighted average cost. The proposed methods are demonstrated on a small fragment of a transport network; the calculation results show the advantage of the proposed methods.
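One monotone averaging rule from discrete choice theory is the so-called logsum. The sketch below is a generic illustration (beta is an assumed sensitivity parameter), not the specific adaptation with extra parameters proposed in the paper:

```python
import math

def logsum_cost(costs, beta=1.0):
    """Discrete-choice logsum average of generalized costs over modes.

    Monotone in every argument: increasing any mode's cost strictly
    increases the average, a property that weighted averages with
    demand-dependent weights can violate."""
    return -math.log(sum(math.exp(-beta * c) for c in costs)) / beta
```

The averaged cost never exceeds the cost of the cheapest mode and tends to it as beta grows, reflecting travelers concentrating on the best alternative.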

  4. Zhidkov E.P., Voloshina I.G., Polyakova R.V., Perepelkin E.E., Rossiyskaya N.S., Shavrina T.V., Yudin I.P.
    Computer modeling of magnet systems for physical setups
    Computer Research and Modeling, 2009, v. 1, no. 2, pp. 189-198

    This work presents the results of numerical simulation of a superconducting magnetic focusing system. While modeling the system, special care was taken to achieve approximation accuracy for the condition u(∞)=0 using the Richardson method. The work compares the calculated magnetic field distribution with field measurements performed on a modified SP-40 magnet of the “MARUSYA” physical installation. It also presents some results of numerical analysis of the magnetic systems of the “MARUSYA” installation, aimed at studying the possibility of designing magnetic systems with predetermined magnetic field characteristics.

    Views (last year): 4. Citations: 2 (RSCI).
  5. Burlakov E.A.
    Relation between performance of organization and its structure during sudden and smoldering crises
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 685-706

    The article describes a mathematical model that simulates the performance of a hierarchical organization during the early stage of a crisis. A distinguishing feature of this stage is the presence of so-called early warning signals containing information about the approaching event. Employees are capable of catching the early warnings and preparing the organization for the crisis based on the signals’ meaning. The efficiency of the preparation depends on both the parameters of the organization and the parameters of the crisis. The proposed agent-based simulation model is implemented in the Java programming language and is used for conducting experiments via the Monte Carlo method. The goal of the experiments is to compare how centralized and decentralized organizational structures perform during sudden and smoldering crises. By centralized organizations we mean structures with a high number of hierarchy levels and a low number of direct reports per manager, while decentralized organizations are structures with a low number of hierarchy levels and a high number of direct reports per manager. Sudden crises are distinguished by a short early stage and a low number of warning signals, while smoldering crises have a long-lasting early stage and a high number of warning signals that do not necessarily contain important information. The efficiency of organizational performance during the early stage of a crisis is measured by two parameters: the percentage of early warnings acted upon to prepare the organization for the crisis, and the time spent by the top manager on working with early warnings. As a result, we show that during the early stage of smoldering crises centralized organizations process signals more efficiently than decentralized ones, while decentralized organizations handle early warning signals more efficiently during the early stage of sudden crises.
However, the workload of top managers during sudden crises is higher in decentralized organizations, and during smoldering crises it is higher in centralized organizations. Thus, neither of the two classes of organizational structures is more efficient by both parameters simultaneously. Finally, we conduct a sensitivity analysis to verify the obtained results.

    Views (last year): 2. Citations: 2 (RSCI).
  6. Lyubushin A.A., Farkov Y.A.
    Synchronous components of financial time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655

    The article proposes a method for the joint analysis of multidimensional financial time series based on evaluating a set of properties of stock quotes in a sliding time window and then averaging the property values over all analyzed companies. The main purpose of the analysis is to construct measures of the joint behavior of the time series that react to the emergence of a synchronous or coherent component. Coherence in the behavior of the characteristics of a complex system is an important feature that makes it possible to assess how close the system is to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea that random fluctuations of system parameters become increasingly correlated as the system approaches a critical state. The increments of stock value time series have a pronounced chaotic character and a large amplitude of individual noise, against which a weak common signal can be detected only through its correlation in different scalar components of the multidimensional time series. It is known that classical analysis methods based on correlations between neighboring samples are ineffective for financial time series, since from the point of view of the correlation theory of random processes the increments of stock values formally have all the attributes of white noise (in particular, a “flat spectrum” and a “delta-shaped” autocorrelation function). For this reason, it is proposed to pass from analyzing the initial signals to examining sequences of their nonlinear properties calculated in short time fragments. As such properties, we use the entropy of the wavelet coefficients of a decomposition in a Daubechies basis, multifractal parameters, and an autoregressive measure of signal nonstationarity.
Measures of the synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
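Two of these window measures can be sketched in a few lines. This is a minimal illustration assuming windows arranged as samples x series; the multiple spectral coherence measure is omitted, and the function names are assumptions:

```python
import numpy as np

def pc1_share(window):
    """Share of total variance captured by the first principal component
    of the correlation matrix of a (samples x series) window; values
    near 1 indicate synchronous behavior."""
    c = np.corrcoef(window, rowvar=False)
    eig = np.linalg.eigvalsh(c)            # eigenvalues in ascending order
    return eig[-1] / eig.sum()

def mean_abs_corr(window):
    """Mean modulus of all pairwise correlation coefficients."""
    c = np.abs(np.corrcoef(window, rowvar=False))
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))   # average over off-diagonal entries
```

Both measures rise toward 1 when the series in the window move together and stay low for independent noise, which is what makes them usable as precursors of a synchronous regime.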

    Views (last year): 12. Citations: 2 (RSCI).
  7. Madera A.G.
    Cluster method of mathematical modeling of interval-stochastic thermal processes in electronic systems
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1023-1038

    A cluster method of mathematical modeling of interval-stochastic thermal processes in complex electronic systems (ES) is developed. In the cluster method, a complex ES is represented by a thermal model that is a system of clusters, each containing a core combining the heat-generating elements that fall into the cluster, the cluster shell, and a medium flow through the cluster. The state of the thermal process in each cluster at every moment of time is characterized by three interval-stochastic state variables: the temperatures of the core, the shell, and the medium flow. The elements of each cluster (the core, shell, and medium flow) interact thermally with each other and with the elements of neighboring clusters. In contrast to existing methods, the cluster method makes it possible to simulate thermal processes in complex ES while taking into account the uneven temperature distribution in the medium flow pumped through the ES, the conjugate nature of heat exchange between the medium flow, the cores, and the shells of the clusters, and the interval-stochastic nature of the thermal processes, caused by statistical technological variation in the manufacture and installation of electronic elements and by random fluctuations of the thermal parameters of the environment. The mathematical model describing the state of the thermal processes in the cluster thermal model is a system of interval-stochastic matrix-block equations with matrix and vector blocks corresponding to the clusters of the thermal model. The solutions of the interval-stochastic equations are statistical measures of the state variables of the thermal processes in the clusters: mathematical expectations, covariances between state variables, and variances. The application of the cluster method is shown on the example of a real ES.
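The statistical measures such a model delivers (expectations, variances, and covariances of the state variables) can be illustrated with a Monte Carlo toy example. The single-node steady-state balance T = T_env + P*R below, with hypothetical spreads of power P and thermal resistance R, is a stand-in for the paper's matrix-block equations, not the method itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Hypothetical interval-stochastic inputs: dissipated power (W) and
# thermal resistance (K/W), scattered by manufacturing variation.
P = rng.normal(2.0, 0.2, n)
R = rng.normal(10.0, 1.0, n)
T_env = 25.0
T = T_env + P * R              # steady-state node temperature, deg C

mean_T = T.mean()              # mathematical expectation of the state
var_T = T.var()                # variance of the state
cov_TP = np.cov(T, P)[0, 1]    # covariance between state and input
```

For independent inputs the analytic values are E[T] = 25 + 2*10 = 45 and Var(T) = E[P^2]E[R^2] - (E[P]E[R])^2 = 8.04, which the sampled measures approach.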

  8. Akimov S.V., Borisov D.V.
    Centrifugal pump modeling in FlowVision CFD software
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 907-919

    This paper presents a methodology for modeling centrifugal pumps using the example of the NM 1250 260 main oil centrifugal pump. We use FlowVision CFD software as the numerical modeling instrument. Bench tests and numerical modeling use water as a working fluid. The geometrical model of the pump is fully three-dimensional and includes the pump housing to account for leakages. In order to reduce the required computational resources, the methodology specifies leakages using flow rate rather than directly modeling them. Surface roughness influences flow through the wall function model. The wall function model uses an equivalent sand roughness, and a formula for converting real roughness into equivalent sand roughness is applied in this work. FlowVision uses the sliding mesh method for simulation of the rotation of the impeller. This approach takes into account the nonstationary interaction between the rotor and diffuser of the pump, allowing for accurate resolution of recirculation vortices that occur at low flow rates.

    The developed methodology achieves high consistency between numerical simulation results and experiments over all pump operating conditions. The deviation in efficiency at nominal conditions is 0.42%, and in head, 1.9%. The deviation of the calculated characteristics from the experimental ones grows with the flow rate and reaches a maximum at the far-right point of the characteristic curve (up to 4.8% in head). This occurs because of a slight mismatch between the geometric model of the impeller used in the calculation and the real pump from the experiment. Nevertheless, the arithmetic mean relative deviation between numerical modeling and experiment for pump efficiency over 6 points is 0.39%, against an experimental efficiency measurement error of 0.72%, which meets the accuracy requirements for the calculations. In the future, this methodology can be used for a series of optimization and strength calculations, since the modeling does not require significant computational resources and accounts for the non-stationary nature of the flow in the pump.

  9. Brazhe A.R., Brazhe N.A., Sosnovtseva O.V., Pavlov A.N., Mosekilde E., Maksimov G.V.
    Wavelet-based analysis of cell dynamics measured by interference microscopy
    Computer Research and Modeling, 2009, v. 1, no. 1, pp. 77-83

    Laser interference microscopy was used to study morphology and intracellular dynamics of erythrocytes, neurons and mast cells. We have found that changes of the local refractive index (RI) of cells have regular components that relate to the cooperative processes in the cellular submembrane and centre regions. We have shown that characteristic frequencies of RI dynamics differ for various cell types and can be used as markers of specific cellular processes.

    Views (last year): 1. Citations: 5 (RSCI).
  10. Kolchev A.A., Nedopekin A.E.
    On one particular model of a mixture of the probability distributions in the radio measurements
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 563-568

    This paper presents a model of a mixture of probability distributions of signal and noise. Typically, when analyzing data under conditions of uncertainty, nonparametric tests must be used. However, such an analysis of nonstationary data may be ineffective when the mean of the distribution and its parameters are uncertain. The model makes it feasible to handle the case of a priori nonparametric uncertainty in signal processing when the signal and noise samples belong to different general populations.
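As a generic illustration of a signal-plus-noise mixture (the Gaussian components and all parameters below are hypothetical; the paper's components need not be Gaussian), the posterior probability that a sample belongs to the signal component can be written as:

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian probability density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def signal_posterior(x, w, mu_s, sig_s, mu_n, sig_n):
    """Posterior probability that sample x came from the signal
    component of the mixture w*N(mu_s, sig_s) + (1-w)*N(mu_n, sig_n)."""
    ps = w * norm_pdf(x, mu_s, sig_s)
    pn = (1.0 - w) * norm_pdf(x, mu_n, sig_n)
    return ps / (ps + pn)
```

Samples far from the noise component get a posterior close to 1; when the components overlap, the posterior reflects the uncertainty in separating signal from noise.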

    Views (last year): 3. Citations: 7 (RSCI).

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

