Search results for 'measure':
Articles found: 106
  1. Minkov L.L., Dik I.G.
    Modeling of the flow in a hydrocyclone with an additional injector
    Computer Research and Modeling, 2011, v. 3, no. 1, pp. 63-76

    The article is an example of computer modeling in the field of engineering mechanics. The velocity fields in a hydrocyclone, which are inaccessible to direct measurement, are found numerically. Numerical modeling of three-dimensional hydrodynamics based on the k-ε RNG turbulence model is considered for a hydrocyclone with a built-in injector containing 5 tangentially directed nozzles. It is shown that the direction of motion of the injected liquid depends on the liquid flow rate through the injector. In agreement with experiments, the calculations show that the dependence of the split parameter on the injected liquid flow rate is non-monotonic, which is related to the ratio of the power of the main flow to that of the injected liquid.

    Views (last year): 2. Citations: 5 (RSCI).
  2. The mathematical and computer modeling of thermal processes in technical systems currently performed is based on the assumption that all the parameters determining thermal processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that the parameters determining thermal processes are of an undefined interval-stochastic character, which in turn is responsible for the interval-stochastic nature of thermal processes in the electronic system. This means that the actual temperature values of each element in a technical system are randomly distributed within their variation intervals. Therefore, the deterministic approach to modeling thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the parameters determining the thermal processes depends on three groups of factors: (a) statistical technological variation of the parameters of the elements when manufacturing and assembling the system; (b) the random nature of the factors caused by the functioning of a technical system (fluctuations in current and voltage; power, temperatures, and flow rates of the cooling fluid and the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determining factors in technical systems is irremediable; neglecting it causes errors when designing electronic systems. A method that allows modeling of unsteady interval-stochastic thermal processes in technical systems (including those with interval indeterminacy of the determining parameters) is developed in this paper.
The method is based on obtaining and then solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system at given variation intervals and statistical measures of the determining parameters. Application of the method to modeling the interval-stochastic thermal process in a particular electronic system is considered.
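    The statistical measures propagated by the method can be illustrated with a Monte Carlo cross-check on a hypothetical single-element thermal model (T = T_amb + P·R with uniformly distributed interval parameters; all values below are assumptions for illustration, not taken from the paper):

```python
import random
import statistics

# Hypothetical one-element thermal model: T = T_amb + P * R, where power P,
# thermal resistance R and ambient temperature T_amb vary within given
# intervals (uniformly, for illustration). Monte Carlo sampling estimates
# the statistical measures (mean, variance) of the temperature.
random.seed(0)

def sample_temperature():
    p = random.uniform(0.9, 1.1)        # power, W (interval-stochastic)
    r = random.uniform(9.0, 11.0)       # thermal resistance, K/W
    t_amb = random.uniform(19.0, 21.0)  # ambient temperature, deg C
    return t_amb + p * r

samples = [sample_temperature() for _ in range(100_000)]
mean_t = statistics.fmean(samples)
var_t = statistics.pvariance(samples)
print(f"E[T] = {mean_t:.2f} deg C, Var[T] = {var_t:.3f}")
```

The same measures can be obtained analytically from the interval bounds, which is the point of the method described in the abstract.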

    Views (last year): 15. Citations: 6 (RSCI).
  3. Abakumov A.I., Izrailsky Y.G.
    The stabilizing role of fish population structure under the influence of fishery and random environment variations
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 609-620

    We study the influence of fishery on a structured fish population under random changes of habitat conditions. The population parameters correspond to dominant pelagic fish species of the Far-Eastern seas of the northwestern Pacific Ocean (pollock, herring, sardine). Similar species inhabit various parts of the World Ocean. The body size distribution of the species was chosen as the main population feature. This characteristic is easy to measure and adequately defines the main qualities of a specimen, such as age, maturity, and other morphological and physiological peculiarities. Environmental fluctuations strongly influence individuals in the early stages of development and have little influence on the vital activity of mature individuals. Fishery revenue was chosen as the optimality criterion, and fishing effort is the main control variable. We chose a quadratic dependence of the fishing revenue on the fishing effort, in accordance with accepted economic ideas stating that expenses grow with the production volume. The model study shows that the population structure increases population stability. The growth of individuals and their drop-out due to natural mortality smooth the oscillations of population density arising from the strong influence of environmental fluctuations on young individuals; the smoothing role is played by the diffusion component of the growth processes. The fishery in its turn smooths the fluctuations (including random fluctuations) of the environment and has a substantial impact on the abundance of fry and the subsequent population dynamics. The optimal time-dependent fishing effort strategy was compared with a stationary fishing effort strategy. It is shown that, in the case of quickly changing habitat conditions and stochastic dynamics of population replenishment, there exists a stationary fishing effort with approximately the same efficiency as the optimal time-dependent fishing effort.
This means that a constant or weakly varying fishing effort can be a very efficient strategy in terms of revenue.
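    The quadratic dependence of revenue on effort can be sketched in a minimal form (not the authors' model; catch q·E·B and a quadratic cost term, with all parameter values hypothetical). The revenue parabola then has a well-defined optimal constant effort:

```python
# Illustrative sketch: with catch q*E*B and expenses growing quadratically
# with fishing effort E, the per-season revenue R(E) = p*q*E*B - c*E**2
# is a downward parabola maximized at the constant effort E* = p*q*B/(2*c).
p, q, c = 2.0, 0.1, 0.5   # price, catchability, cost coefficient (assumed)
biomass = 100.0           # stock biomass (assumed)

def revenue(effort):
    return p * q * effort * biomass - c * effort ** 2

e_star = p * q * biomass / (2 * c)   # analytic optimum of the parabola
efforts = [e_star * k / 10 for k in range(21)]
best = max(efforts, key=revenue)     # numerical check over a grid
print(e_star, best, revenue(e_star))
```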

    Views (last year): 6. Citations: 2 (RSCI).
  4. Shumixin A.G., Aleksandrova A.S.
    Identification of a controlled object using frequency responses obtained from a dynamic neural network model of a control system
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 729-740

    We present the results of a study aimed at identifying a controlled object's channels based on post-processing of measurements, with development of a model of a multiple-input controlled object and a subsequent active modelling experiment. The controlled object model is developed by approximating its behavior with a neural network model using trends obtained during a passive experiment in the normal operation mode. A recurrent neural network containing feedback elements makes it possible to simulate the behavior of dynamic objects; input and feedback time delays make it possible to simulate the behavior of inertial objects with pure delay. The model was trained on examples of the object's operation with a control system and consists of a dynamic neural network and a model of a regulator with a known regulation function. The neural network model simulates the system's behavior and is used to conduct active computational experiments. It makes it possible to obtain the controlled object's response to an exploratory stimulus, including a periodic one. The obtained complex frequency response is used to estimate the parameters of the object's transfer function using the least squares method. We present an example of identification of a channel of the simulated control system. The simulated object has two input ports, one output port, and varying transport delays in the transfer channels. One of the input ports serves as a controlling stimulus, the second as a controlled perturbation. The controlled output value changes as a result of the control stimulus produced by the regulator, which operates according to the proportional-integral regulation law based on the deviation of the controlled value from the setpoint. The obtained parameters of the object's channel transfer functions are close to the parameters of the simulated object.
The normalized error of the reaction to a single step-wise stimulus of the control system model, developed based on identification of the simulated control system, does not exceed 0.08. The considered objects pertain to the class of technological processes with continuous production. Such objects are characteristic of the chemical, metallurgical, mining and milling, pulp and paper, and other industries.
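    The final step, estimating transfer-function parameters from frequency-response samples by least squares, can be sketched for the simplest case of a first-order channel G(jω) = K/(1 + jωT) without the transport delay the paper handles (the delay is omitted here for brevity; the data below are synthetic):

```python
# For G(jw) = K / (1 + jw*T), the reciprocal 1/G(jw) = 1/K + jw*(T/K)
# is linear in the unknowns 1/K and T/K, so both follow from
# ordinary least squares on the sampled frequency response.
K_true, T_true = 2.0, 5.0
omegas = [0.01 * (i + 1) for i in range(100)]
G = [K_true / (1 + 1j * w * T_true) for w in omegas]

inv = [1 / g for g in G]
inv_K = sum(z.real for z in inv) / len(inv)   # LS estimate of 1/K
T_over_K = (sum(w * z.imag for w, z in zip(omegas, inv))
            / sum(w * w for w in omegas))     # LS slope of imag part vs w
K_est = 1 / inv_K
T_est = T_over_K * K_est
print(K_est, T_est)
```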

    Views (last year): 10.
  5. Succi G., Ivanov V.V.
    Comparison of mobile operating systems based on software reliability growth models
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334

    Evaluation of software reliability is an important part of the process of developing modern software. Many studies aim to improve models for measuring and predicting the reliability of software products. However, little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its great practical importance (including for managing software development), a complete and proven comparison methodology does not exist. In this article, we propose a software reliability comparison methodology in which software reliability growth models (SRGMs) are widely used. The proposed methodology provides a certain level of flexibility and abstraction while remaining objective, i.e., providing measurable comparison criteria. Moreover, given the comparison methodology with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on the example of three open-source mobile operating systems: Sailfish, Tizen, CyanogenMod.

    A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is strongest in terms of reliability. To this end we performed a GQM analysis and identified 3 questions and 8 metrics. Comparing the metrics, it appears that Sailfish is in most cases the best-performing OS; however, it is also the OS that performs the worst in most cases. By contrast, Tizen scores the best in 3 cases out of 8, but the worst in only one case out of 8.
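    One SRGM from the usual comparison toolkit can be sketched as follows: the Goel–Okumoto model m(t) = a·(1 − e^{−bt}) fitted to cumulative failure counts by least squares (grid search over b, with a in closed form for each b). The failure data below are synthetic, for illustration only:

```python
import math

# Goel-Okumoto SRGM: expected cumulative failures m(t) = a * (1 - exp(-b*t)).
# For a fixed b the optimal a is a linear least-squares solution, so a
# 1-D grid search over b suffices for this sketch.
times = list(range(1, 11))
counts = [12, 22, 30, 37, 42, 47, 50, 53, 55, 57]  # synthetic cumulative failures

def fit_goel_okumoto(times, counts):
    best = None
    for i in range(1, 1000):
        b = i / 1000.0
        f = [1 - math.exp(-b * t) for t in times]
        a = sum(m * fi for m, fi in zip(counts, f)) / sum(fi * fi for fi in f)
        sse = sum((m - a * fi) ** 2 for m, fi in zip(counts, f))
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best

a, b, sse = fit_goel_okumoto(times, counts)
print(f"a = {a:.1f} (expected total failures), b = {b:.3f}, SSE = {sse:.2f}")
```

The fitted a estimates the total number of failures the system will eventually exhibit, which is one of the measurable criteria such a comparison can use.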

    Views (last year): 29.
  6. Favorskaya A.V.
    Investigation of the material properties of a plate by laser ultrasound using the analysis of multiple waves
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 653-673

    Ultrasound examination of materials is a precision method for determining their elastic and strength properties, owing to the small wavelength formed in the material after the impact of a laser beam. In this paper, the wave processes arising during these measurements are considered in detail. It is shown that full-wave numerical modeling makes it possible to study in detail the types of waves, the topological characteristics of their profiles, and the arrival times of waves at various points; to identify the types of waves whose measurement is best suited for examining a sample made of a specific material and of a particular shape; and to develop measurement procedures.

    To carry out full-wave modeling, a grid-characteristic method on structured grids was used, and a hyperbolic system of equations describing the propagation of elastic waves in the material of a thin plate was solved for a specific example with a thickness-to-width ratio of 1:10.
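    The characteristic speeds of such a hyperbolic system are the P- and S-wave velocities, which for an isotropic material follow directly from the Lamé parameters and the density. A minimal sketch (the material constants below are indicative of steel and are an assumption, not values from the paper):

```python
import math

# Elastic wave speeds of an isotropic material from the Lame parameters
# lam, mu and density rho: c_p = sqrt((lam + 2*mu)/rho), c_s = sqrt(mu/rho).
lam, mu, rho = 1.11e11, 8.2e10, 7800.0   # Pa, Pa, kg/m^3 (roughly steel)

cp = math.sqrt((lam + 2 * mu) / rho)   # longitudinal (P) wave speed
cs = math.sqrt(mu / rho)               # shear (S) wave speed
print(f"c_p = {cp:.0f} m/s, c_s = {cs:.0f} m/s")
```

The P wave is always the faster of the two, which is what makes arrival-time analysis at different points of the plate informative.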

    To simulate the elastic front that arises in the plate due to a laser beam, a model of the corresponding initial conditions was proposed. The wave effects arising when this model is used were compared with those of a point source and with data from physical experiments on the propagation of laser ultrasound in metal plates.

    A study was carried out that identified the characteristic topological features of the wave processes under consideration. The main types of elastic waves arising due to a laser beam are investigated, and the possibility of using them to study the properties of materials is analyzed. A method based on the analysis of multiple waves is proposed. The proposed method for studying the properties of a plate with the help of multiple waves was tested on synthetic data and showed good results.

    It should be noted that most of the studies of multiple waves are aimed at developing methods for their suppression. Multiple waves are not used to process the results of ultrasound studies due to the complexity of their detection in the recorded data of a physical experiment.

    Owing to the use of full-wave modeling and the analysis of spatial dynamic wave processes, multiple waves are considered in detail in this work, and it is proposed to divide materials into three classes, which makes it possible to use multiple waves to obtain information about the material of the plate.

    The main results of the work are the developed problem statements for the numerical simulation of the study of plates of a finite thickness by laser ultrasound; the revealed features of the wave phenomena arising in plates of a finite thickness; the developed method for studying the properties of the plate on the basis of multiple waves; the developed classification of materials.

    The results of the studies presented in this paper may be of interest not only for developments in the field of ultrasonic non-destructive testing, but also in the field of seismic exploration of the earth's interior, since the proposed approach can be extended to more complex cases of heterogeneous media and applied in geophysics.

    Views (last year): 3.
  7. Malovichko M.S., Petrov I.B.
    On numerical solution of joint inverse geophysical problems with structural constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 329-343

    Inverse geophysical problems are difficult to solve due to their mathematically ill-posed formulation and large computational complexity. Geophysical exploration in frontier areas is even more complicated by the lack of reliable geological information. In this case, inversion methods that allow joint interpretation of several types of geophysical data are recognized to be of major importance. This paper is dedicated to one such inversion method, which is based on minimization of the determinant of the Gram matrix for a set of model vectors. Within the framework of this approach, we minimize a nonlinear functional consisting of the squared norms of the data residuals of different types, the sum of stabilizing functionals, and a term that measures the structural similarity between different model vectors. We apply this approach to a synthetic seismic and electromagnetic data set. Specifically, we study joint inversion of the acoustic pressure response together with the controlled-source electrical field, imposing structural constraints on the resulting electrical conductivity and P-wave velocity distributions.

    We start this note with the problem formulation and present the numerical method for the inverse problem. We implemented the conjugate-gradient algorithm for nonlinear optimization. The efficiency of our approach is demonstrated in numerical experiments, in which the true 3D electrical conductivity model was assumed to be known, while the velocity model was constructed during inversion of the seismic data. The true velocity model was based on a simplified geological structure of a marine prospect. Synthetic seismic data were used as the input for our minimization algorithm. The resulting velocity model not only fits the data but also has structural similarity with the given conductivity model. Our tests have shown that an optimally chosen weight of the Gramian term may considerably improve the resolution of the final models.
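    The structural-similarity term can be sketched in its simplest form: the determinant of the Gram matrix of two model-gradient vectors vanishes exactly when the gradients are parallel (structurally similar models) and grows with the angle between them. The vectors below are illustrative:

```python
# Determinant of the 2x2 Gram matrix of two vectors g1, g2:
# det = |g1|^2 * |g2|^2 - <g1, g2>^2, which is zero iff g1 and g2
# are parallel (Cauchy-Schwarz equality case).
def gram_det(g1, g2):
    a = sum(x * x for x in g1)
    b = sum(x * y for x, y in zip(g1, g2))
    c = sum(y * y for y in g2)
    return a * c - b * b

parallel = gram_det([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel gradients
skewed = gram_det([1.0, 2.0, 3.0], [3.0, 0.0, 1.0])    # non-parallel
print(parallel, skewed)
```

Penalizing this determinant over the spatial gradients of conductivity and velocity drives the two models toward shared structural boundaries without forcing a fixed petrophysical relationship.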

  8. Grachev V.A., Nayshtut Yu.S.
    Buckling prediction for shallow convex shells based on the analysis of nonlinear oscillations
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1189-1205

    Buckling problems of thin elastic shells have become relevant again because of discrepancies between the standards adopted in many countries for estimating the loads that cause buckling of shallow shells and the results of experiments on thin-walled aviation structures made of high-strength alloys. The main contradiction is as follows: the ultimate internal stresses at shell buckling (collapse) turn out to be lower than the ones predicted by the design theory adopted in the USA and European standards. The current regulations are based on the static theory of shallow shells put forward in the 1930s: within the nonlinear theory of elasticity for thin-walled structures there exist stable solutions that differ significantly from the forms of equilibrium typical of small initial loads. The minimum load (the lowest critical load) at which an alternative form of equilibrium appears was used as the maximum permissible one. In the 1970s it was recognized that this approach is unacceptable for complex loadings. Such cases were of little practical relevance in the past, while now they occur with thinner structures used under complex conditions. Therefore, the original theory of bearing capacity assessment needs to be revised. Recent mathematical results proving the asymptotic proximity of estimates based on two analyses (the three-dimensional dynamic theory of elasticity and the dynamic theory of shallow convex shells) can serve as a theoretical basis. This paper starts with the formulation of the dynamic theory of shallow shells, which comes down to one resolving integro-differential equation (once the special Green function is constructed). It is shown that the resulting nonlinear equation allows separation of variables and has numerous time-periodic solutions that satisfy the Duffing equation with “a soft spring”.
This equation has been thoroughly studied; its numerical analysis makes it possible to find the amplitude and the oscillation period depending on the properties of the Green function. If the shell is set into oscillation by a trial time-harmonic load, the movement of the surface points can be measured at the maximum amplitude. The study proposes an experimental set-up in which resonance oscillations are generated by a trial load normal to the surface. Experimental measurement of the shell movements, the amplitude, and the oscillation period makes it possible to estimate the safety factor of the structure's bearing capacity with non-destructive methods under operating conditions.
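    The "soft spring" Duffing equation mentioned above can be integrated numerically in a few lines. A minimal sketch with fixed-step RK4 (all coefficients are illustrative assumptions, not values from the paper):

```python
import math

# Forced, damped Duffing oscillator with a softening cubic term:
# x'' + 2*gamma*x' + w0^2 * x - beta * x^3 = F * cos(W*t).
# The steady-state amplitude is read off after the transient decays.
gamma, w0, beta, F, W = 0.1, 1.0, 0.2, 0.15, 1.0

def rhs(t, x, v):
    """First-order system: x' = v, v' = forcing minus restoring terms."""
    return v, -2 * gamma * v - w0 ** 2 * x + beta * x ** 3 + F * math.cos(W * t)

x, v, t, dt = 0.0, 0.0, 0.0, 0.01
history = []
for _ in range(60_000):  # 600 time units; transient dies out after ~1/gamma
    k1x, k1v = rhs(t, x, v)
    k2x, k2v = rhs(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
    k3x, k3v = rhs(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
    k4x, k4v = rhs(t + dt, x + dt * k3x, v + dt * k3v)
    x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    t += dt
    history.append(x)

amplitude = max(abs(xi) for xi in history[-10_000:])  # after transients
print(f"steady-state amplitude = {amplitude:.3f}")
```

Sweeping the drive frequency W and recording the amplitude is the numerical analogue of the resonance measurement the proposed experimental set-up relies on.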

  9. Chernavskaya O.D.
    Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447

    The main statements and inferences of the Dynamic Theory of Information (DTI) are considered. It is shown that DTI makes it possible to reveal two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the Natural-Constructivist Approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the explanatory gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within the framework of this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One subsystem is responsible for processing new information, learning, and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images. It is shown that the symbols represent subjective (conventional) information created by the system itself and providing its individuality.
The highest hierarchy levels, containing the symbols of abstract concepts, make it possible to interpret the concepts of “consciousness”, “sub-consciousness”, and “intuition”, referring to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge on the basis of the “Brain”.

    Views (last year): 6.
  10. We construct new tests that make it possible to increase human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of this increase in capacity, we develop check tests on the same class of logic operations in which a parallel organization of the computation is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel computation; the general publications on this topic are given in the references. The tasks in the described tests can be defined as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, then parallel computation is effective through suitable grouping of the process. In the theory of computation this corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or of intermediate results (the processor productivity). It is currently unknown what kinds of data elements the brain uses for logical or mathematical computation, or how many elements it processes per unit time. Therefore the test contains a sequence of task presentations with different numbers of logical operations in a fixed alphabet; this number is the measure of the task's complexity. Analysis of the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the form of organization of the computation. For sequential computation only one processor works, and the solution time is a linear function of complexity.
If new processors begin to work in parallel as the complexity of the task increases, then the dependence of the solution time on complexity is represented by a downward-convex curve. To detect the situation in which a person increases the speed of a single processor as complexity grows, we use task series with similar operations in a non-associative algebra. In such tasks parallel computation gains little efficiency from an increase in the number of processors; this is the check set of tests. We also consider one more class of tests, based on computing the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) whose construction affects the effectiveness of parallel computation of the final automaton state. For all tests we estimate the effectiveness of parallel computation. This article does not contain experimental results.
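    The role of associativity described above can be sketched directly: for an associative operation a pairwise (tree) grouping gives the same result as left-to-right evaluation, so the work can be split across processors, while for a non-associative operation (here: subtraction, standing in for the check-set algebras) the grouping changes the answer:

```python
from functools import reduce

def tree_reduce(op, xs):
    """Pairwise reduction; each pair could be handled by a separate processor."""
    xs = list(xs)
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

data = [5, 3, 8, 1, 9, 2, 7, 4]
assoc_seq = reduce(lambda a, b: a + b, data)       # left-to-right, associative op
assoc_par = tree_reduce(lambda a, b: a + b, data)  # tree grouping, same result
nonassoc_seq = reduce(lambda a, b: a - b, data)       # left-to-right, non-associative
nonassoc_par = tree_reduce(lambda a, b: a - b, data)  # tree grouping, different result
print(assoc_seq, assoc_par, nonassoc_seq, nonassoc_par)
```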

    Views (last year): 14. Citations: 1 (RSCI).

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index
