Search results for 'data':
Articles found: 317
  1. Belean B., Belean C., Floare C., Varodi C., Bot A., Adam G.
    Grid based high performance computing in satellite imagery. Case study — Perona–Malik filter
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 399-406

    The present paper discusses an approach to efficient satellite image processing that involves two steps. The first step assumes the distribution of the steadily increasing volume of satellite-collected data through a Grid infrastructure. The second step assumes the acceleration of the solution of the individual image-processing tasks by implementing execution codes that make heavy use of spatial and temporal parallelism. An instance of such an execution code is image processing by means of the iterative Perona–Malik filter within an application-specific FPGA hardware architecture.
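
    The Perona–Malik filter mentioned above is an iterative nonlinear (anisotropic) diffusion scheme. Below is a minimal Python sketch of one explicit iteration, given only to illustrate the algorithm itself; the conductance function and the parameters kappa and dt are common textbook choices and are not taken from the paper, which concerns a Grid/FPGA implementation.

```python
import numpy as np

def perona_malik_step(img, kappa=20.0, dt=0.2):
    """One explicit iteration of Perona-Malik anisotropic diffusion."""
    # finite differences to the four nearest neighbours (replicated border)
    padded = np.pad(img, 1, mode="edge")
    d_n = padded[:-2, 1:-1] - img
    d_s = padded[2:, 1:-1] - img
    d_w = padded[1:-1, :-2] - img
    d_e = padded[1:-1, 2:] - img
    # edge-stopping conductance g(|grad|) = exp(-(|grad|/kappa)^2)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + dt * (g(d_n) * d_n + g(d_s) * d_s + g(d_w) * d_w + g(d_e) * d_e)

# hypothetical usage: smooth a noisy test image for a few iterations
img = 255.0 * np.random.default_rng(0).random((64, 64))
for _ in range(10):
    img = perona_malik_step(img)
```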

    Views (last year): 3.
  2. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Unbalanced linear systems
    Computer Research and Modeling, 2016, v. 8, no. 6, pp. 833-860

    The limited practical value of many numerical methods for solving unbalanced systems of linear equations with ill-conditioned matrices is due to the fact that, in practice, these methods behave quite differently than they do under exact computation. Historically, unlike in ‘medium-scale’ numerical algebra, stability received insufficient attention, and the emphasis was placed on solving problems of the maximal order allowed by the computer's capabilities, including at the expense of some loss of accuracy. Therefore, the main objects of study are the most appropriate storage of the information contained in a sparse matrix and the maintenance of the highest degree of sparsity at all stages of the computational process. Thus, the development of efficient numerical methods for solving unstable systems is among the topical problems of computational mathematics.

    This paper presents an approach to constructing numerically stable direct multiplicative methods for solving systems of linear equations that takes into account the sparseness of matrices stored in packed form. The advantage of the approach is that fill-in of the principal rows of the multipliers is minimized without compromising the accuracy of the results and without changing the position of the next processed row of the matrix, which allows static data storage formats to be used. A storage format for sparse matrices is studied; its advantage is that any matrix operations can be executed in parallel without unpacking, which significantly reduces the execution time and memory footprint.

    Direct multiplicative methods for solving systems of linear equations are well suited to large problems: sparse systems yield multipliers whose principal row is also sparse, and multiplying a row vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier.
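
    To illustrate the complexity claim above, here is a minimal Python sketch of multiplying a row vector by an elementary multiplier M = I + e_k m^T whose only off-identity entries lie in a sparse main row m, stored in packed (column: value) form; the cost of the update is proportional to the number of nonzero elements of that row. This is a schematic illustration only, not the packed storage format or the algorithms developed in the paper.

```python
def apply_row_multiplier(x, k, main_row):
    """Multiply the row vector x (a list of floats) by the elementary
    multiplier M = I + e_k * m^T, whose off-identity nonzeros all lie in
    its main row, stored in packed form as main_row = {column: value}.
    Since x*M = x + x[k]*m, the cost is proportional to len(main_row)."""
    xk = x[k]
    if xk != 0.0:
        for j, v in main_row.items():
            x[j] += xk * v
    return x

# hypothetical usage: a multiplier whose main row k=2 has three nonzeros
x = [1.0, 0.0, 2.0, 0.0, 3.0]
x = apply_row_multiplier(x, k=2, main_row={0: -0.5, 3: 0.25, 4: 1.0})
print(x)  # [0.0, 0.0, 2.0, 0.5, 5.0]
```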

    As a direct continuation of this work, it is proposed to base the construction of a direct multiplicative algorithm of linear programming on a modification of the direct multiplicative algorithm for solving systems of linear equations, integrating a linear programming technique for selecting the pivot element. Direct multiplicative methods of linear programming are, in turn, best suited for constructing a direct multiplicative algorithm for setting the descent direction in Newton methods for unconstrained optimization, by integrating one of the existing techniques for constructing an essentially positive definite matrix of second derivatives.

    Views (last year): 20. Citations: 2 (RSCI).
  3. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Linear programming
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 143-165

    Multiplicative methods for sparse matrices are best suited to reducing the complexity of the operations for solving systems of linear equations performed at each iteration of the simplex method. The constraint matrix in these problems is sparsely populated with nonzero elements, which makes it possible to obtain multipliers whose principal columns are also sparse, while multiplying a vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier. In addition, on transition to an adjacent basis the multiplicative representation is corrected quite easily. To improve the efficiency of such methods, the number of nonzero elements in the multiplicative representation must be reduced. However, at each iteration of the algorithm another multiplier is added to the sequence, so the complexity of multiplication grows linearly with the length of the sequence. The inverse matrix therefore has to be recomputed from the identity matrix from time to time; on the whole, however, this does not solve the problem. In addition, the set of multipliers is a sequence of structures whose size is inconveniently large and not precisely known. Multiplicative methods do not take into account the high degree of sparseness of the original matrices and the equality constraints, and they require the determination of an initial basic feasible solution of the problem; consequently, they do not allow reducing the dimensionality of a linear programming problem or a regular compression procedure, i.e., a dimensionality reduction of the multipliers and the elimination of nonzero elements from all principal columns of the multipliers obtained at previous iterations. Thus, the development of numerical methods for solving linear programming problems that overcome or substantially reduce the shortcomings of the implementation schemes of the simplex method is a topical problem of computational mathematics.
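
    For background on the multiplicative representation discussed above, the following minimal Python sketch shows the classical product form of the inverse: the basis inverse is kept as a sequence of sparse eta multipliers, each forward solve applies them in order, and every simplex iteration appends one more multiplier to the sequence, which is exactly why the representation grows and must periodically be recomputed. This is generic background, not the dimensionality-reduction or compression scheme proposed by the author.

```python
import numpy as np

def apply_eta(y, r, eta):
    """Apply an eta matrix (identity with column r replaced by the sparse
    vector eta, stored as {row: value}) to the vector y, in place."""
    yr = y[r]
    if yr != 0.0:
        y[r] = 0.0
        for i, v in eta.items():
            y[i] += v * yr
    return y

def ftran(eta_file, b):
    """With B^{-1} = E_k ... E_1 kept as a list of (r, eta) pairs,
    the solution of B x = b is obtained by applying the etas in order."""
    x = np.array(b, dtype=float)
    for r, eta in eta_file:
        apply_eta(x, r, eta)
    return x

# hypothetical 2x2 example: B = [[2, 0], [0, 4]]  ->  B^{-1} = diag(0.5, 0.25)
eta_file = [(0, {0: 0.5}), (1, {1: 0.25})]
print(ftran(eta_file, [4.0, 8.0]))   # [2.0, 2.0]
```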

    This paper presents an approach to constructing numerically stable direct multiplicative methods for solving linear programming problems that takes into account the sparseness of matrices stored in packed form. The advantage of the approach is that dimensionality is reduced and fill-in of the principal rows of the multipliers is minimized without compromising the accuracy of the results and without changing the position of the next processed row of the matrix, which allows static data storage formats to be used.

    As a direct continuation of this work, it is proposed to base the construction of a direct multiplicative algorithm for setting the descent direction in Newton methods for unconstrained optimization on a modification of the direct multiplicative method of linear programming, by integrating one of the existing techniques for constructing an essentially positive definite matrix of second derivatives.

    Views (last year): 10. Citations: 2 (RSCI).
  4. Akhmetvaleev A.M., Katasev A.S.
    Neural network model of human intoxication functional state determining in some problems of transport safety solution
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 285-293

    This article addresses the problem of determining the functional state of intoxication of vehicle drivers. Its solution is relevant to transport security during pre-trip medical examination. The solution is based on the pupillometry method, which makes it possible to evaluate the driver's state by the pupillary reaction to a change in illumination. The problem is to determine the state of driver intoxication by analyzing the values of pupillogram parameters: a time series characterizing the change in pupil size in response to a short light pulse. It is proposed to use a neural network to analyze the pupillograms. A neural network model for determining the drivers' functional state of intoxication is developed. For its training, specially prepared data samples are used, containing the values of the following pupillary reaction parameters grouped into two classes of drivers' functional states: initial diameter, minimum diameter, half-constriction diameter, final diameter, constriction amplitude, constriction rate, expansion rate, latent reaction time, contraction time, expansion time, half-contraction time, and half-expansion time. An example of the initial data is given. Based on their analysis, a neural network model is constructed in the form of a single-layer perceptron consisting of twelve input neurons, twenty-five hidden-layer neurons, and one output neuron. To increase the adequacy of the model, the optimal cut-off point between the solution classes at the output of the neural network is determined using ROC analysis. A scheme for determining the driver's state of intoxication is proposed, which includes the following steps: video registration of the pupillary reaction, construction of the pupillogram, calculation of the parameter values, data analysis based on the neural network model, classification of the driver's condition as “norm” or “deviation from the norm”, and making a decision on the person being examined. The medical worker conducting the driver's examination is presented with the neural network's assessment of the state of intoxication. On the basis of this assessment, a conclusion on admitting the driver to drive the vehicle or suspending him from driving is drawn. Thus, the neural network model increases the efficiency of pre-trip medical examination by increasing the reliability of the decisions made.
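
    A minimal Python sketch of a classifier with the architecture described above (twelve input parameters, twenty-five hidden neurons, one output) follows, using scikit-learn and synthetic data. The feature values, labels and the ROC-based cut-off rule shown here (Youden's J statistic) are illustrative assumptions and do not reproduce the authors' data, trained model or exact criterion.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve

# synthetic stand-in for the 12 pupillary-reaction parameters
# (diameters, amplitude, constriction/expansion rates, reaction times)
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = (X[:, 0] - 0.5 * X[:, 4] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# 12 inputs -> 25 hidden neurons -> 1 output class probability
model = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
model.fit(X, y)

# pick a cut-off point from the ROC curve (Youden's J statistic is one common
# choice; the criterion actually used in the paper may differ)
fpr, tpr, thresholds = roc_curve(y, model.predict_proba(X)[:, 1])
cutoff = thresholds[np.argmax(tpr - fpr)]
print("chosen cut-off:", cutoff)
```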

    Views (last year): 42. Citations: 2 (RSCI).
  5. Simakov S.S.
    Modern methods of mathematical modeling of blood flow using reduced order methods
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 581-604

    The study of physiological and pathophysiological processes in the cardiovascular system is an important contemporary issue addressed in many works. In this work, several approaches to the mathematical modelling of blood flow are considered. They are based on spatial order reduction and/or use a steady-state approach. Attention is paid to the discussion of the assumptions that limit the scope of such models. Some typical mathematical formulations are considered, together with a brief review of their numerical implementation. The first part discusses models based on full spatial order reduction and/or a steady-state approach. One of the most popular approaches exploits the analogy between the flow of a viscous fluid in elastic tubes and the current in an electrical circuit. Such models can be used as a standalone tool; they are also used to formulate boundary conditions in models with one-dimensional (1D) and three-dimensional (3D) spatial coordinates. The use of dynamical compartment models makes it possible to describe haemodynamics over an extended period (of the order of tens of cardiac cycles and more). Then steady-state models are considered. They may use either total spatial reduction or two-dimensional (2D) spatial coordinates. This approach is used to simulate blood flow in the microcirculation region. The second part discusses models based on spatial order reduction to one dimension. Models of this type require relatively little computational power compared to 3D models. Within this approach, it is also possible to include all large vessels of the organism. The 1D models allow simulation of the haemodynamic parameters in every vessel included in the model network. The structure and parameters of such a network can be set according to literature data; methods of medical data segmentation also exist. The 1D models may be derived from the 3D Navier–Stokes equations either by asymptotic analysis or by integrating them over a volume. The major assumptions are axial symmetry of the flow and a constant shape of the velocity profile over a cross-section. These assumptions are somewhat restrictive and arguable. Some current works pay attention to the validation of 1D models, to comparing different 1D models, and to comparing 1D models with clinical data. The obtained results reveal acceptable accuracy, which allows the conclusion that the 1D approach can be used in medical applications. 1D models can describe several dynamical processes, such as pulse wave propagation and Korotkoff sounds. Some physiological conditions may be included in the 1D models: gravity, muscle contraction forces, regulation and autoregulation.
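
    The electrical-circuit analogy mentioned above reduces, in its simplest lumped form, to a Windkessel-type compartment model. A minimal Python sketch of a two-element Windkessel (arterial compliance C and peripheral resistance R driven by an inflow Q(t)), integrated explicitly over several cardiac cycles, is given below; the parameter values and the inflow waveform are illustrative assumptions, not data from the paper.

```python
import numpy as np

def windkessel_2e(q_in, dt, R=1.0, C=1.5, p0=80.0):
    """Two-element Windkessel: C dP/dt = Q(t) - P/R (explicit Euler)."""
    p = np.empty_like(q_in)
    p[0] = p0
    for n in range(len(q_in) - 1):
        p[n + 1] = p[n] + dt * (q_in[n] - p[n] / R) / C
    return p

# illustrative inflow: half-sine ejection during systole, zero in diastole,
# repeated over 10 cardiac cycles of 0.8 s each
dt, T, cycles = 1e-3, 0.8, 10
t = np.arange(0.0, cycles * T, dt)
phase = (t % T) / T
q = np.where(phase < 0.35, 400.0 * np.sin(np.pi * phase / 0.35), 0.0)
p = windkessel_2e(q, dt)
```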

    Views (last year): 62. Citations: 2 (RSCI).
  6. Matyushkin I.V., Zapletina M.A.
    Cellular automata review based on modern domestic publications
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 9-57

    The paper contains an analysis of Russian publications issued in 2013–2017 and devoted to cellular automata (CA). Most of them concern mathematical modeling. Scientometric charts for 1990–2017 confirm the relevance of the subject. The review makes it possible to identify the main figures and scientific directions/schools in modern Russian science and to reveal their originality or derivativeness in comparison with world science. Because the authors chose a national rather than worldwide publication base, the paper claims completeness; about 200 items out of the 526 checked references are of importance for science.

    The Annex to the review provides preliminary information about CA: the Game of Life, the Garden of Eden theorem, elementary CAs (together with de Bruijn diagrams), Margolus block CAs, and alternating CAs. Attention is paid to three semantic traditions important for modeling, those of von Neumann, Zuse and Tsetlin, as well as to the relationship with the concepts of neural networks and Petri nets. A conditional list of 10 works that should be familiar to any specialist in CA is singled out. Some important works of the 1990s and later are listed in the Introduction.
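
    As a reminder of the simplest of the objects listed above, here is a minimal Python sketch of one synchronous update step of Conway's Game of Life on a periodic grid (the standard B3/S23 rules); it is given purely as background and is not material from the review.

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal (periodic) grid."""
    # number of live neighbours of every cell (8-neighbourhood, periodic)
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# a glider on an 8x8 torus, advanced by four steps
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```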

    The body of publications is then divided into categories: modifications of CA and other network models (29%), mathematical properties of CA and connections with mathematics (5%), hardware implementation (3%), software implementation (5%), data processing, recognition and cryptography (8%), mechanics, physics and chemistry (20%), biology, ecology and medicine (15%), and economics, urban studies and sociology (15%). The shares of the topics in the array are indicated in parentheses. There is a growth of publications on CA in the humanities, as well as the emergence of hybrid approaches that lead away from the classic definition of CA.

    Views (last year): 58.
  7. Kholodov Y.A.
    Development of network computational models for the study of nonlinear wave processes on graphs
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 777-814

    Various applications give rise to problems modeled by nonlinear partial differential equations on graphs (networks, trees). In order to study such problems, as well as various extreme situations arising in the design and optimization of networks, a computational model was developed based on solving the corresponding boundary value problems for partial differential equations of hyperbolic type on graphs (networks, trees). As applications, three different problems were chosen and solved within the general framework of network computational models. The first was the modeling of traffic flow. In solving this problem, a macroscopic approach was used in which the traffic flow is described by a nonlinear system of second-order hyperbolic equations. The results of numerical simulations showed that the model developed as part of the proposed approach reproduces well the real situation on various sections of the Moscow transport network over significant time intervals and can also be used to select the most optimal traffic management strategy in the city. The second was the modeling of data flows in computer networks. In this problem, the data flows of various connections in a packet data network were simulated as flows of a continuous medium. Conceptual and mathematical network models are proposed. The numerical simulation was carried out in comparison with the NS-2 network simulation system. The results showed that, in comparison with the NS-2 packet model, the developed streaming model demonstrates significant savings in computing resources while ensuring a good level of similarity and allows simulating the behavior of complex globally distributed IP networks. The third was the simulation of the propagation of gas impurities in ventilation networks. A computational mathematical model was developed for the propagation of finely dispersed or gaseous impurities in ventilation networks using the gas dynamics equations, by numerically linking regions of different sizes. The calculations showed that the model makes it possible to determine, with good accuracy, the distribution of the gas-dynamic parameters in the pipeline network and to solve the problems of dynamic ventilation management.
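
    As an illustration of the kind of edge-wise computation underlying such network models, below is a minimal Python sketch of one Lax–Friedrichs step for the scalar LWR traffic model on a single edge. The paper itself uses a nonlinear second-order hyperbolic system and couples the edges through vertex conditions, so this first-order scalar example is only a simplified stand-in.

```python
import numpy as np

def lax_friedrichs_step(rho, dx, dt, v_max=1.0, rho_max=1.0):
    """One Lax-Friedrichs step for the scalar LWR traffic model
    rho_t + (rho * v_max * (1 - rho/rho_max))_x = 0 on a single edge."""
    f = rho * v_max * (1.0 - rho / rho_max)
    rho_new = rho.copy()
    rho_new[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - 0.5 * dt / dx * (f[2:] - f[:-2])
    # simple outflow conditions at the edge ends (in a graph model the
    # vertices would impose coupling conditions here instead)
    rho_new[0], rho_new[-1] = rho_new[1], rho_new[-2]
    return rho_new

# illustrative Riemann-type initial data: a queue of dense traffic ahead
dx, dt = 0.01, 0.004              # dt chosen to satisfy the CFL condition
rho = np.where(np.linspace(0, 1, 101) < 0.5, 0.2, 0.8)
for _ in range(50):
    rho = lax_friedrichs_step(rho, dx, dt)
```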

  8. The paper concerns the study of the peculiarities of the Rice statistical distribution that make possible its efficient application to the tasks of high-precision phase measurement in optics. A strict mathematical proof of the stable character of the Rician distribution is provided using the example of a differential signal, namely: it is proved that the sum or the difference of two Rician signals also obeys the Rice distribution. Besides, formulas have been obtained for the parameters of the Rice distribution of the resulting sum or difference signal. Based upon the proved stable character of the Rice distribution, a new original technique for high-precision measurement of the phase shift of two quasi-harmonic signals is elaborated in the paper. This technique is grounded in the statistical analysis of the measured sampled data for the amplitudes of both signals and for the amplitude of a third signal, which is equal to the difference of the two signals to be compared in phase. The sought-for phase shift of the two quasi-harmonic signals is calculated from geometrical considerations as an angle of a triangle whose sides are equal to the amplitude values of the three indicated signals reconstructed against the noise background. Thereby, the proposed technique of measuring the phase shift using differential signal analysis is based upon amplitude measurements only, which significantly decreases the demands on the equipment and simplifies the implementation of the technique in practice. The paper provides both a strict mathematical substantiation of the new phase shift measuring technique and the results of its numerical testing. The elaborated method of high-precision phase measurement may be efficiently applied to a wide circle of tasks in various areas of science and technology, in particular in distance measuring, communication systems, navigation, etc.
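
    The geometric step described above (the phase shift obtained as an angle of a triangle whose sides are the three reconstructed amplitudes) reduces to the law of cosines. A minimal Python sketch of that final step follows, with hypothetical amplitude values; the statistical reconstruction of the amplitudes from Rician-distributed samples is not shown.

```python
import numpy as np

def phase_shift_from_amplitudes(a1, a2, a_diff):
    """Phase shift between two quasi-harmonic signals, reconstructed from
    the amplitudes a1, a2 of the signals and the amplitude a_diff of their
    difference, treated as the sides of a triangle (law of cosines)."""
    cos_phi = (a1**2 + a2**2 - a_diff**2) / (2.0 * a1 * a2)
    return np.arccos(np.clip(cos_phi, -1.0, 1.0))

# hypothetical example: two unit-amplitude signals shifted by ~60 degrees
phi = np.deg2rad(60.0)
a_diff = np.sqrt(1.0**2 + 1.0**2 - 2.0 * np.cos(phi))   # |A1 - A2*exp(i*phi)|
print(np.rad2deg(phase_shift_from_amplitudes(1.0, 1.0, a_diff)))  # ~60
```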

  9. Berger A.I., Guda S.A.
    Optimal threshold selection algorithms for multi-label classification: property study
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238

    Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at its first stage, a certain ranking function is built for each class, ordering all objects in some way, and at the second stage the optimal thresholds are selected, such that the objects on one side of a threshold are assigned to the current class and the objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of searching for a fixed point of a specially introduced transformation $\boldsymbol V$ defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of analysis of the domain of $\boldsymbol V$. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity in the behavior of both algorithms was found for problems in which the domain of $\boldsymbol V$ contains large linear boundaries. When the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease with an increase in the number of classes: in this case, the linearization method determines the argument of the optimal point quite accurately, while the method of analysis of the domain of $\boldsymbol V$ accurately determines the polar radius.
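
    For orientation, below is a minimal Python sketch of the second stage of the plug-in approach: evaluating the $F$-measure of average precision and recall for a threshold vector and naively searching over a common threshold. It illustrates the objective only; the linearization method and the method of analysis of the domain of $\boldsymbol V$ studied in the paper are not reproduced here, and the data are synthetic.

```python
import numpy as np

def f_measure(p, r, beta=1.0):
    """F-measure of (average) precision p and recall r."""
    denom = beta**2 * p + r
    return (1 + beta**2) * p * r / denom if denom > 0 else 0.0

def macro_pr(scores, labels, thresholds):
    """Average precision and recall over classes for a threshold vector.
    scores, labels: (n_objects, n_classes) arrays; labels are 0/1."""
    pred = scores >= thresholds                  # broadcasts over classes
    tp = (pred & (labels == 1)).sum(axis=0)
    npred = pred.sum(axis=0)
    npos = labels.sum(axis=0)
    precision = np.where(npred > 0, tp / np.maximum(npred, 1), 1.0)
    recall = np.where(npos > 0, tp / np.maximum(npos, 1), 1.0)
    return precision.mean(), recall.mean()

# naive search over a single common threshold (for illustration only; the
# paper's algorithms optimize a full per-class threshold vector via the map V)
rng = np.random.default_rng(0)
scores = rng.random((200, 5))
labels = (rng.random((200, 5)) < 0.3).astype(int)
best = max((f_measure(*macro_pr(scores, labels, t)), t)
           for t in np.linspace(0.05, 0.95, 19))
print("best F, threshold:", best)
```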

  10. Zhluktov S.V., Aksenov A.A., Kuranosov N.S.
    Simulation of turbulent compressible flows in the FlowVision software
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 805-825

    Simulation of turbulent compressible gas flows using the standard $k-\varepsilon$ turbulence model (KES), the $k-\varepsilon$ FlowVision model (KEFV) and the SST $k-\omega$ model is discussed in this article. A new version of the KEFV turbulence model is presented and the results of its testing are shown. A numerical investigation of the discharge of an over-expanded jet from a conical nozzle into unbounded space is performed, and the results are compared against experimental data. The dependence of the results on the computational mesh and on the turbulence specified at the nozzle inlet is demonstrated. The conclusion is drawn that it is necessary to account for compressibility in two-parameter turbulence models. The simple method proposed by Wilcox in 1994 suits this purpose well. As a result, the range of applicability of the three aforementioned two-parameter turbulence models is essentially extended. Particular values of the constants responsible for the account of compressibility in the Wilcox approach are proposed. It is recommended to specify these values in simulations of compressible flows with the KES, KEFV, and SST models.

    In addition, the question of how to obtain correct characteristics of supersonic turbulent flows using two-parameter turbulence models is considered. Calculations on different grids have shown that, when a laminar flow is specified at the nozzle inlet and wall functions at its surfaces, one obtains a laminar core of the flow up to the fifth Mach disk. In order to obtain correct flow characteristics, it is necessary either to specify two parameters characterizing the turbulence of the inflowing gas, or to set a “starting” turbulence in a limited volume enveloping the region of the presumed laminar-turbulent transition near the nozzle exit. The latter possibility is implemented in the KEFV model.
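
    For reference, compressibility corrections of the kind mentioned above typically augment the dissipation rate in the $k$-equation with a dilatational contribution controlled by the turbulent Mach number. A hedged sketch of the commonly cited form of the Wilcox correction is

$$
M_t = \frac{\sqrt{2k}}{a}, \qquad \varepsilon_{\mathrm{eff}} = \varepsilon\,\bigl[1 + \xi^{*} F(M_t)\bigr], \qquad F(M_t) = \bigl(M_t^2 - M_{t0}^2\bigr)\, H\bigl(M_t - M_{t0}\bigr),
$$

    where $a$ is the local speed of sound and $H$ is the Heaviside step function; the particular values of $\xi^{*}$ and $M_{t0}$ recommended by the authors are not stated in the abstract and are not reproduced here.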
