Search results for 'data visualization':
Articles found: 20
  1. Rusanova Ya.M., Cherdyntseva M.I.
    Visualization of three-dimensional scenes. Technology for data storing and manipulating
    Computer Research and Modeling, 2009, v. 1, no. 2, pp. 119-127

    The article addresses problems of declaring and storing information for the visualization of objects. The proposed storage structure and resource-control technology can be applied to real-time visualization of three-dimensional scenes. The implementation uses the Sample Framework from the DirectX SDK and the Direct3D Extension Library (D3DX).

    Views (last year): 2. Citations: 2 (RSCI).
  2. The paper provides a solution of the task of calculating the parameters of a Rician distributed signal on the basis of the maximum likelihood principle in the limiting cases of large and small values of the signal-to-noise ratio. Analytical formulas are obtained for the solution of the system of maximum likelihood equations for the sought signal and noise parameters, both for the one-parameter approximation, when only one parameter is calculated on the assumption that the second is known a priori, and for the two-parameter task, when both parameters are a priori unknown. Direct calculation of the sought signal and noise parameters by formulas avoids the time-consuming numerical solution of the system of nonlinear equations and thus shortens the computer processing of signals and images. Results of computer simulation confirming the theoretical conclusions are presented. The task is meaningful for Rician data processing, in particular in magnetic-resonance visualization.

    Views (last year): 2.
  3. The paper solves the two-parameter task of joint signal and noise estimation at data analysis under the conditions of the Rice distribution by techniques of mathematical statistics: the maximum likelihood method and variants of the method of moments. The considered variants of the method of moments include joint signal and noise estimation based on measuring the 2-nd and 4-th moments (MM24) and based on measuring the 1-st and 2-nd moments (MM12). For each of the elaborated methods, explicit systems of equations have been obtained for the sought parameters of the signal and noise. An important mathematical result of the investigation is that the solution of the system of two nonlinear equations in two variables, the sought signal and noise parameters, has been reduced to the solution of just one equation in one unknown, which matters both for the theoretical investigation of the proposed technique and for its practical application, since it substantially decreases the computational resources required for the technique's realization. The theoretical analysis leads to an important practical conclusion: solving the two-parameter task does not increase the required numerical resources compared with the one-parameter approximation. The task is meaningful for Rician data processing, in particular image processing in magnetic-resonance visualization systems. The theoretical conclusions have been confirmed by the results of a numerical experiment.

    Views (last year): 2. Citations: 2 (RSCI).
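Entries 2 and 3 above rest on moment relations of the Rice distribution. A minimal sketch of the MM24 variant (matching the 2-nd and 4-th raw moments) is given below; the sample size and the true parameter values are purely illustrative, and the closed-form root chosen here is one standard way to solve the resulting quadratic, not necessarily the exact formulation used in the articles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" Rician parameters: signal amplitude and noise level.
nu, sigma = 3.0, 1.0
n = 200_000

# A Rician sample: envelope of a complex Gaussian with nonzero mean.
x = np.hypot(nu + sigma * rng.standard_normal(n),
             sigma * rng.standard_normal(n))

# MM24: match the 2nd and 4th raw moments of the Rice distribution,
#   E[x^2] = nu^2 + 2*sigma^2
#   E[x^4] = nu^4 + 8*nu^2*sigma^2 + 8*sigma^4.
# Eliminating nu^2 gives a quadratic in sigma^2:
#   4*sigma^4 - 4*m2*sigma^2 + (m4 - m2^2) = 0.
m2 = np.mean(x**2)
m4 = np.mean(x**4)
sigma2_hat = (m2 - np.sqrt(2.0 * m2**2 - m4)) / 2.0   # smaller root
nu_hat = np.sqrt(m2 - 2.0 * sigma2_hat)

print(nu_hat, np.sqrt(sigma2_hat))   # close to 3.0 and 1.0
```

The smaller root of the quadratic is taken because for a nonzero signal the noise variance satisfies sigma^2 < m2 / 2.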
  4. Usanov M.S., Kulberg N.S., Morozov S.P.
    Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248

    The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. Analysis of domestic and foreign literature has shown that the most effective algorithms for noise reduction of CT data use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. For this reason, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in C++11 in Microsoft Visual Studio 2015. The main distinction of the developed noise-reduction algorithm is the use of an improved mathematical model of CT noise, based on the Poisson and Gaussian distributions of the logarithmic value, developed earlier by our team. This allows a more accurate determination of the noise level and, thus, of the data-processing threshold. As a result of the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed increased information content of the processed data compared to the original data, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment showed a decrease of the standard deviation (SD) by more than 6 times in the processed areas, and high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. The newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main distinction of the developed threshold is its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values put in correspondence with each pixel of the CT. Its principle of operation is based on threshold criteria, which fits well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.

    Views (last year): 21.
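The simplified bilateral filtering that entry 4 builds on can be sketched in a few lines for a single 2D slice. This is a generic textbook bilateral filter with Gaussian spatial and range kernels; the article's CT-specific noise model, three-dimensional accumulation and context dynamic threshold are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Simplified bilateral filter: each output pixel is a weighted mean of
    its neighbors, with weights falling off both with spatial distance
    (sigma_s) and with intensity difference (sigma_r), preserving edges."""
    h, w = img.shape
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))   # spatial
            wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))  # range
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm

# Demo on a flat region with additive noise: the noise level should drop.
rng = np.random.default_rng(1)
noisy = np.full((64, 64), 100.0) + rng.normal(0.0, 10.0, (64, 64))
denoised = bilateral_filter(noisy)
print(noisy.std(), denoised.std())
```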
  5. Yakovleva T.V.
    Signal and noise parameters’ determination at rician data analysis by method of moments of lower odd orders
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728

    The paper develops a new mathematical method of joint signal and noise parameter determination for the Rice statistical distribution by the method of moments, based on analysis of the 1-st and 3-rd raw moments of the Rician random value. An explicit system of equations has been obtained for the sought parameters of the signal and noise. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow calculating the sought parameters without numerically solving the equations. The elaborated technique ensures efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based only on processing the sampled measurements of the signal. The task is meaningful for Rician data processing, in particular in magnetic-resonance visualization systems, in ultrasound visualization systems, in the analysis of optical signals in range-measuring systems, in radiolocation, etc. The results of the investigation have shown that solving the two-parameter task by the proposed technique does not increase the demanded volume of computing resources compared with the one-parameter task solved in the approximation that the second parameter is known a priori. The results of the elaborated technique's computer simulation are provided; the numerical calculation of the signal and noise parameters has confirmed its efficiency. The accuracy of estimating the sought parameters by the technique developed in this paper has been compared with that of the previously elaborated method of moments based on processing the measured data for the lower even moments of the analyzed signal.

    Views (last year): 10. Citations: 1 (RSCI).
  6. Pechenyuk A.V.
    Optimization of a hull form for decrease ship resistance to movement
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 57-65

    Optimization of hull lines for minimum resistance to movement is a problem of current interest in ship hydrodynamics. In practice, lines design is still to some extent an art. The usual approaches to decreasing ship resistance are based on model experiments and/or CFD simulation, following the trial-and-error method. The paper presents a new method of in-detail hull form design based on the wave-based optimization approach. The method provides systematic variation of the hull geometric form, corresponding to alteration of the longitudinal distribution of the hull volume, while its vertical volume distribution is fixed or highly controlled. It is well known from theoretical studies that the vertical distribution cannot be optimized by the condition of minimum wave resistance, so it can be neglected in the optimization procedures. The method's efficiency was investigated by application to the forebody of KCS, the well-known test object from the workshop Gothenburg-2000. Variations of the longitudinal distribution of the volume were set on the sectional area curve as finite volume increments and then transferred to the lines plan with the help of special frame-transformation methods. CFD towing simulations were carried out for the initial hull form and six modified variants. According to the simulation results, the examined modifications caused resistance increments in the range 1.3–6.5 %. The optimization process was underpinned by data analysis based on a new hypothesis, according to which the resistance increments caused by separate longitudinal segments of the hull form obey the principle of superposition. The achieved results are presented as an optimum distribution of volume embodied in the optimized hull form, whose resistance is 8.9 % lower than that of the initial KCS hull form. Visualization of the wave patterns showed an attenuation of the transverse wave components and an intensification of the diverging wave components.

    Views (last year): 10. Citations: 1 (RSCI).
  7. Abgaryan K.K., Eliseev S.V., Zhuravlev A.A., Reviznikov D.L.
    High-speed penetration. Discrete-element simulation and experiments
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 937-944

    The paper presents the results of numerical simulation and experimental data on the high-speed penetration of an impactor into an obstacle. In the calculations, a discrete-element model has been used, based on representing the impactor and the target by a set of close-packed interconnected particles. This class of models finds increasingly wide application in problems of high-speed interaction of bodies. In previous works of the authors, the application of the discrete-element model to the problem of penetration of spherical impactors into massive targets was considered. On the basis of a comparative analysis of computational and physical experiments, it was found that for a wide class of high-speed penetration problems, high accuracy of discrete-element modeling can be achieved using the two-parameter Lennard–Jones potential. The binding energy was identified as a function of the dynamic hardness of the materials. It was shown that this approach makes it possible to describe accurately the penetration process in the range of impactor velocities 500–2500 m/s.

    In this paper, we compare the results of discrete-element modeling with experimental data on penetration of high-strength targets of different thickness by steel impactors. The use of computational parallelization technologies on graphic processors in combination with 3D visualization and animation of the results makes it possible to obtain detailed spatio-temporal patterns of the penetration process and compare them with experimental data.

    A comparative analysis of the experimental and calculated data has shown a sufficiently high accuracy of discrete-element modeling for a wide range of target thicknesses: for thin targets pierced with preservation of the integrity of the deformed impactor, for targets of medium thickness, pierced with practically complete fragmentation of the impactor at the exit from the target, and for thick impenetrable targets.

    Views (last year): 13. Citations: 4 (RSCI).
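The two-parameter Lennard–Jones interaction mentioned in entry 7 can be sketched as follows. The identification of the binding energy through dynamic hardness is the authors' contribution and is not reproduced here; `epsilon` and `r0` are generic placeholders, and the well-depth form of the potential below is the common two-parameter convention, assumed rather than taken from the article:

```python
import numpy as np

def lj_force(r_vec, epsilon, r0):
    """Force on a particle from the two-parameter Lennard-Jones potential
        U(r) = epsilon * ((r0/r)**12 - 2*(r0/r)**6),
    where epsilon is the binding energy (well depth) and r0 the equilibrium
    particle spacing. Returns the force vector acting along r_vec."""
    r = np.linalg.norm(r_vec)
    s = (r0 / r) ** 6
    # F = -dU/dr along r_hat; dU/dr = (12*epsilon/r) * (s - s**2),
    # so the magnitude is (12*epsilon/r) * (s**2 - s).
    magnitude = 12.0 * epsilon * (s * s - s) / r
    return magnitude * (r_vec / r)

# At the equilibrium spacing the force vanishes; beyond it the force is
# attractive (points back toward the other particle).
f_eq = lj_force(np.array([1.0, 0.0, 0.0]), epsilon=1.0, r0=1.0)
f_far = lj_force(np.array([1.5, 0.0, 0.0]), epsilon=1.0, r0=1.0)
print(f_eq, f_far)
```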
  8. The mathematical model, finite-difference schemes and algorithms for computing the transient thermo- and hydrodynamic processes involved in commissioning the unified system comprising the oil-producing well, the electrical submersible pump and the fractured-porous reservoir with bottom water are developed. These models are implemented in a computer package that simulates transient processes with simultaneous visualization of the results along with the computations. An important feature of the package Oil-RWP is its interaction with the special external program GCS, which simulates the work of the surface electric control station, and the data exchange between these two programs. The package Oil-RWP sends telemetry data and current parameters of the operating submersible unit to the program module GCS (direct coupling). The station controller analyzes the incoming data and generates the required control parameters for the submersible pump, which are sent back to Oil-RWP (feedback). Such an approach allows us to consider the developed software as an “Intellectual Well System”.

    Some principal results of the simulations can be briefly presented as follows. The transient time between inaction and quasi-steady operation of the producing well depends on the watering of the well stream, the filtration and capacitive parameters of the oil reservoir, the physical-chemical properties of the phases and the technical characteristics of the submersible unit. For large times, the solution of the nonstationary governing equations is practically identical to the solution of the inverse quasi-stationary problem with the same initial data. The developed software package is an effective tool for analysis, forecasting and optimization of the operating parameters of the unified oil-producing complex during its commissioning into the operating regime.

  9. Winn A.P., Kyaw H., Troyanovskyi V.M., Aung Y.L.
    Methodology and program for the storage and statistical analysis of the results of computer experiment
    Computer Research and Modeling, 2013, v. 5, no. 4, pp. 589-595

    The problem of accumulation and statistical analysis of computer experiment results is solved. The main experiment program is considered as the data source. The results of the main experiment are collected on a specially prepared Excel sheet with a pre-organized structure for accumulation, statistical processing and visualization of the data. The created method and program are used in studying the efficiency of the scientific research carried out by the authors.

    Views (last year): 1. Citations: 5 (RSCI).
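The accumulate-then-summarize pattern of entry 9 can be sketched with the standard library. An in-memory CSV sheet stands in here for the article's Excel worksheet, and the run data and column names are hypothetical:

```python
import csv
import io
import statistics

# Hypothetical results of repeated runs of the main experiment program.
runs = [{"run": i, "time_ms": t}
        for i, t in enumerate([12.1, 11.8, 12.4, 12.0, 11.9])]

# Accumulate the results on a "sheet" with a pre-organized structure.
sheet = io.StringIO()
writer = csv.DictWriter(sheet, fieldnames=["run", "time_ms"])
writer.writeheader()
writer.writerows(runs)

# Statistical processing of the accumulated column.
times = [row["time_ms"] for row in runs]
mean = statistics.mean(times)
sd = statistics.stdev(times)
print(f"mean={mean:.2f} ms, sd={sd:.2f} ms")
```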
  10. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586

    We describe an engineering-psychological experiment that continues the study of ways a person adapts to the increasing complexity of logical problems, by presenting a series of problems of increasing complexity determined by the volume of initial data. The tasks require calculations in an associative or non-associative system of operations. From the character of the change in problem-solving time as a function of the number of necessary operations, we can conclude whether the problems are solved by a purely sequential method or whether additional brain resources are connected to the solution in parallel mode. In a previously published experimental work, a person in the process of solving an associative problem recognized color images depicting meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to a parallel method of processing visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type contains associative calculations and allows a parallel solution algorithm. The other type is a control, containing problems in which the calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative; a parallel strategy significantly speeds up the solution with relatively small additional resources. As a control series of problems (to separate parallel work from the acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem, the parallel algorithm requires a large number of processors with a small efficiency coefficient. Therefore, the transition of a person to a parallel algorithm for this problem is almost impossible, and the processing of input information can be accelerated only by increasing speed. Comparing the dependence of solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to increasing problem complexity: uniform sequential, accelerated sequential, parallel computing (where possible), or undefined (for this method). The reduced number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke associations in the subject: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon, based on the appearance of a second set of initial data that arises in a person as a result of recognizing the depicted objects.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

