Search results for 'testing':
Articles found: 132
  1. Petrov M.N., Zimina S.V., Dyachenko D.L., Dubodelov A.V., Simakov S.S.
    Dual-pass Feature-Fused SSD model for detecting multi-scale images of workers on the construction site
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 57-73

    When recognizing workers in images of a construction site obtained from surveillance cameras, it is typical for the objects of detection to differ greatly in spatial scale from one another and from other objects. The accuracy of detecting small objects can be increased by using the Feature-Fused modification of the SSD detector. Combined with overlapping image slicing at inference, this model copes well with the detection of small objects. However, the practical use of this approach requires manual adjustment of the slicing parameters, which reduces detection accuracy both on scenes that differ from those used in training and on large objects. In this paper, we propose an algorithm for the automatic selection of image slicing parameters depending on the ratio of the characteristic geometric dimensions of objects in the image. We have developed a two-pass version of the Feature-Fused SSD detector that automatically determines the optimal image slicing parameters. On the first pass, a fast truncated version of the detector is used to determine the characteristic sizes of the objects of interest. On the second pass, the final detection is performed with the slicing parameters selected after the first pass. A dataset of images of workers on a construction site was collected, including large, small, and mixed-scale images of workers. To compare the detection results of a one-pass algorithm without splitting the input image, a one-pass algorithm with uniform splitting, and the two-pass algorithm with selection of the optimal splitting, we considered tests involving only large objects, only very small objects, a high density of objects in both the foreground and the background, and objects only in the background.
In the range of cases we considered, our approach outperforms the compared approaches, copes well with the problem of double detections, and demonstrates a quality of 0.82–0.91 according to the mAP (mean Average Precision) metric.
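The automatic-selection step can be illustrated with a short sketch (the function and constants below are hypothetical illustrations, not taken from the paper): given the boxes found by the fast first pass, pick a tile size proportional to the characteristic object size and an overlap large enough that an object sitting on a seam fits entirely into one tile.

```python
def choose_slicing(boxes, image_wh, tile_to_object_ratio=8, overlap_factor=1.5):
    """Pick a tile size and overlap from first-pass detections.

    boxes: list of (x, y, w, h) bounding boxes from the fast first pass.
    The ratio and overlap constants are illustrative, not from the paper.
    """
    if not boxes:
        return image_wh, 0  # no detections: process the whole image as one tile
    # characteristic object size: median of the larger box side
    sides = sorted(max(w, h) for (_, _, w, h) in boxes)
    char_size = sides[len(sides) // 2]
    tile = min(max(image_wh), int(char_size * tile_to_object_ratio))
    overlap = int(char_size * overlap_factor)  # a seam object fits into one tile
    return (tile, tile), overlap
```

The second pass would then run the full detector on overlapping tiles of the returned size, merging duplicate detections across seams.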

  2. An algorithm is proposed for identifying the parameters of a 2D vortex structure using information about the flow velocity at a finite (small) set of reference points. The approach is based on using a set of point vortices as a model system and minimizing a functional that compares the model and known sets of velocity vectors in the space of model parameters. The numerical implementation uses gradient descent with step-size control, approximation of derivatives by finite differences, and the analytical expression for the velocity field induced by the point vortex model. An experimental analysis of the algorithm on test flows is carried out: a single point vortex and a system of several point vortices, a Rankine vortex, and a Lamb dipole. From the velocity fields of the test flows, the velocity vectors used for identification were taken at a randomly distributed set of reference points (from 3 to 200 points). The computations showed that: the algorithm converges to the minimum from a wide range of initial approximations; the algorithm converges in all cases when the reference points are located in areas where the streamlines of the test and model systems are topologically equivalent; if the streamlines of the systems are not topologically equivalent, the percentage of successful calculations decreases, but convergence can still take place; when the method converges, the coordinates of the vortices of the model system are close to the centers of the vortices of the test configurations, and in many cases so are the values of their circulations; convergence depends more on the location than on the number of vectors used for identification. The results of the study allow us to recommend the proposed algorithm for identifying 2D vortex structures whose streamlines are topologically close to those of systems of point vortices.
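A minimal sketch of this identification scheme follows, assuming the standard point-vortex velocity field; the parameter layout, functional form, and step-halving control below are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def vortex_velocity(params, pts):
    """Velocity at pts induced by point vortices.

    params: flat array [x1, y1, G1, x2, y2, G2, ...] of positions and
    circulations; pts: array of shape (N, 2)."""
    vel = np.zeros_like(pts, dtype=float)
    for x0, y0, g in params.reshape(-1, 3):
        dx, dy = pts[:, 0] - x0, pts[:, 1] - y0
        r2 = dx**2 + dy**2
        vel[:, 0] += -g / (2 * np.pi) * dy / r2
        vel[:, 1] += g / (2 * np.pi) * dx / r2
    return vel

def identify(pts, v_ref, params0, steps=2000, h=1e-6, lr=0.1):
    """Fit vortex parameters by gradient descent with simple step halving."""
    def cost(p):
        return np.sum((vortex_velocity(p, pts) - v_ref) ** 2)
    p = params0.astype(float).copy()
    c = cost(p)
    for _ in range(steps):
        # forward finite differences for the gradient
        grad = np.array([(cost(p + h * e) - c) / h for e in np.eye(p.size)])
        step = lr
        while step > 1e-12 and cost(p - step * grad) >= c:
            step /= 2  # halve the step until the cost decreases
        p -= step * grad
        c = cost(p)
    return p
```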

  3. Kazorin V.I., Kholodov Y.A.
    Framework sumo-atclib for adaptive traffic control modeling
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 69-78

    This article proposes the sumo-atclib framework, built on the open-source microscopic traffic simulation environment SUMO, which provides a convenient uniform interface for testing adaptive control algorithms under various constraints, for example, restrictions on phase durations, phase sequences, and the minimum time between control actions. The framework separates the functionality of controllers (class TrafficController) from that of the monitoring and detection system (class StateObserver), which mirrors the architecture of real traffic light installations and adaptive control systems and simplifies the testing of new algorithms, since combinations of different controllers and vehicle detection systems can be varied freely. In addition, unlike most existing solutions, a road class Road has been added that combines a set of lanes; this makes it possible, for example, to determine the adjacency of regulated intersections in cases where the number of lanes changes on the way from one intersection to another and the road graph is therefore divided into several edges. The algorithms themselves use the same interface and are abstracted from the specific parameters of the detectors and network topologies; that is, this solution is intended to allow a transport engineer to test ready-made algorithms on a new scenario without adapting them to new conditions, which speeds up the development of the control system and reduces design overhead. At the moment, the package contains examples of the MaxPressure algorithm and the Q-learning reinforcement learning method, and the database of examples is being extended. The framework also includes a set of SUMO scenarios for testing algorithms, comprising both synthetic maps and well-verified SUMO scenarios such as Cologne and Ingolstadt.
In addition, the framework provides a set of automatically calculated metrics, such as total travel time, delay time, and average speed, as well as a ready-made example of metric visualization.
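The controller/observer split described above can be sketched as follows. The class responsibilities follow the abstract, but all method names and the timing constraint below are illustrative, not the actual sumo-atclib API.

```python
from abc import ABC, abstractmethod

class StateObserver(ABC):
    """Monitoring/detection side: turns raw simulator data into a state."""
    @abstractmethod
    def observe(self, simulation):
        ...

class TrafficController(ABC):
    """Control side: maps an observed state to a signal action, enforcing
    a constraint on the minimum time between control actions."""
    def __init__(self, min_action_interval=5):
        self.min_action_interval = min_action_interval
        self._last_action_time = None

    def act(self, state, now):
        if (self._last_action_time is not None
                and now - self._last_action_time < self.min_action_interval):
            return None  # constraint: too soon to switch again
        self._last_action_time = now
        return self.decide(state)

    @abstractmethod
    def decide(self, state):
        ...

class QueueObserver(StateObserver):
    def observe(self, simulation):
        # in a real setup this would query detectors; here the "simulation"
        # is already a dict of per-approach queue lengths
        return simulation

class LongestQueueController(TrafficController):
    def decide(self, state):
        return max(state, key=state.get)  # give green to the longest queue
```

Because the controller only sees the observer's state, swapping detection systems or control algorithms requires no changes on the other side, which is the point of the split.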

  4. Elizarova T.G., Zherikov A.V., Kalachinskaya I.S.
    Numerical solution of quasi-hydrodynamic equations on non-structured triangle mesh
    Computer Research and Modeling, 2009, v. 1, no. 2, pp. 181-188

    A new method for flow modeling on unstructured grids is proposed. The method uses the quasi-hydrodynamic equations as the basic system, which are solved by the finite volume method. The mesh is constructed using Delaunay triangulation. The proposed method was tested by modeling incompressible flow through a channel with a complex profile. The results show that the proposed method can be used for flow modeling on unstructured grids.

  5. Cheremisina E.N., Senner A.E.
    The use of GIS INTEGRO in searching tasks for oil and gas deposits
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444

    GIS INTEGRO is the geoinformation software system forming the basis for the integrated interpretation of geophysical data in studying the deep structure of the Earth. GIS INTEGRO combines a variety of computational and analytical applications for solving geological and geophysical problems. It includes interfaces for changing the form of data representation (raster, vector, regular and irregular observation networks), a unit for converting map projections, and application blocks, including a block for integrated data analysis and the solution of prognostic and diagnostic tasks.

    The methodological approach is based on the integration and combined analysis of geophysical data along regional profiles, geophysical potential fields, and additional geological information on the study area. The analytical support includes packages for transformations, filtering, statistical processing, computations, lineament detection, the solution of direct and inverse problems, and the integration of geographic information.

    The technology and its software and analytical support were tested in solving tectonic zoning problems at scales of 1:200000 and 1:1000000 in Yakutia, Kazakhstan, and the Rostov region, in studying the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT, and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.

    The article describes two possible approaches to parallel computations for processing 2D or 3D grids of geophysical data. As an example, the implementation in a GRID environment of the application software ZondGeoStat (statistical sensing), which creates a 3D grid model on the basis of 2D grid data, is presented. The experience has demonstrated the high efficiency of using a GRID environment for computations in geophysical research.

  6. Kalashnikov S.V., Krivoschapov A.A., Mitin A.L., Nikolaev N.V.
    Computational investigation of aerodynamic performance of the generic flying-wing aircraft model using FlowVision computational code
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 67-74

    The modern approach to modernization of experimental techniques involves the design of mathematical models of the wind-tunnel, which are also referred to as Electronic or Digital Wind-Tunnels. They are meant to supplement experimental data with computational analysis. Using Electronic Wind-Tunnels is supposed to provide accurate information on the aerodynamic performance of an aircraft based on a set of experimental data, to obtain agreement between data from different test facilities, and to enable comparison between computational results for flight conditions and data obtained in the presence of the support system and test section.

    Completing this task requires some preliminary research, which involves extensive wind-tunnel testing as well as RANS-based computational research with the use of supercomputer technologies. At different stages of the computational investigation one may have to model not only the aircraft itself but also the wind-tunnel test section and the model support system. Modelling such complex geometries inevitably results in quite complex vortical and separated flows that have to be simulated. Another problem is that boundary layer transition is often present in wind-tunnel testing due to quite small model scales and therefore low Reynolds numbers.

    In the current article the first stage of the Electronic Wind-Tunnel design program is covered. This stage involves a computational investigation of the aerodynamic characteristics of the generic flying-wing UAV model previously tested in the TsAGI T-102 wind-tunnel. Since this stage is preliminary, the model was simulated without taking the test-section and support-system geometry into account. The boundary layer was considered fully turbulent.

    For the current research the FlowVision computational code was used because of its automatic grid generation feature and the stability of its solver when simulating complex flows. A two-equation k–ε turbulence model was used with special wall functions designed to properly capture flow separation. The computed lift and drag coefficients for different angles of attack were compared to the experimental data.

  7. We construct new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of the capacity increase, we develop control tests on the same class of logic operations in which a parallel organization of the calculations is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel calculations; the general publications on this theme are contained in the references. The tasks in the described tests can be defined as the calculation of the result of a sequence of operations of the same type from some algebra. If the operation is associative, then parallel calculation is effective through a suitable grouping of the process. In the theory of computation this corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is not currently known what kind of data elements the brain uses for logical or mathematical calculations, or how many elements are processed per unit time. Therefore the test contains a sequence of task presentations with different numbers of logical operations in a fixed alphabet; this number is the measure of the complexity of the task. Analyzing the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the form of the organization of the calculation. For sequential calculations only one processor works, and the solution time is a linear function of the complexity.
If new processors begin to work in parallel as the complexity of the task increases, then the dependence of the solution time on the complexity is represented by a downward-convex curve. To detect the situation in which a person increases the speed of a single processor as the complexity grows, we use a series of tasks with similar operations but in a non-associative algebra. In such tasks parallel calculation gains little from increasing the number of processors; this is the control set of tests. In the article we also consider one more class of tests, based on calculating the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of the parallel calculation of the final automaton state. For all tests we estimate the effectiveness of the parallel calculation. This article does not contain experimental results.
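The timing analysis the tests rely on can be sketched as follows (illustrative only; the functions and tolerance are not from the article): a sequential process gives a linear time-versus-complexity dependence whose slope is the reciprocal of the processor productivity, while recruitment of additional processors bends the curve away from a straight line.

```python
def second_differences(times):
    """Discrete second differences of a time series (equally spaced complexities)."""
    return [times[i + 1] - 2 * times[i] + times[i - 1]
            for i in range(1, len(times) - 1)]

def classify(complexities, times, tol=1e-9):
    """Classify a measured time-vs-complexity dependence.

    A linear dependence suggests sequential processing; the estimated
    processor productivity (operations per unit time) is the reciprocal
    of the slope. A curved dependence suggests parallel processing.
    """
    d2 = second_differences(times)
    if all(abs(d) <= tol for d in d2):
        slope = (times[-1] - times[0]) / (complexities[-1] - complexities[0])
        return "sequential", 1.0 / slope
    return "parallel", None
```

Real measurements are noisy, so in practice one would fit linear and curved models and compare residuals rather than test second differences against an exact tolerance.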

  8. Svistunov I.N., Kolokol A.S.
    An analysis of interatomic potentials for vacancy diffusion simulation in concentrated Fe–Cr alloys
    Computer Research and Modeling, 2018, v. 10, no. 1, pp. 87-101

    The study tested the correctness of three interatomic potentials available in the scientific literature in reproducing vacancy diffusion in concentrated Fe–Cr alloys by molecular dynamics simulations. This was necessary for a further detailed study of the vacancy diffusion mechanism in these alloys with a Cr content of 5–25 at.% at temperatures in the range of 600–1000 K. The analysis of the potentials was performed on alloy models with Cr contents of 10, 20, and 50 at.%. The model with a chromium content of 50 at.% was considered with a view to further study of diffusion processes in chromium-rich precipitates in these alloys. The formation energies and the atomic mobilities of iron and chromium atoms were calculated and analyzed in the alloys via an artificially created vacancy for all the potentials used. The time dependence of the mean squared displacement of atoms was chosen as the main characteristic for the analysis of atomic mobilities. The simulation of vacancy formation energies did not show qualitative differences between the investigated potentials. The study of atomic mobilities showed a poor reproduction of vacancy diffusion in the simulated alloys by the concentration-dependent model (CDM), which strongly underestimated the mobility of chromium atoms via the vacancy in the investigated range of temperatures and chromium contents. It was also established that the two-band model (2BM) of potentials, in both its original and modified versions, does not have such drawbacks. This allows one to use these potentials in simulations of the vacancy diffusion mechanism in Fe–Cr alloys. Both potentials show a significant dependence of the ratio of chromium and iron atomic mobilities on temperature and Cr content in the simulated alloys. The quantitative diffusion coefficients obtained with these potentials also differ significantly.
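The mean-squared-displacement analysis mentioned above can be sketched in a few lines as a generic Einstein-relation estimate (not the authors' code; function names are illustrative):

```python
import numpy as np

def mean_squared_displacement(traj):
    """MSD(t) relative to the first frame, averaged over atoms.

    traj: array of shape (n_frames, n_atoms, 3) of unwrapped positions."""
    disp = traj - traj[0]
    return (disp ** 2).sum(axis=2).mean(axis=1)

def diffusion_coefficient(msd, dt, dim=3):
    """Einstein relation: D = slope of MSD vs time / (2 * dim).

    The slope comes from a least-squares linear fit of the MSD curve."""
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t, msd, 1)[0]
    return slope / (2 * dim)
```

In practice the fit is restricted to the diffusive (linear) regime of the MSD curve, discarding the short-time ballistic part.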

  9. Zhluktov S.V., Aksenov A.A., Savitskiy D.V.
    High-Reynolds number calculations of turbulent heat transfer in FlowVision software
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 461-481

    This work presents the model of heat wall functions FlowVision (WFFV), which allows simulation of non-isothermal flows of fluid and gas near solid surfaces on relatively coarse grids with the use of turbulence models. The work follows research on the development of wall functions applicable in a wide range of values of the quantity y+. Model WFFV assumes smooth profiles of the tangential component of velocity, turbulent viscosity, temperature, and turbulent heat conductivity near a solid surface. The possibility of using a simple algebraic model for calculation of a variable turbulent Prandtl number is investigated in this study (the turbulent Prandtl number enters model WFFV as a parameter); the results are satisfactory. The details of the implementation of model WFFV in the FlowVision software are explained. In particular, the boundary condition for the energy equation used in high-Reynolds-number calculations of non-isothermal flows is considered. The boundary condition is deduced for the energy equation written via thermodynamic enthalpy and via full enthalpy. The capability of the model is demonstrated on two test problems: flow of incompressible fluid past a plate and supersonic flow of gas past a plate (M = 3).
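For illustration, one widely used simple algebraic model for a variable turbulent Prandtl number is the Kays correlation, Pr_t = 0.85 + 0.7/Pe_t, where Pe_t = (nu_t/nu)·Pr is the turbulent Peclet number; whether WFFV uses this particular form is not stated in the abstract.

```python
def turbulent_prandtl_kays(nu_t_over_nu, pr):
    """Kays correlation for the turbulent Prandtl number.

    Pr_t = 0.85 + 0.7 / Pe_t, with Pe_t = (nu_t / nu) * Pr. Far from the
    wall (large nu_t/nu) Pr_t tends to 0.85; it grows in the near-wall
    region where nu_t/nu is small."""
    pe_t = nu_t_over_nu * pr
    return 0.85 + 0.7 / pe_t
```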

    Analysis of the literature shows that there exists essential ambiguity in experimental data and, as a consequence, in empirical correlations for the Stanton number (a dimensionless heat flux). The calculations suggest that the default values of the model parameters, automatically specified in the program, allow calculation of heat fluxes at extended solid surfaces with engineering accuracy. At the same time, it is obvious that one cannot devise universal wall functions. For this reason, the controls of model WFFV are made accessible from the FlowVision interface. When necessary, a user can tune the model for simulation of the required type of flow.

    The proposed model of wall functions is compatible with all the turbulence models implemented in the FlowVision software: the algebraic model of Smagorinsky, the Spalart-Allmaras model, the SST $k-\omega$ model, the standard $k-\varepsilon$ model, the $k-\varepsilon$ model of Abe, Kondoh, Nagano, the quadratic $k-\varepsilon$ model, and $k-\varepsilon$ model FlowVision.

  10. Sorokin K.E., Byvaltsev P.M., Aksenov A.A., Zhluktov S.V., Savitskiy D.V., Babulin A.A., Shevyakov V.I.
    Numerical simulation of ice accretion in FlowVision software
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 83-96

    Certifying a transport airplane for flights under icing conditions requires calculations aimed at determining the dimensions and shapes of the ice bodies formed on the airplane surfaces. To date, there is no software developed in Russia for the simulation of ice accretion that has been authorized by the Russian certifying supervisory authority. This paper describes the methodology IceVision, recently developed in Russia on the basis of the FlowVision software, for calculating ice accretion on airplane surfaces.

    The main difference of methodology IceVision from the other approaches known from the literature consists in using the Volume Of Fluid (VOF) technology for tracking the surface of the growing ice body. The methodology assumes solving a time-dependent problem of continuous growth of the ice body in the Eulerian formulation. The ice is explicitly present in the computational domain, and the energy equation is integrated inside the ice body. In the other approaches, the changing ice shape is taken into account by modifying the aerodynamic surface and using a Lagrangian mesh; in doing so, the heat transfer into the ice is allowed for by an empirical model.

    The implemented mathematical model provides the capability to simulate the formation of rime (dry) and glaze (wet) ice and automatically identifies zones of rime and glaze ice. In a rime (dry) ice zone, the temperature of the contact surface between air and ice is calculated with account of ice sublimation and heat conduction inside the ice. In a glaze (wet) ice zone, the flow of the water film over the ice surface is allowed for. The film freezes due to evaporation and heat transfer inside the air and the ice. Methodology IceVision allows for separation of the film. For simulation of the two-phase flow of air and droplets, a multi-velocity model is used within the Eulerian approach. Methodology IceVision allows for the size distribution of droplets. The computational algorithm takes account of the essentially different time scales of the physical processes proceeding in the course of ice accretion, viz., the air–droplet flow, the water flow, and the ice growth. Numerical solutions of validation test problems demonstrate the efficiency of methodology IceVision and the reliability of FlowVision results.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

