Search results for 'computer analysis':
Articles found: 129
  1. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

    The work continues research into a person's ability to improve the productivity of information processing by working in parallel or by increasing the speed of the analyzers. A subject receives a series of tasks whose solution requires processing a certain amount of information. The time and the correctness of each solution are recorded. The dependence of the average solution time on the amount of information in the task is determined over the correctly solved tasks. In accordance with the proposed method, the tasks involve computing expressions in two algebras, one of which is associative and the other non-associative. To ease the subjects' work, figurative graphic images of the algebra elements were used in the experiment. Non-associative computation was implemented in the form of the game “rock-paper-scissors”: it was necessary to determine the winning symbol in a long line of these figures, considering that they appear sequentially from left to right and each plays against the previous winner. Associative computation was based on the recognition of drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present. In each task at most one picture was missing. Computation in an associative algebra admits parallel counting, whereas in the absence of associativity only sequential computation is possible. Analysis of the solution times for a series of tasks therefore distinguishes between uniform sequential, accelerated sequential and parallel computing strategies. The experiments showed that all subjects used a uniform sequential strategy to solve the non-associative tasks. For the associative task, all subjects used parallel computation, and some accelerated the parallel computation as the complexity of the task grew.
A small fraction of the subjects, judging by the evolution of their solution times at high complexity, supplemented the parallel counting with a sequential stage of computation (possibly to verify the solution). We developed a special method for estimating the rate at which a person processes input information; it allowed us to estimate the degree of parallelism of the computation in the associative task. Parallelism of level two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition. Apparently, the difference in processing time is actually spent on the computation process itself. For the associative task with the minimal amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably because, for a small number of symbols, recognition nearly exhausts the computation in the non-associative task used.
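The contrast between the two task types can be sketched in a few lines (an illustrative toy, not the experimental software; the symbol names and the missing-picture alphabet are hypothetical): rock-paper-scissors is non-associative, so the winner of a symbol line can only be computed by a left-to-right fold, whereas finding the missing picture reduces to an associative set operation that can be evaluated in any order, hence in parallel.

```python
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def rps_winner(a, b):
    """Winner of one pairwise game. This operation is NOT associative."""
    return a if (a, b) in BEATS else b

def sequential_winner(symbols):
    # Non-associative: must scan left to right, the previous winner
    # plays against the next symbol.
    return reduce(rps_winner, symbols)

def missing_picture(line, alphabet):
    # Associative: set difference can be accumulated in any order,
    # so the work parallelizes over chunks of the line.
    missing = set(alphabet) - set(line)
    return missing.pop() if missing else None

print(sequential_winner(["rock", "paper", "scissors", "rock"]))  # prints "rock"
```

Note that `rps_winner(rps_winner("rock", "paper"), "scissors")` gives "scissors" while `rps_winner("rock", rps_winner("paper", "scissors"))` gives "rock", which is exactly the non-associativity that forces a sequential strategy.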

    Views (last year): 16.
  2. Ameenuddin M., Anand M.
    CFD analysis of hemodynamics in idealized abdominal aorta-renal artery junction: preliminary study to locate atherosclerotic plaque
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 695-706

    Atherosclerotic diseases such as carotid artery disease (CAD) and chronic kidney disease (CKD) are major causes of death worldwide. The onset of these atherosclerotic diseases in the arteries is governed by complex blood flow dynamics and hemodynamic parameters. Atherosclerosis in the renal arteries reduces arterial efficiency, which ultimately leads to renovascular hypertension. This work attempts to identify the localization of atherosclerotic plaque in the human abdominal aorta-renal artery junction using computational fluid dynamics (CFD).

    The atherosclerosis prone regions in an idealized human abdominal aorta-renal artery junction are identified by calculating relevant hemodynamic indicators from computational simulations using the rheologically accurate shear-thinning Yeleswarapu model for human blood. Blood flow is numerically simulated in a 3-D model of the artery junction using ANSYS FLUENT v18.2.

    The hemodynamic indicators calculated are the average wall shear stress (AWSS), oscillatory shear index (OSI), and relative residence time (RRT). Simulations of pulsatile flow (f = 1.25 Hz, Re = 1000) show that low AWSS and high OSI manifest in the region of the renal artery downstream of the junction and on the infrarenal section of the abdominal aorta lateral to the junction. High RRT, which is a relative index dependent on AWSS and OSI, is found to overlap with low AWSS and high OSI at the cranial surface of the renal artery proximal to the junction and on the surface of the abdominal aorta lateral to the bifurcation; this indicates that these regions of the junction are prone to atherosclerosis. The results match qualitatively with findings reported in the literature and serve as an initial step in illustrating the utility of CFD for locating atherosclerotic plaque.
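The three indicators have standard definitions in the hemodynamics literature and can be sketched directly from a sampled wall shear stress (WSS) vector history over one cardiac cycle (a generic illustration assuming uniform sampling and rectangle-rule averaging, not the authors' post-processing code):

```python
import numpy as np

def hemodynamic_indices(tau):
    """tau: array of shape (n_steps, 3), WSS vectors uniformly sampled
    over one cardiac cycle. Returns (AWSS, OSI, RRT)."""
    awss = np.mean(np.linalg.norm(tau, axis=1))      # time-average of |WSS|
    mean_vec = np.linalg.norm(np.mean(tau, axis=0))  # |time-average of WSS|
    osi = 0.5 * (1.0 - mean_vec / awss)              # 0 (unidirectional) .. 0.5 (purely oscillatory)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * awss)           # large where AWSS is low and OSI is high
    return awss, osi, rrt
```

Low AWSS, high OSI and hence high RRT are exactly the combination the article associates with atherosclerosis-prone regions.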

    Views (last year): 3.
  3. Lobacheva L.V., Borisova E.V.
    Simulation of pollution migration processes at municipal solid waste landfills
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 369-385

    The article reports the findings of an investigation into pollution migration processes at the municipal solid waste (MSW) landfill located in the water protection zone of Lake Seliger (Tver Region). The distribution of pollutants is investigated and migration parameters are determined in field and laboratory conditions at the landfill site. A mathematical model describing the physical and chemical processes of substance migration in soil strata is constructed. Pollutant migration is found to be driven by a variety of factors. The major ones, having a significant impact on the migration of MSW ingredients and taken into account mathematically, include convective transport, diffusion and sorption processes. The modified mathematical model differs from its conventional counterparts by considering a number of parameters reflecting the decrease in the concentration of ammonium and nitrate nitrogen ions in ground water (transpiration by plant roots, dilution with infiltration waters, etc.). An analytical solution for assessing the pollutant spread from the landfill is presented. The mathematical model provides a set of simulation models helping to obtain computational solutions of specific problems of vertical and horizontal migration of substances in the underground flow. Numerical experiments, analytical solutions, and field and laboratory data were used to study the dynamics of pollutant distribution from the object under study up to the lake. A long-term forecast for the spread of landfill pollution is made. Simulation experiments showed that, during the pollution migration from the landfill, zones of clean groundwater interact with zones of contaminated groundwater, each characterized by a different pollutant content. The results of computational experiments and analytical calculations are consistent with the findings of field and laboratory investigations of the object and give grounds to recommend the proposed models for predicting pollution migration from a landfill.
The analysis of the pollution migration simulation makes it possible to substantiate numerical estimates of the increase in $NH_4^+$ and $NO_3^-$ ion concentrations with landfill operation time. It is found that 100 years after the landfill opening, toxic filtrate components will fill the entire pore space from the landfill to the lake, resulting in a significant deterioration of the ecosystem of Lake Seliger.
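The convection-diffusion-sorption transport named above is commonly written, in its standard one-dimensional textbook form, as an advection-dispersion equation with linear equilibrium sorption (a generic form for illustration; the authors' modified model adds further sink terms such as root uptake and dilution, represented here only by the lumped first-order rate $\lambda$):

```latex
R\,\frac{\partial C}{\partial t}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
  - v\,\frac{\partial C}{\partial x}
  - \lambda C,
\qquad
R = 1 + \frac{\rho_b}{n}\,K_d ,
```

where $C$ is the pollutant concentration, $v$ the pore-water (convective) velocity, $D$ the dispersion-diffusion coefficient, and $R$ the retardation factor produced by linear sorption ($\rho_b$ bulk density, $n$ porosity, $K_d$ distribution coefficient).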

  4. The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material heated to a sufficiently high temperature serves as the partition in the vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass-spectrometry in the vacuum maintained at the outlet side.

    The diffusion coefficient of dissolved atomic hydrogen in the bulk is assumed to depend linearly on concentration, with the temperature dependence conforming to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives appear both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (or higher-) order accuracy based on explicit-implicit difference schemes is suggested for solving the corresponding nonlinear boundary-value problem. To avoid solving a nonlinear system of equations at every time step, we apply the explicit component of the difference scheme to the slower sub-processes.
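The explicit-implicit splitting can be illustrated with a minimal sketch (the parameter names, the simplified quadratic desorption law and the Dirichlet-like surface coupling are assumptions for illustration, not the authors' full model): the bulk diffusion step is advanced implicitly, while the nonlinear surface terms are evaluated at the old time level, so no nonlinear system has to be solved per step.

```python
import numpy as np

def step(c, q_in, q_out, D, dx, dt, mu_p, b, k):
    """One explicit-implicit time step: implicit (backward Euler) bulk
    diffusion, explicit update of the nonlinear surface ODEs."""
    n = len(c)
    r = D * dt / dx**2
    # tridiagonal matrix of the implicit diffusion step
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    rhs = c.copy()
    # surface concentrations enter as Dirichlet-like ghost values,
    # taken explicitly from the previous time level
    rhs[0] += r * q_in
    rhs[-1] += r * q_out
    c_new = np.linalg.solve(A, rhs)
    # explicit update of the surface ODEs: dissolution inflow mu_p,
    # quadratic desorption b*q**2, exchange with the near-surface bulk
    q_in_new = q_in + dt * (mu_p - b * q_in**2 - k * (q_in - c[0]))
    q_out_new = q_out + dt * (k * (c[-1] - q_out) - b * q_out**2)
    return c_new, q_in_new, q_out_new

# drive the scheme from a degassed initial state
c, q_in, q_out = np.zeros(20), 0.0, 0.0
for _ in range(500):
    c, q_in, q_out = step(c, q_in, q_out,
                          D=1e-2, dx=0.05, dt=1e-2, mu_p=1.0, b=1.0, k=1.0)
```

Because the quadratic desorption terms are the slow sub-process here, treating them explicitly keeps each step linear while the stiff diffusion part remains unconditionally stable.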

    The results of numerical modeling are presented to confirm the fitness of the model to experimental data. The degrees of impact of variations in hydrogen permeability parameters (“derivatives”) on the penetrating flux and the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model in the analysis of extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), identifying the limiting factors under specific operating conditions, and saving on costly experiments (especially in deuterium-tritium investigations).

  5. Mazzara M.
    Deriving specifications of dependable systems
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1637-1650

    Although human skills are heavily involved in the Requirements Engineering process, in particular in requirements elicitation, analysis and specification, methodology and formalism still play a determining role in providing clarity and enabling analysis. In this paper, we propose a method for deriving formal specifications that are applicable to dependable software systems. First, we clarify what a method itself is: computer science has a proliferation of languages and methods, but the difference between the two is not always clear. This is a conceptual contribution. Furthermore, we propose the idea of Layered Fault Tolerant Specification (LFTS). The principle consists in layering specifications in (at least) two different layers: one for normal behaviors and others (if more than one) for abnormal behaviors. Abnormal behaviors are described in terms of an Error Injector (EI), which represents a model of the expected erroneous interference coming from the environment. This structure has been inspired by the notion of an idealized fault tolerant component, but the combination of LFTS and EI using rely-guarantee thinking to describe interference is our second contribution. The overall result is the definition of a method for the specification of systems that do not run in isolation but in the real, physical world. We propose an approach that is pragmatic for its target audience: techniques must scale and be usable by non-experts if they are to make it into an industrial setting. This article makes tentative steps, but recent trends in Software Engineering such as microservices, smart and software-defined buildings, M2M micropayments and DevOps are relevant fields in which to continue the investigation of dependability and rely-guarantee thinking.

  6. Nikolsky I.M.
    Classifier size optimisation in segmentation of three-dimensional point images of wood vegetation
    Computer Research and Modeling, 2025, v. 17, no. 4, pp. 665-675

    The advent of laser scanning technologies has revolutionized forestry. Their use made it possible to switch from studying woodlands using manual measurements to computer analysis of stereo point images called point clouds.

    Automatic calculation of some tree parameters (such as trunk diameter) using a point cloud requires the removal of foliage points. To perform this operation, a preliminary segmentation of the stereo image into the “foliage” and “trunk” classes is required. The solution to this problem often involves the use of machine learning methods.

    One of the most popular classifiers used for segmentation of stereo images of trees is a random forest. This classifier is quite demanding on the amount of memory. At the same time, the size of the machine learning model can be critical if it needs to be sent by wire, which is required, for example, when performing distributed learning. In this paper, the goal is to find a classifier that would be less demanding in terms of memory, but at the same time would have comparable segmentation accuracy. The search is performed among classifiers such as logistic regression, naive Bayes classifier, and decision tree. In addition, a method for segmentation refinement performed by a decision tree using logistic regression is being investigated.

    The experiments were conducted on data from the collection of the University of Heidelberg. The collection contains hand-marked stereo images of trees of various species, both coniferous and deciduous, typical of the forests of Central Europe.

    It has been shown that classification using a decision tree refined by logistic regression can produce a result only slightly inferior in accuracy to that of a random forest, while spending less time and RAM. The difference in balanced accuracy is no more than one percent on all the clouds considered, while the total size and inference time of the decision tree and logistic regression classifiers are an order of magnitude smaller than those of the random forest classifier.
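The memory comparison at the heart of the study can be sketched as follows (a hypothetical synthetic dataset stands in for the per-point geometric features, and the serialized size via pickle is one common proxy for the "sent by wire" cost discussed above):

```python
import pickle

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for per-point features, "trunk" (1) vs "foliage" (0)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

def wire_size(model):
    """Bytes needed to serialize the fitted model for transmission."""
    return len(pickle.dumps(model))

print(f"forest: {wire_size(forest)} B, tree: {wire_size(tree)} B")
```

With 100 trees against one, the serialized forest is roughly two orders of magnitude larger, which is the trade-off the article weighs against the small loss in balanced accuracy.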

  7. Lipovko P.O., Loganchuk M.L.
    Component analysis of binary media using acoustic reflectoimpedancemetry
    Computer Research and Modeling, 2015, v. 7, no. 2, pp. 301-313

    A computer model of the component analysis of binary media, based on the application of a new method, acoustic reflecto-impedancemetry, and implemented in the LabVIEW graphical programming environment, is considered. Prospects for the metrological and instrumental support of experimental applications of the model are discussed.

    Citations: 4 (RSCI).
  8. Matjushev T.V., Dvornikov M.V.
    Analysis of human respiratory reactions under a changed gas environment using a mathematical model
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 281-296

    The aim of the work was to study and develop methods for forecasting the dynamics of human respiratory reactions based on mathematical modeling. To achieve this goal, the following tasks were set and solved: the overall structure and a formalized description of the respiratory reflex system model were developed and justified; the algorithm was built and implemented in software models of the body's gas exchange; computational experiments were carried out and the adequacy of the model was checked against literature data and our own experimental studies.

    In this version of the new comprehensive model, a modified partial model of the physicochemical properties and acid-base balance of blood was included. In developing the model, the formalized description was based on the concept of separating the physiological regulation system into active and passive regulation subsystems. The model was developed in stages. The integrated gas exchange model consisted of the following special models: a basic biophysical model of the gas exchange system; a model of the physicochemical properties and acid-base balance of blood; a model of the passive mechanisms of gas exchange developed on the basis of the mass balance equations of F. Grodins; a chemical regulation model developed on the basis of the multifactor model of D. Gray.

    For the software implementation of the model, the calculations were performed in the MatLab programming environment. The equations were solved using the Runge–Kutta–Fehlberg method. It is assumed that the model will be presented in the form of a computer research program that allows various hypotheses about the mechanism of the observed processes to be implemented. The expected values of the basic indicators of gas exchange under hypercapnia and hypoxia were calculated. The results of the calculations, both qualitatively and quantitatively, agree well enough with the data obtained in studies on test subjects. The adequacy check confirmed that the calculation error lies within the error of medical-biological experiments. The model can be used for theoretical prediction of the dynamics of the respiratory reactions of the human body in a changed atmosphere.

    Views (last year): 5.
  9. Il’ichev V.G., Kulygin V.V., Dashkevich L.V.
    On possible changes in phytocenoses of the Sea of Azov under climate warming
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 981-991

    Basic modern long-term scenarios of the hydrochemical and temperature regimes of the Sea of Azov were considered. New schemes for modeling the mechanisms of algal adaptation to changes in the hydrochemical regime and temperature were proposed. In comparison to traditional ecological-evolutionary schemes, these models have a relatively small dimension and high speed, and allow carrying out various calculations over a long-term perspective (evolutionarily significant times). Based on the ecological-evolutionary model of the lower trophic levels, the impact of these environmental factors on the dynamics and microevolution of algae in the Sea of Azov was estimated. In each scenario, the calculations were made for 100 years, with the final values of the variables and parameters not depending on the choice of the initial values. In the process of such asymptotic computer analysis, it was found that, as a result of climate warming and the temperature adaptation of organisms, the average annual biomass of thermophilic algae (Pyrrophyta and Cyanophyta) naturally increases. However, for a number of diatom algae (Bacillariophyta), even with their temperature adaptation, the average annual biomass may unexpectedly decrease. This phenomenon is probably associated with a toughening of competition between species with close temperature parameters of existence. The influence of variation in the chemical composition of the Don River's flow on the dynamics of nutrients and algae of the Sea of Azov was also investigated. It turned out that the ratio of organic forms of nitrogen and phosphorus in sea waters varies little. This stabilization phenomenon will take place for all highly productive reservoirs with low flow, due to the autochthonous origin of the larger part of organic matter in water bodies of this type.

    Views (last year): 11.
  10. Shleymovich M.P., Dagaeva M.V., Katasev A.S., Lyasheva S.A., Medvedev M.V.
    The analysis of images in control systems of unmanned automobiles on the base of energy features model
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 369-376

    The article shows the relevance of research in the field of creating control systems for unmanned vehicles based on computer vision technologies. Computer vision tools are used to solve a large number of different tasks, including determining the location of the car, detecting obstacles, and finding a suitable parking space. These tasks are resource-intensive and have to be performed in real time. It is therefore important to develop effective models, methods and tools that ensure the required speed and accuracy for use in unmanned vehicle control systems. In this case, the choice of the image representation model is important. In this paper, we consider a model based on the wavelet transform, which makes it possible to form features characterizing the energy estimates of the image points and reflecting their significance in terms of their contribution to the overall image energy. To form the model of energy characteristics, a procedure is performed that takes into account the dependencies between the wavelet coefficients of different levels and applies heuristic adjustment factors to strengthen or weaken the influence of boundary and interior points. On the basis of the proposed model, it is possible to construct descriptions of images and to isolate and analyze their characteristic features, including contours, regions, and singular points. The effectiveness of the proposed approach to image analysis is due to the fact that the objects in question, such as road signs, road markings or car number plates that need to be detected and identified, are characterized by the relevant features. In addition, the use of wavelet transforms makes it possible to perform the same basic operations to solve a whole set of tasks in on-board unmanned vehicle systems, including primary processing, segmentation, description, recognition and compression of images.
Applying such a unified approach will reduce the time needed to perform all the procedures and lower the requirements on the computing resources of the on-board system of an unmanned vehicle.
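The wavelet energy idea can be illustrated with a simplified sketch (one Haar decomposition level only; the inter-level dependencies and heuristic adjustment factors described above are not reproduced): the per-block energy of the detail coefficients is high exactly at boundary points such as the edges of signs or markings, and near zero in flat regions.

```python
import numpy as np

def haar_energy_map(img):
    """Per-block energy of one-level 2-D Haar detail coefficients.
    High values mark boundary points (edges); flat regions give zero."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    lh = (a + b - c - d) / 2  # detail across rows (horizontal edges)
    hl = (a - b + c - d) / 2  # detail across columns (vertical edges)
    hh = (a - b - c + d) / 2  # diagonal detail
    return lh**2 + hl**2 + hh**2

# a vertical edge produces energy only in the blocks straddling it
img = np.zeros((8, 8))
img[:, 1:] = 1.0
energy = haar_energy_map(img)
```

The same transform coefficients can then feed segmentation, feature point selection or compression, which is the reuse across on-board tasks the article argues for.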

    Views (last year): 31. Citations: 1 (RSCI).

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

