Search results for 'data processing':
Articles found: 164
  1. Usanov M.S., Kulberg N.S., Yakovleva T.V., Morozov S.P.
    Determination of CT dose by means of noise analysis
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

    The article deals with the process of creating an effective algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in the field of radiometry and radiography relies on tabulated values of X-ray absorption coefficients, while individual dose factors are not taken into account at all, since many studies lack a Dose Report; instead, an average value is used to simplify the statistics. In this regard, it was decided to develop a method for detecting the number of ionizing quanta by analyzing the noise of CT data. The algorithm is based on a mathematical model of our own design combining Poisson and Gaussian distributions of the logarithmed signal. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficients are known from tabulated values. The data were obtained from several CT devices from different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of X-ray quanta emitted per unit time. Taking into account the noise level and the radii of the cylinders, these data were converted into X-ray absorption values, which were then compared with the tabulated values. As a result, applying the algorithm to CT data of various configurations yielded experimental results consistent with the theoretical analysis and the mathematical model. The results demonstrate good accuracy of the algorithm and of the mathematical apparatus, which indicates the reliability of the obtained data. This mathematical model is already used in our own CT noise-reduction program, where it serves as a method for creating a dynamic noise-reduction threshold. At the moment, the algorithm is being adapted to work with real computed tomography data from patients.
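
    As an illustration of the noise-analysis idea (not the authors' implementation; the function name, region choice and parameters are hypothetical): for Poisson counting statistics, the variance of the log-attenuation signal in a homogeneous region is approximately the reciprocal of the number of detected quanta, so the quanta count can be estimated directly from projection noise.

```python
import numpy as np

def estimate_detected_quanta(log_projection):
    """Estimate the number of detected X-ray quanta from noise statistics.

    For I ~ Poisson(N), the log-attenuation p = -ln(I / I0) has variance
    approximately 1 / N (delta method), hence N ~ 1 / Var(p) in a region
    where the object is homogeneous.
    """
    return 1.0 / np.var(log_projection)

# Synthetic check: N0 quanta emitted, attenuated by a water-like factor exp(-mu_d).
rng = np.random.default_rng(0)
N0, mu_d = 1.0e5, 0.2
counts = rng.poisson(N0 * np.exp(-mu_d), size=10_000)
p = -np.log(counts / N0)                    # log-attenuation used in CT
print(estimate_detected_quanta(p), N0 * np.exp(-mu_d))   # estimate vs. true value
```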

  2. Krasnyakov I.V., Bratsun D.A., Pismen L.M.
    Mathematical modeling of carcinoma growth with a dynamic change in the phenotype of cells
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 879-902

    In this paper, we propose a two-dimensional chemo-mechanical model of the growth of invasive carcinoma in epithelial tissue. Each cell is modeled by an elastic polygon that changes its shape and size under the influence of pressure forces acting from the tissue. The average size and shape of the cells have been calibrated on the basis of experimental data. The model allows us to describe the dynamic deformations in epithelial tissue as a collective evolution of cells interacting through the exchange of mechanical and chemical signals. The general direction of tumor growth is controlled by a pre-established linear gradient of nutrient concentration. Growth and deformation of the tissue occur due to the mechanisms of cell division and intercalation. We assume that the carcinoma has a heterogeneous structure made up of cells of different phenotypes that perform various functions in the tumor. The main parameter that determines the phenotype of a cell is the degree of its adhesion to the adjacent cells. Three main phenotypes of cancer cells are distinguished: the epithelial (E) phenotype is represented by internal tumor cells, the mesenchymal (M) phenotype is represented by single cells, and the intermediate phenotype is represented by the frontal tumor cells. We also assume that the phenotype of each cell can, under certain conditions, change dynamically due to epithelial-mesenchymal (EM) and inverse (ME) transitions. For normal cells, we define the main E-phenotype, which is represented by ordinary cells with strong adhesion to each other. In addition, the normal cells that are adjacent to the tumor undergo a forced EM-transition and form an M-phenotype of healthy cells. Numerical simulations have shown that, depending on the values of the control parameters as well as on the combination of possible phenotypes of healthy and cancer cells, the evolution of the tumor can result in a variety of cancer structures reflecting the self-organization of tumor cells of different phenotypes. We compare the structures obtained numerically with the morphological structures revealed in clinical studies of breast carcinoma: trabecular, solid, tubular, alveolar and discrete tumor structures with ameboid migration. A possible scenario of morphogenesis for each structure is discussed. We also describe the metastatic process during which a single cancer cell of ameboid phenotype moves by intercalation through healthy epithelial tissue, then divides and undergoes an ME transition with the appearance of a secondary tumor.
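
    A minimal sketch of the kind of cell mechanics described above (a generic vertex-model-style energy, not the authors' exact formulation; the tension values and function names are assumptions for illustration): the phenotype enters through the strength of the edge-tension term of an elastic polygonal cell.

```python
import numpy as np

# Illustrative only: a vertex-model-style energy for a polygonal cell in which the
# phenotype enters through the edge tension; stronger cell-cell adhesion is
# represented here by a lower effective tension.
LINE_TENSION = {"E": 0.1, "intermediate": 0.5, "M": 1.0}   # assumed values

def area_and_perimeter(vertices):
    """Shoelace area and perimeter of a polygon given as an (n, 2) array."""
    x, y = vertices[:, 0], vertices[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sum(np.linalg.norm(vertices - np.roll(vertices, -1, axis=0), axis=1))
    return area, perimeter

def cell_energy(vertices, phenotype, target_area=1.0, k_area=1.0):
    """Elastic energy: area elasticity plus phenotype-dependent edge tension."""
    area, perimeter = area_and_perimeter(vertices)
    return k_area * (area - target_area) ** 2 + LINE_TENSION[phenotype] * perimeter

hexagon = np.array([[np.cos(a), np.sin(a)]
                    for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
print(cell_energy(hexagon, "E"), cell_energy(hexagon, "M"))
```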

  3. Orlova E.V.
    Model for operational optimal control of financial resources distribution in a company
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 343-358

    A critical analysis of existing approaches, methods and models for solving the problem of operational management of financial resources is carried out in the article. A number of significant shortcomings of the existing models were identified that limit the scope of their effective use: the models are static, the probabilistic nature of financial flows is not taken into account, and the daily amounts of receivables and payables, which significantly affect the solvency and liquidity of the company, are not identified. This necessitates the development of a new model that reflects the essential properties of the planned financial flows system: stochasticity, dynamism and non-stationarity.

    A model for financial flows distribution has been developed. It is based on the principles of optimal dynamic control and provides planning of financial resources that ensures an adequate level of liquidity and solvency of the company while accounting for the uncertainty of the initial data. An algorithm for designing the target cash balance, based on the principle of ensuring the company's financial stability under changing financial constraints, is proposed.

    A characteristic feature of the proposed model is the representation of the cash distribution process as a discrete dynamic process for which a plan of financial resources allocation is determined that ensures the extremum of an optimality criterion. Such a plan is designed by coordinating payments (cash expenses) with cash receipts. This approach makes it possible to synthesize different plans that differ in combinations of financial outflows and then to select the best one according to a given criterion. The minimum total costs associated with the payment of fines for untimely financing of expenses were taken as the optimality criterion. The constraints of the model are the requirement to ensure the minimum allowable cash balances for the subperiods of the planning period, as well as the obligation to make payments during the planning period, taking into account their maturity. The suggested model makes it possible to solve, with a high degree of efficiency, the problem of distributing financial resources under uncertainty in the timing and amounts of receipts and of coordinating cash inflows and outflows. The practical significance of the research lies in the application of the developed model, which allows improving the quality of financial planning and increasing the management efficiency and operational efficiency of a company.
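
    A toy sketch of the scheduling idea described above (a simple greedy heuristic, not the paper's optimal dynamic control model; the data and field names are illustrative): payments with due dates and daily fine rates are deferred only when paying them would violate the minimum allowable cash balance, and the accumulated fines serve as the criterion being minimized.

```python
def schedule_payments(inflows, payments, min_balance):
    """inflows: cash receipts per day; payments: dicts with 'amount',
    'due' (day index) and 'fine' (penalty per day of delay)."""
    balance, total_fine, pending = 0.0, 0.0, []
    for day, cash_in in enumerate(inflows):
        balance += cash_in
        pending += [p for p in payments if p["due"] == day]
        # Pay the obligations that are most expensive to delay first.
        for p in sorted(pending, key=lambda p: p["fine"], reverse=True):
            if balance - p["amount"] >= min_balance:
                balance -= p["amount"]
                pending.remove(p)
        total_fine += sum(p["fine"] for p in pending)   # fines for deferred payments
    return balance, total_fine

inflows = [100, 0, 150, 50]
payments = [{"amount": 120, "due": 0, "fine": 5}, {"amount": 60, "due": 2, "fine": 2}]
print(schedule_payments(inflows, payments, min_balance=20))
```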

  4. Chernov I.A.
    High-throughput identification of hydride phase-change kinetics models
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

    Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to numerical modeling of the formation and decomposition of metal hydrides and to solving inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive ones, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type. A rather general approach to the grid solution of such problems is described. The latter are solved relatively simply, but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them if necessary and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large number of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related tasks, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the time layer based on the previous layer or the entire history, calculating the observed value and the independent variable from the task variables, and comparing the curve with the reference one. Special algorithms can be used for solving quite general parabolic-type boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalization. This is the middle level of abstraction. At the high level, it is enough to choose a ready, tested model for a particular material and modify it in relation to the experimental conditions.
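
    As an illustration of the grid treatment of the diffusive (parabolic) class of models (not the HIMICOS interface, which is not reproduced here; the scheme, boundary conditions and names are assumptions), one backward-Euler step for a one-dimensional diffusion equation with a prescribed hydrogen flux at the surface might look as follows.

```python
import numpy as np

def implicit_diffusion_step(c, dt, dx, D, flux_in):
    """One backward-Euler step for c_t = D * c_xx on a uniform grid.

    Left boundary: prescribed inward flux (e.g. hydrogen absorption at the
    surface); right boundary: zero flux. Solves the resulting linear system.
    """
    n = len(c)
    r = D * dt / dx**2
    A = np.zeros((n, n))
    b = c.copy()
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    # Flux boundary conditions discretized to first order.
    A[0, 0], A[0, 1] = 1 + r, -r
    b[0] += dt * flux_in / dx
    A[-1, -1], A[-1, -2] = 1 + r, -r
    return np.linalg.solve(A, b)

c = np.zeros(50)                      # initial hydrogen concentration profile
for _ in range(100):
    c = implicit_diffusion_step(c, dt=1e-3, dx=0.02, D=1.0, flux_in=1.0)
print(c[:5])
```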

  5. Gubaydullin I.M., Yazovtseva O.S.
    Investigation of the averaged model of coked catalyst oxidative regeneration
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 149-161

    The article is devoted to the construction and investigation of an averaged mathematical model of the oxidative regeneration of an aluminum-cobalt-molybdenum hydrocracking catalyst. Oxidative regeneration is an effective means of restoring the activity of the catalyst when its granules become coated with coke deposits.

    The mathematical model of this process is a nonlinear system of ordinary differential equations, which includes kinetic equations for the reagents' concentrations and equations for the change in the temperature of the catalyst granule and of the reaction mixture due to exothermic reactions and heat transfer between the gas and the catalyst layer. Owing to the heterogeneity of the oxidative regeneration process, some of the equations differ from standard kinetic ones and are based on empirical data. The article discusses the scheme of chemical interaction during regeneration, on the basis of which the material balance equations are compiled. It reflects the direct interaction of coke and oxygen, taking into account the degree of coverage of the coke granule with carbon-hydrogen and carbon-oxygen complexes, the release of carbon monoxide and carbon dioxide during combustion, as well as the release of oxygen and hydrogen inside the catalyst granule. The change in the radius and, consequently, in the surface area of the coke granules is taken into account. The adequacy of the developed averaged model is confirmed by an analysis of the dynamics of the concentrations of the substances and of the temperature.

    The article presents a numerical experiment for the mathematical model of the oxidative regeneration of an aluminum-cobalt-molybdenum hydrocracking catalyst. The experiment was carried out using the Kutta–Merson method. This method belongs to the Runge–Kutta family but is intended for solving stiff systems of ordinary differential equations. The results of the computational experiment are visualized.
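
    A minimal sketch of one adaptive Kutta–Merson step as the scheme is usually formulated (the tolerance, step-size heuristic and test equation are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def kutta_merson_step(f, t, y, h, tol=1e-6):
    """One adaptive step of the Kutta-Merson scheme for y' = f(t, y).

    The embedded estimate (2*k1 - 9*k3 + 8*k4 - k5)/30 controls acceptance;
    the step is halved until the estimate falls below the tolerance.
    """
    while True:
        k1 = h * f(t, y)
        k2 = h * f(t + h / 3, y + k1 / 3)
        k3 = h * f(t + h / 3, y + k1 / 6 + k2 / 6)
        k4 = h * f(t + h / 2, y + k1 / 8 + 3 * k3 / 8)
        k5 = h * f(t + h, y + k1 / 2 - 3 * k3 / 2 + 2 * k4)
        y_new = y + (k1 + 4 * k4 + k5) / 6
        err = np.max(np.abs(2 * k1 - 9 * k3 + 8 * k4 - k5)) / 30
        if err <= tol:
            return y_new, t + h, h * min(2.0, 0.9 * (tol / max(err, 1e-30)) ** 0.2)
        h *= 0.5   # reject the step and retry with a smaller one

# Illustrative use on a rapidly decaying kinetic equation y' = -50 y.
f = lambda t, y: -50.0 * y
t, y, h = 0.0, np.array([1.0]), 0.01
for _ in range(500):
    if t >= 1.0:
        break
    y, t, h = kutta_merson_step(f, t, y, min(h, 1.0 - t))
print(t, y)   # y has decayed to essentially zero (exact value exp(-50) ~ 2e-22)
```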

    The paper presents the dynamics of the concentrations of the substances involved in the oxidative regeneration process. A conclusion about the adequacy of the constructed mathematical model is drawn on the basis of the correspondence of the obtained results to physicochemical laws. The heating of the catalyst granule and the release of carbon monoxide, together with the change in the granule radius, are analyzed for various degrees of initial coking, and a description of the results is given.

    In conclusion, the main results are summarized and examples of problems that can be solved using the developed mathematical model are given.

  6. Malikov Z.M., Nazarov F.K.
    Study of turbulence models for calculating a strongly swirling flow in an abruptly expanding channel
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 793-805

    In this paper, fundamentally different turbulence models are compared for calculating a strongly swirling flow in an abruptly expanding pipe. This problem is of great importance not only in practice but also in theoretical terms, because in such a flow a very complex anisotropic turbulence with recirculation zones arises, and the study of the ongoing processes allows us to answer many questions about turbulence. The flow under consideration has been well studied experimentally and is therefore a very complex and interesting test problem for turbulence models. The paper compares the numerical results of the one-parameter vt-92 model, the SSG/LRR-RSM-w2012 Reynolds stress method and the new two-fluid model. These models differ greatly from each other: the Boussinesq hypothesis is used in the one-parameter vt-92 model, in the SSG/LRR-RSM-w2012 model a separate equation is written for each stress, and the new two-fluid model is based on a completely different approach to turbulence. A feature of this approach is that it allows one to obtain a closed system of equations. The models are compared not only by the agreement of their results with experimental data but also by the computational resources expended on their numerical implementation. Therefore, in this work, the same technique was used for all models to numerically calculate the turbulent swirling flow at the Reynolds number $Re=3\cdot 10^4$ and the swirl parameter $S_w=0.6$. The paper shows that the new two-fluid model is effective for the study of turbulent flows, because it has good accuracy in describing complex anisotropic turbulent flows and is simple enough for numerical implementation.

  7. Laser damage to transparent solids is a major factor limiting the output power of laser systems. For laser rangefinders, the most likely cause of destruction of elements of the optical system (lenses, mirrors), which in practice are usually somewhat dusty, is not optical breakdown resulting from an avalanche, but rather a thermal effect on a speck of dust deposited on an element of the optical system (EOS) that leads to its ignition. It is the ignition of a speck of dust that initiates the process of EOS damage.

    The corresponding model of this process leading to the ignition of a speck of dust takes into account the nonlinear Stefan–Boltzmann law of thermal radiation and the indefinitely long thermal effect of periodic radiation on the EOS and the speck of dust. This model is described by a nonlinear system of differential equations for two functions: the EOS temperature and the dust particle temperature. It is proved that, due to the accumulating effect of the periodic thermal action, the dust speck reaches its ignition temperature under almost any a priori possible changes, during this process, of the thermophysical parameters of the EOS and the dust speck, as well as of the heat-exchange coefficients between them and the surrounding air. Averaging these parameters over the variables related to both the volume and the surfaces of the dust speck and the EOS is correct under the natural constraints specified in the paper. The entire practically significant range of thermophysical parameters is covered thanks to the use of dimensionless units in the problem (including in the numerical results).
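
    An illustrative form of such a two-temperature system (not the paper's exact dimensionless equations; all coefficients, the pulse shape and the integration scheme are assumptions) shows how the periodic heating, the Stefan–Boltzmann radiation term and the heat exchange between the dust speck, the EOS and the air enter the model.

```python
import numpy as np

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def pulse_intensity(t, peak=1e7, period=1e-3, duty=0.1):
    """Periodic rectangular laser pulses (illustrative pulse shape)."""
    return peak if (t % period) < duty * period else 0.0

def rhs(t, T, p):
    """Illustrative two-temperature system: dust speck (T[0]) and EOS (T[1])."""
    Td, Te = T
    I = pulse_intensity(t)
    dTd = (p["a_d"] * I - p["eps"] * SIGMA * (Td**4 - p["T_air"]**4)
           - p["h_da"] * (Td - p["T_air"]) - p["h_de"] * (Td - Te)) / p["C_d"]
    dTe = (p["a_e"] * I - p["eps"] * SIGMA * (Te**4 - p["T_air"]**4)
           - p["h_ea"] * (Te - p["T_air"]) + p["h_de"] * (Td - Te)) / p["C_e"]
    return np.array([dTd, dTe])

p = dict(a_d=0.3, a_e=0.01, eps=0.9, T_air=300.0,
         h_da=50.0, h_ea=20.0, h_de=100.0, C_d=1e3, C_e=1e5)   # hypothetical values
T, t, dt = np.array([300.0, 300.0]), 0.0, 1e-6
for _ in range(20_000):                      # simple explicit Euler integration
    T = T + dt * rhs(t, T, p)
    t += dt
print(T)   # accumulated heating of the dust speck and the EOS after 20 ms
```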

    A thorough mathematical study of the corresponding nonlinear system of differential equations made it possible, for the first time in the general case of the thermophysical parameters and the characteristics of the thermal effect of periodic laser radiation, to find a formula for the permissible radiation intensity that does not lead to the destruction of the EOS as a result of the ignition of a speck of dust deposited on it. When specialized to the data of the Grasse laser ranging station (southern France), the theoretical value of the permissible intensity found in the general case almost coincides with the value observed experimentally at the observatory.

    In parallel with the solution of the main problem, we derive a formula for the power absorption coefficient of laser radiation by an EOS expressed in terms of four dimensionless parameters: the relative intensity of laser radiation, the relative illumination of the EOS, the relative heat transfer coefficient from the EOS to the surrounding air, and the relative steady-state temperature of the EOS.

  8. Petrov A.P., Podlipskaia O.G., Pronchev G.B.
    Modeling the dynamics of public attention to extended processes on the example of the COVID-19 pandemic
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1131-1141

    The dynamics of public attention to the COVID-19 epidemic is studied. The level of public attention is described by the daily number of Google search requests made by users from a given country. In the empirical part of the work, data on the number of requests and the number of infected cases are considered for a number of countries. It is shown that in all cases the maximum of public attention occurs earlier than the maximum daily number of newly infected individuals. Thus, for a certain period of time, the growth of the epidemic proceeds in parallel with the decline in public attention to it. It is also shown that the decline in the number of requests is described by an exponential function of time. In order to describe the revealed empirical pattern, a mathematical model is proposed, which is a modification of the model of the decline in attention after a one-time political event. The model develops the approach that considers decision-making by an individual as a member of the society in which the information process takes place. This approach assumes that an individual's decision whether or not to make a request about COVID on a given day is based on two factors. One of them is an attitude that reflects the individual's long-term interest in the topic and accumulates the individual's previous experience, cultural preferences, and social and economic status. The second is the dynamic factor of public attention to the epidemic, which changes during the process under consideration under the influence of informational stimuli. For the subject under consideration, the informational stimuli are related to the epidemic dynamics. The behavioral hypothesis is that if on some day the sum of the attitude and the dynamic factor exceeds a certain threshold value, then on that day the individual in question makes a search request on the topic of COVID. The general logic is that the higher the rate of infection growth, the stronger the informational stimulus and the more slowly public attention to the pandemic declines. Thus, the constructed model makes it possible to relate the rate of exponential decrease in the number of requests to the rate of growth in the number of cases. The regularity found with the help of the model was tested on empirical data. The Student's statistic was found to be 4.56, which allows us to reject the hypothesis of the absence of a correlation at a significance level of 0.01.
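
    A toy simulation of the threshold mechanism described above (the distributions, threshold, decay rate and epidemic curve are all assumed for illustration, not the authors' calibrated model) reproduces the qualitative effect that attention peaks before the daily case count does.

```python
import numpy as np

rng = np.random.default_rng(1)

# An individual searches on day t if attitude + attention factor q exceeds H.
n_ind = 100_000
attitude = rng.normal(0.0, 1.0, n_ind)          # long-term individual interest
H = 2.0                                          # decision threshold
days = 120
new_cases = 1000 * np.exp(-((np.arange(days) - 40) / 20.0) ** 2)   # toy epidemic curve

q, requests = 0.0, []
for t in range(days):
    stimulus = 1e-4 * (new_cases[t] - new_cases[t - 1] if t > 0 else 0.0)
    q = 0.95 * q + max(stimulus, 0.0)            # decay plus informational stimulus
    requests.append(np.sum(attitude + q > H))    # daily number of search requests

peak_requests = int(np.argmax(requests))
peak_cases = int(np.argmax(new_cases))
print(peak_requests, peak_cases)                 # attention peaks before the case peak
```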

  9. The creation of a virtual laboratory stand that allows one to obtain reliable characteristics that can be verified against real measurements, taking into account errors and noise (which is the main feature distinguishing a computational experiment from model studies), is one of the main problems of this work. It considers the following task: there is a rectangular waveguide operating in the single-mode regime, in the wide wall of which a technological hole is cut, through which a sample for research is placed into the cavity of the transmission line. The reconstruction algorithm is as follows: the network parameters (S11 and/or S21) of the transmission line with the sample are measured in the laboratory. In the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started; the mask of this process is the experimental data, and the stopping criterion is the proximity estimate (residual). It is important to note that the developed computer model, despite its apparent simplicity, is initially ill-conditioned. The Comsol modeling environment is used to set up the computational experiment. The results of the computational experiment coincided with the results of laboratory studies to a good degree of accuracy. Thus, experimental verification was carried out for several significant components, both of the computer model in particular and of the algorithm for reconstructing the target parameters in general. It is important to note that the computer model developed and described in this work may be effectively used in a computational experiment to reconstruct the full dielectric parameters of a target of complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity. The resulting model is, by definition, incomplete, but its completeness is the highest among the considered options, while at the same time the resulting model is well conditioned. Particular attention in this work is paid to the modeling of a coaxial-to-waveguide transition; it is shown that the use of a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
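
    A sketch of the sweep-and-compare loop described above (a crude closed-form slab-transmission formula stands in for the full-wave Comsol model; the waveguide dimensions, sample thickness and noise level are illustrative assumptions): the unknown permittivity is swept, the simulated |S21| is compared with the "measured" one, and the value minimizing the residual is taken as the reconstructed parameter.

```python
import numpy as np

def forward_s21(eps_r, freq, d=5e-3, a=22.86e-3):
    """Hypothetical surrogate forward model: |S21| of a lossless dielectric slab
    of thickness d filling a WR-90-like guide of width a (dominant TE10 mode)."""
    c = 299_792_458.0
    k0 = 2 * np.pi * freq / c
    beta0 = np.sqrt((k0**2 - (np.pi / a) ** 2).astype(complex))          # empty guide
    beta1 = np.sqrt((eps_r * k0**2 - (np.pi / a) ** 2).astype(complex))  # filled section
    rho = (beta0 - beta1) / (beta0 + beta1)
    t = np.exp(-1j * beta1 * d)
    return np.abs((1 - rho**2) * t / (1 - rho**2 * t**2))

freq = np.linspace(8.2e9, 12.4e9, 201)
measured = forward_s21(2.54, freq) + 0.005 * np.random.default_rng(0).normal(size=freq.size)

eps_grid = np.linspace(1.5, 4.0, 251)
residual = [np.linalg.norm(forward_s21(e, freq) - measured) for e in eps_grid]
print(eps_grid[int(np.argmin(residual))])   # recovered permittivity, close to 2.54
```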

  10. Salenek I.A., Seliverstov Y.A., Seliverstov S.A., Sofronova E.A.
    Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146

    This work presents a new approach for constructing high-precision routes based on data from transport detectors inside the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, such as the lack of interaction with the network in the process of building routes. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are incoming lanes and the environment is the road network. By performing actions to launch vehicles, agents receive a reward for matching the data from transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.

    Since the rlRouter is trained inside the SUMO simulation, it can reconstruct routes better by taking into account the interaction of vehicles with each other and with the network infrastructure. We modeled diverse traffic situations at three different junctions in order to compare the performance of SUMO's routers with the rlRouter. We used the Mean Absolute Error (MAE) as the measure of deviation from both the cumulative detector data and the route data. The rlRouter achieved the highest agreement with the data from the detectors. We also found that by maximizing the reward for matching the detectors, the resulting routes also get closer to the real ones. Although the routes recovered using the rlRouter are superior to the routes obtained using the SUMO tools, they do not fully correspond to the real ones, due to the natural limitations of induction-loop detectors. To achieve more plausible routes, it is necessary to equip junctions with other types of transport counters, for example, camera detectors.
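
    A minimal sketch of the kind of architecture named above (the sizes, reward form and all names are illustrative assumptions, not the authors' code): a single Q-network with an LSTM backbone shared by all lane agents, and a reward proportional to the negative deviation from detector counts.

```python
import torch
import torch.nn as nn

class SharedLSTMQNet(nn.Module):
    """Q-network shared by all agents (incoming lanes): an LSTM over the recent
    observation history followed by a linear head over the discrete actions."""
    def __init__(self, obs_dim=16, hidden=64, n_actions=2):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state=None):
        out, state = self.lstm(obs_seq, state)     # obs_seq: (batch, time, obs_dim)
        return self.head(out[:, -1]), state        # Q-values for the last time step

def detector_reward(simulated_counts, detector_counts):
    """Reward: negative absolute deviation between simulated and measured counts."""
    return -float(torch.sum(torch.abs(simulated_counts - detector_counts)))

qnet = SharedLSTMQNet()
obs = torch.zeros(4, 10, 16)                       # 4 agents, 10-step history
q_values, _ = qnet(obs)
print(q_values.shape)                              # torch.Size([4, 2])
```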
