Search results for 'measure':
Articles found: 126
  1. Aristov V.V., Stroganov A.V., Yastrebov A.D.
    Application of the kinetic type model for study of a spatial spread of COVID-19
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 611-627

    A simple model based on a kinetic-type equation is proposed to describe the spread of a virus in space through the migration of virus carriers from a certain center. The consideration is carried out on the example of three countries for which such a one-dimensional model is applicable: Russia, Italy, and Chile. The geographical location of these countries and their elongation in the direction away from the centers of infection (Moscow, Milan and Lombardy as a whole, and Santiago, respectively) make it possible to use such an approximation. The aim is to determine the dynamic density of the infected in time and space. The model has two parameters. The first parameter is the average spreading rate associated with the transfer of infected people by transport vehicles. The second parameter is the frequency at which the number of infected decreases as they move through the country, which is associated with passengers reaching their destination as well as with quarantine measures. The parameters are determined from the actual data known for the first days of the spatial spread of the epidemic. An analytical solution is constructed; simple numerical methods are also used to obtain a series of calculations. The model takes the geographical spread of the disease into account but not local contact infection. Therefore, the calculated values coincide with the actual data in the initial period of infection; later, the real data exceed the model values. Nonetheless, the model calculations allow some predictions to be made. In addition to the speed of infection, a similar “speed of recovery” can be introduced. When such a speed is determined for the majority of the country's population, a conclusion is drawn about the beginning of an overall recovery, which agrees with real data.
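    The abstract does not give the governing equation explicitly; a minimal sketch, assuming a one-dimensional transport equation du/dt + c du/dx = -q u for the density u(x, t) of traveling infected carriers (c is the average spreading rate, q is the drop-off frequency), integrated with a first-order upwind scheme, might look as follows (all parameter values are illustrative):

```python
import numpy as np

# Illustrative sketch of a 1D kinetic-type transport model: carriers leave the
# center with average speed c and drop out (destination reached / quarantine)
# with frequency q.
L, T = 3000.0, 30.0        # domain length (km) and time horizon (days), illustrative
nx, nt = 300, 3000
dx, dt = L / nx, T / nt
c, q = 100.0, 0.2          # km/day and 1/day, illustrative parameters

x = np.linspace(0.0, L, nx)
u = np.zeros(nx)
u[0] = 1.0                 # normalized density of carriers at the infection center

for _ in range(nt):
    # first-order upwind step for du/dt + c du/dx = -q u
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1]) - q * dt * u[1:]
    u[0] = 1.0             # the center keeps supplying carriers

print("density of infected at 1000 km after 30 days:", u[np.searchsorted(x, 1000.0)])
```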

  2. Lukianchenko P.P., Danilov A.M., Bugaev A.S., Gorbunov E.I., Pashkov R.A., Ilyina P.G., Gadzhimirzayev Sh.M.
    Approach to Estimating the Dynamics of the Industry Consolidation Level
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 129-140

    In this article we propose a new approach to analyzing econometric industry parameters in order to assess the industry consolidation level. The research is based on a simple automatic control model of the industry. The state of the industry is measured by econometric parameters obtained quarterly from each company in the industry and provided by the tax control regulator. The proposed approach does not track the economics of each individual company but instead analyzes the parameters of the set of all companies as a whole. The econometric parameters obtained quarterly from each company are Income, Number of Employees, Taxes, and Income from Software Licenses. The ABC analysis method was extended to ABCD analysis (D denotes companies with zero impact on industry metrics) and used to make the results obtained for different indicators comparable. Pareto charts were formed for the set of econometric indicators.

    To estimate industry monopolization, the Herfindahl – Hirschman index was calculated for the most sensitive company metrics. Using the HHI approach, it was shown that COVID-19 did not lead to changes in the monopolization of the Russian IT industry.
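    For reference, a minimal sketch of the Herfindahl – Hirschman index computed from per-company values of a metric (e.g. quarterly income); the data below are purely illustrative:

```python
def hhi(values):
    """Herfindahl-Hirschman index from per-company metric values.
    Shares are expressed in percent, so HHI ranges from near 0 to 10000."""
    total = sum(values)
    shares = [100.0 * v / total for v in values]
    return sum(s * s for s in shares)

# Illustrative quarterly incomes of companies in an industry
incomes = [120.0, 80.0, 40.0, 30.0, 20.0, 10.0]
print(round(hhi(incomes)))  # values below ~1500 are conventionally read as an unconcentrated market
```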

    As the most visually obvious approach to industry visualization, scatter diagrams combined with the Pareto chart colors were proposed. The effect of the accreditation procedure is clearly observed in a scatter diagram with red/black dots for accredited and non-accredited companies, respectively.

    The last reported result is the proposal to use Licenses End-to-End Product Identification as a market structure control instrument. It provides a basis for avoiding multiple accounting of license reselling within the software distribution chain.

    The results of this research could form the basis for future IT industry analysis and agent-based simulation.

  3. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming a “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division-, and CRC-based functions.
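    As an illustration of the idea only (not the authors' exact hash functions), a minimal sketch of an XOR-based index that folds the higher address bits onto the bank/set index bits:

```python
def xor_index(addr: int, bits: int, line_bits: int = 6) -> int:
    """XOR-fold the physical address above the line offset into a 'bits'-wide
    bank/set index, so that strided access patterns are spread more uniformly
    than with plain modulo indexing."""
    x = addr >> line_bits          # drop the offset within a 64-byte cache line
    index = 0
    while x:
        index ^= x & ((1 << bits) - 1)
        x >>= bits
    return index

# Example: a 4 KiB-strided access pattern maps to a single set with plain
# indexing but to many different sets with the XOR-based index.
plain = {((a >> 6) & 0x3F) for a in range(0, 1 << 20, 4096)}
hashed = {xor_index(a, 6) for a in range(0, 1 << 20, 4096)}
print(len(plain), len(hashed))     # 1 distinct set vs. many
```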

    The second optimisation is aimed at reducing replication at different cache levels by means of automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly a 1% increase in average performance.
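    The BDI*-HL algorithm itself is not specified in the abstract; a minimal sketch of the underlying base-delta idea, which checks whether a cache line of 8-byte words can be stored as one base value plus narrow signed deltas, under illustrative assumptions (64-byte line, 8-byte words):

```python
def bdi_compressible(words, delta_bytes):
    """Check whether all 8-byte words of a cache line fit as
    base + signed delta of 'delta_bytes' bytes (base-delta idea)."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)
    return all(-limit <= w - base < limit for w in words)

# A line of nearby pointers compresses well with 2-byte deltas,
# while a line with widely scattered values does not.
pointers = [0x7FFF_0000 + 16 * i for i in range(8)]            # 64-byte line, 8 words
print(bdi_compressible(pointers, 2))                            # True
print(bdi_compressible([0, 1 << 40, 3, 5, 7, 9, 11, 13], 2))    # False
```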

    All three optimisations can be combined and demonstrated a performance gain of 7.7%, 16% and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  4. Popov D.I.
    Calibration of an elastostatic manipulator model using AI-based design of experiment
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1535-1553

    This paper demonstrates the advantages of using artificial intelligence algorithms within the design of experiment theory, which makes it possible to improve the accuracy of parameter identification for an elastostatic robot model. Design of experiment for a robot consists in finding the optimal configuration-external force pairs for the identification algorithms and can be described by several main stages. At the first stage, an elastostatic model of the robot is created, taking into account all possible mechanical compliances. At the second stage, the objective function is selected; it can be represented either by classical optimality criteria or by criteria defined by the intended application of the robot. At the third stage, the optimal measurement configurations are found using numerical optimization. At the fourth stage, the position of the robot body is measured in the obtained configurations under the influence of an external force. At the last, fifth stage, the elastostatic parameters of the manipulator are identified based on the measured data.

    The objective function required for finding the optimal configurations for industrial robot calibration is constrained by mechanical limits, both on the possible angles of rotation of the robot’s joints and on the possible applied forces. Solving this multidimensional and constrained problem is not simple; therefore, it is proposed to use approaches based on artificial intelligence. To find the minimum of the objective function, the following methods, sometimes also called heuristics, were used: genetic algorithms, particle swarm optimization, the simulated annealing algorithm, etc. The obtained results were analyzed in terms of the time required to obtain the configurations, the optimal value achieved, and the final accuracy after applying the calibration. The comparison showed the advantages of the considered artificial-intelligence-based optimization techniques over classical methods of finding the optimal value. The results of this work allow us to reduce the time spent on calibration and to increase the positioning accuracy of the robot’s end-effector after calibration for contact operations with high loads, such as machining and incremental forming.
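    As a hedged sketch of the third stage only (the paper's actual objective function and constraints are not given in the abstract), one could search for a measurement configuration by minimizing a generic placeholder criterion over the joint angles with simulated annealing, for example via scipy.optimize.dual_annealing; the objective below is a toy stand-in, not the authors' criterion:

```python
import numpy as np
from scipy.optimize import dual_annealing

def objective(q):
    """Placeholder calibration criterion: negative 'information' of a toy
    Jacobian built from the joint angles (stands in for a real elastostatic
    observability criterion)."""
    J = np.array([[np.cos(q[0]), np.sin(q[1]), q[2]],
                  [np.sin(q[0]) * q[1], np.cos(q[2]), 1.0],
                  [q[0] * q[2], np.sin(q[1] + q[2]), np.cos(q[0])]])
    # D-optimality-style criterion: maximize det(J^T J), i.e. minimize its negative
    return -np.linalg.det(J.T @ J)

# Joint limits play the role of the robot's mechanical constraints (illustrative)
bounds = [(-np.pi, np.pi)] * 3
result = dual_annealing(objective, bounds, seed=0, maxiter=200)
print("best configuration:", result.x, "criterion value:", -result.fun)
```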

  5. Chernov I.A.
    High-throughput identification of hydride phase-change kinetics models
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

    Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup, and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving the inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive ones, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type. A rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them if necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large number of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related problems, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the time layer based on the previous layer or the entire history, calculating the observed value and the independent variable from the problem variables, and comparing the curve with the reference one. Special algorithms can be used for solving quite general parabolic-type boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations. This is the middle level of abstraction. At the high level, it is enough to choose a ready-made, tested model for a particular material and adapt it to the experimental conditions.
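    The HIMICOS library itself is not reproduced here; as a minimal illustration of the inverse problem in the fast-diffusion case, one can fit the rate constant k of the simplest kinetic model d(alpha)/dt = k(1 - alpha) to a measured reacted-fraction curve (the data below are synthetic):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

t_meas = np.linspace(0.0, 60.0, 31)   # minutes; synthetic "experiment" with noise
alpha_meas = (1.0 - np.exp(-0.08 * t_meas)
              + 0.01 * np.random.default_rng(1).normal(size=t_meas.size))

def model_curve(k):
    """Integrate d(alpha)/dt = k (1 - alpha) on the measurement grid."""
    sol = solve_ivp(lambda t, a: k * (1.0 - a), (t_meas[0], t_meas[-1]),
                    [0.0], t_eval=t_meas)
    return sol.y[0]

def misfit(k):
    """Distance between model and measured curves (L2 metric)."""
    return np.sum((model_curve(k) - alpha_meas) ** 2)

best = minimize_scalar(misfit, bounds=(1e-4, 1.0), method="bounded")
print("identified rate constant k ~", round(best.x, 3))   # close to the true 0.08
```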

  6. Laser damage to transparent solids is a major factor limiting the output power of laser systems. For laser rangefinders, the most likely cause of destruction of elements of the optical system (lenses, mirrors), which in practice are usually somewhat dusty, is not optical breakdown resulting from an avalanche but a thermal effect on a dust speck deposited on an element of the optical system (EOS) that leads to its ignition. It is the ignition of a speck of dust that initiates the process of EOS damage.

    The corresponding model of this process leading to the ignition of a dust speck takes into account the nonlinear Stefan – Boltzmann law of thermal radiation and the indefinitely continued thermal effect of periodic radiation on the EOS and the dust speck. This model is described by a nonlinear system of differential equations for two functions: the EOS temperature and the dust speck temperature. It is proved that, due to the accumulating effect of periodic thermal action, the dust speck reaches its ignition temperature under almost any a priori possible variation, during this process, of the thermophysical parameters of the EOS and the dust speck, as well as of the heat exchange coefficients between them and the surrounding air. Averaging these parameters over the variables related to both the volume and the surfaces of the dust speck and the EOS is correct under the natural constraints specified in the paper. The entire practically significant range of thermophysical parameters is covered thanks to the use of dimensionless variables in the problem (including in the numerical results).
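    The paper's equations are not reproduced in the abstract; a hedged sketch of a system of this type, with two lumped temperatures, periodic laser heating, radiative losses following the Stefan-Boltzmann law, and linear heat exchange (all coefficients are illustrative, not the paper's values), is:

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_AIR = 300.0       # ambient air temperature, K
T_IGNITE = 700.0    # illustrative ignition temperature of the dust speck, K

def rhs(t, y, intensity, period):
    """T_e: EOS temperature, T_d: dust speck temperature (lumped, illustrative)."""
    T_e, T_d = y
    laser = intensity if (t % period) < 0.5 * period else 0.0   # periodic irradiation
    # all coefficients below are illustrative lumped values (absorption / heat capacity)
    dT_e = (0.02 * laser - 0.5 * (T_e - T_AIR)
            - 1e-9 * SIGMA * (T_e**4 - T_AIR**4) + 0.3 * (T_d - T_e))
    dT_d = (2.0 * laser - 1.0 * (T_d - T_AIR)
            - 1e-8 * SIGMA * (T_d**4 - T_AIR**4) + 5.0 * (T_e - T_d))
    return [dT_e, dT_d]

sol = solve_ivp(rhs, (0.0, 60.0), [T_AIR, T_AIR], args=(2000.0, 1.0), max_step=0.05)
print("dust speck reaches ignition temperature:", bool(np.any(sol.y[1] > T_IGNITE)))
```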

    A thorough mathematical study of the corresponding nonlinear system of differential equations made it possible, for the first time in the general case of thermophysical parameters and characteristics of the thermal effect of periodic laser radiation, to find a formula for the permissible radiation intensity that does not lead to destruction of the EOS through the ignition of a dust speck deposited on it. In the particular case of the data from the Grasse laser ranging station (southern France), the theoretical value of the permissible intensity found for the general case almost matches the value observed experimentally at the observatory.

    In parallel with the solution of the main problem, we derive a formula for the power absorption coefficient of laser radiation by an EOS expressed in terms of four dimensionless parameters: the relative intensity of laser radiation, the relative illumination of the EOS, the relative heat transfer coefficient from the EOS to the surrounding air, and the relative steady-state temperature of the EOS.

  7. Kochetkova E.V.
    Modeling of the supply–demand imbalance in engineering labor market
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1249-1273

    Nowadays, supply-demand imbalances in professional labor markets cause human capital losses and hamper scientific and innovation development. In Russia, supply-demand imbalances in the engineering labor market are associated with deindustrialization processes and manufacturing decline, which have resulted in a negative public perception of the engineering profession and high rates of graduates not working in their specialty or changing their occupation.

    To analyze the supply-demand imbalances in the engineering labor market, we developed a macroeconomic model. The model consists of 14 blocks, including blocks for the demand for and supply of engineers and technicians, along with blocks for macroeconomic indicators such as industrial and service sector output and capital investment. Using this model, we forecasted prospective supply-demand imbalances in the engineering labor market in the short term and examined the parameters required to reach a supply-demand balance in the medium term.

    The results obtained show that a more balanced supply of and demand for engineering labor is possible if the share of investment in fixed assets of manufacturing and relative wages in industry increase simultaneously; reaching a balance is also facilitated by a decrease in the share of graduates not working in their specialty. It is worth noting that a decrease in the share of graduates not working in their specialty may be driven either by the growth of relative wages in industry and the number of vacancies or by the implementation of measures aimed at improving the working conditions of the engineering workforce and increasing the attractiveness of the profession. To summarize, the simplest scenario, which does not consider additional measures to improve working conditions and increase the attractiveness of the profession, implies slightly lower growth rates of investment in industry than are required in scenarios that involve increasing the share of engineers and technicians working in their specialty after graduation. The latter case, in which a gradual decrease in the proportion of those not working in an engineering specialty is expected, probably requires higher investment costs for attracting specialists and creating new jobs, as well as additional measures to strengthen the attractiveness of the engineering profession.

  8. One of the main problems addressed in this work is the creation of a virtual laboratory stand that allows one to obtain reliable characteristics that can be verified against actual ones, taking errors and noise into account (which is the main feature distinguishing a computational experiment from model studies). The following task is considered: a rectangular waveguide operating in the single-mode regime has a technological hole cut in its wide wall, through which a sample under study is placed into the cavity of the transmission line.

    The recovery algorithm is as follows: the laboratory measures the network parameters (S11 and/or S21) in the transmission line with the sample. In the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started; the experimental data serve as the target of this process, and the stop criterion is an estimate of proximity (the residual). It is important to note that the developed computer model, despite its apparent simplicity, is initially ill-conditioned. The Comsol modeling environment is used to set up the computational experiment. The results of the computational experiment coincided with the results of laboratory studies to a good degree of accuracy. Thus, experimental verification was carried out for several significant components: both the computer model in particular and the algorithm for restoring the target parameters in general.

    It is important to note that the computer model developed and described in this work may be effectively used in a computational experiment to restore the full dielectric parameters of a target with complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity. The resulting model is, by definition, incomplete, but its completeness is the highest among the considered options, and at the same time it is well conditioned. Particular attention in this work is paid to the modeling of a coaxial-waveguide transition; it is shown that a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
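    A hedged sketch of the outer parameter-recovery loop described above; the forward model here is a hypothetical toy stand-in for the Comsol model of the stand, which in the real workflow returns simulated S11/S21 for a given permittivity:

```python
import numpy as np

def simulate_s21(eps_r, freqs):
    """Hypothetical forward model: in the real workflow this call is replaced by
    the Comsol model of the waveguide stand returning complex S21(f)."""
    return np.exp(-1j * 2.0 * np.pi * freqs * np.sqrt(eps_r) * 1e-10)   # toy stand-in

freqs = np.linspace(8e9, 12e9, 201)        # X-band frequency sweep, Hz
s21_measured = simulate_s21(4.3, freqs)    # here: synthetic "measurement"

def residual(eps_r):
    """Proximity estimate between measured and simulated S-parameters."""
    return np.sum(np.abs(simulate_s21(eps_r, freqs) - s21_measured) ** 2)

# Sweep of the electrophysical parameter with the residual as stop criterion
eps_grid = np.linspace(1.0, 10.0, 901)     # step 0.01 in relative permittivity
best = eps_grid[np.argmin([residual(e) for e in eps_grid])]
print("recovered relative permittivity ~", round(best, 2))
```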

  9. Salenek I.A., Seliverstov Y.A., Seliverstov S.A., Sofronova E.A.
    Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146

    This work presents a new approach to constructing high-precision routes based on data from transport detectors within the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, such as the lack of interaction with the network during route construction. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are incoming lanes and the environment is the road network. By performing actions that launch vehicles, agents receive a reward for matching data from transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.

    Since the rlRouter is trained inside the SUMO simulation, it can restore routes better by taking into account the interaction of vehicles within the network with each other and with the network infrastructure. We modeled diverse traffic situations at three different junctions in order to compare the performance of SUMO’s routers with the rlRouter. We used Mean Absolute Error (MAE) as the measure of deviation from both cumulative detector data and route data. The rlRouter achieved the highest compliance with the data from the detectors. We also found that by maximizing the reward for matching the detectors, the resulting routes also get closer to the real ones. Although the routes recovered using rlRouter are superior to the routes obtained using SUMO tools, they do not fully correspond to the real ones, due to the natural limitations of induction-loop detectors. To achieve more plausible routes, it is necessary to equip junctions with other types of transport counters, for example, camera detectors.
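    For reference, a minimal sketch of the MAE criterion applied to cumulative detector counts (the numbers are illustrative):

```python
import numpy as np

def mae(observed, simulated):
    """Mean Absolute Error between cumulative detector counts and the counts
    produced in simulation by a candidate route set."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.mean(np.abs(observed - simulated))

# Illustrative cumulative counts on four induction-loop detectors
detectors_real = [120, 95, 240, 60]
detectors_candidate_a = [118, 97, 235, 64]
detectors_candidate_b = [105, 110, 210, 80]
print(mae(detectors_real, detectors_candidate_a))   # lower deviation
print(mae(detectors_real, detectors_candidate_b))   # higher deviation
```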

  10. Lukyantsev D.S., Afanasiev N.T., Tanaev A.B., Chudaev S.O.
    Numerical-analytical modeling of gravitational lensing of the electromagnetic waves in random-inhomogeneous space plasma
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 433-443

    A tool for the numerical-analytical modeling of the characteristics of electromagnetic wave propagation in chaotic space plasma, taking gravitational effects into account, is developed for the interpretation of measurement data from new-generation precision astrophysical instruments. The problem of wave propagation in curved (Riemannian) space is solved in Euclidean space by introducing an effective refractive index of vacuum. The gravitational potential can be calculated for various models of the mass distribution of astrophysical objects by solving Poisson's equation, after which the effective refractive index of vacuum can be evaluated. An approximate model of the effective refractive index is suggested under the assumption that the various objects contribute additively to the total gravitational field. The characteristics of electromagnetic waves in the gravitational field of astrophysical objects are calculated in the geometrical optics approximation, under the condition that the spatial scales of the refractive index are much larger than the wavelength. Ray differential equations in Euler form constitute the basis of the numerical-analytical tool for modeling the trajectory characteristics of the waves. Chaotic inhomogeneities of the space plasma are introduced via a model of the spatial correlation function of the refractive index. Calculations of the refractive scattering of waves are performed in the geometrical optics approximation. Integral equations for the statistical moments of the lateral deviations of rays in the observer's image plane are obtained. Using analytical transformations, the integrals for the moments are reduced to a system of first-order ordinary differential equations for the joint numerical calculation of the mean and mean-square deviations of the rays. Results of the numerical-analytical modeling of the trajectory picture of electromagnetic wave propagation in interstellar space are shown, taking into account the impact of the gravitational fields of space objects and the refractive scattering of waves on inhomogeneities of the refractive index of the surrounding plasma. Based on the modeling results, a quantitative estimate of the conditions for stochastic blurring of the gravitational lensing effect in various frequency ranges is performed. It is shown that operating frequencies in the meter wavelength range represent a conditional low-frequency limit for observing the gravitational lensing effect in stochastic space plasma. The proposed numerical-analytical modeling tool can be used to analyze the structure of the electromagnetic radiation of a quasar propagating through a group of galaxies.
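    A minimal sketch of the effective-index idea, assuming the standard weak-field form n_eff = 1 - 2*phi/c^2 with phi the Newtonian potential of a point mass, and a simple Euler integration of the ray equations in a plane (all values are illustrative):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M = 1.989e30 * 1e12  # illustrative lensing mass (~10^12 solar masses), kg

def n_eff(r):
    """Assumed weak-field effective index of vacuum: n = 1 + 2GM/(c^2 |r|)."""
    return 1.0 + 2.0 * G * M / (C * C * np.linalg.norm(r))

def grad_n(r):
    """Analytic gradient of the assumed effective index."""
    rn = np.linalg.norm(r)
    return -2.0 * G * M / (C * C) * r / rn**3

# Ray equation d/ds (n dr/ds) = grad n, integrated with a simple Euler scheme
r = np.array([-3.0e21, 1.0e20])   # start far to the left of the mass, impact parameter 1e20 m
t = np.array([1.0, 0.0])          # unit tangent of the ray
ds = 1.0e18                       # step along the ray, m
for _ in range(6000):
    g = grad_n(r)
    t = t + ds * (g - t * np.dot(t, g)) / n_eff(r)   # transverse part of grad n bends the ray
    t = t / np.linalg.norm(t)
    r = r + ds * t

# The result should be close to the classical 4GM/(c^2 b), roughly 12 arcseconds here.
print("deflection angle, arcsec:", np.degrees(np.arctan2(-t[1], t[0])) * 3600)
```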
