Search results for 'dependability':
Articles found: 308
  1. Fialko N.S., Olshevets M.M., Lakhno V.D.
    Numerical study of the Holstein model in different thermostats
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 489-502

    Based on the Holstein Hamiltonian, the dynamics of a charge introduced into a molecular chain of sites was modeled at different temperatures. In the calculations, the temperature of the chain is set by the initial data: random Gaussian distributions of the site velocities and displacements. Various options for the initial charge density distribution are considered. Long-term calculations show that the system settles into fluctuations near a new equilibrium state. For the same initial velocities and displacements, the average kinetic energy, and hence the temperature T of the chain, varies depending on the initial charge density distribution: it decreases when a polaron is introduced into the chain, or increases if at the initial moment the electronic part of the energy is at its maximum. A comparison is made with results obtained previously in the model with a Langevin thermostat. In both cases, the existence of the polaron is determined by the thermal energy of the entire chain.

    According to the simulation results, the transition from the polaron mode to the delocalized state occurs in the same range of thermal energy of a chain of $N$ sites, $\sim NT$, for both thermostat options, with one adjustment: for the Hamiltonian system the temperature does not correspond to the initially set value but is determined after long-term calculations from the average kinetic energy of the chain.

    In the polaron region, the use of different methods for simulating temperature leads to a number of significant differences in the dynamics of the system. In the region of the delocalized charge state, at high temperatures, the results averaged over a set of trajectories in the system with a random force and the results averaged over time for the Hamiltonian system are close, which does not contradict the ergodic hypothesis. From a practical point of view, at high temperatures T ≈ 300 K, when simulating charge transfer in homogeneous chains, either option for setting the thermostat can be used.
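    As a toy illustration of the two ways of setting the temperature compared above (this is not the authors' code and it omits the charge subsystem of the Holstein model), the sketch below integrates a plain harmonic chain either as a Hamiltonian system with Gaussian initial velocities and displacements or with a Langevin thermostat, and recovers the effective temperature from the time-averaged kinetic energy; all parameter values are illustrative assumptions.

      import numpy as np

      N, k, m, T, dt, steps = 50, 1.0, 1.0, 0.3, 0.01, 20000
      rng = np.random.default_rng(0)

      def forces(u):
          # nearest-neighbour harmonic forces, periodic chain
          return k * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

      def effective_temperature(langevin=False, gamma=0.5):
          u = rng.normal(0.0, np.sqrt(T / k), N)   # Gaussian initial displacements
          v = rng.normal(0.0, np.sqrt(T / m), N)   # Gaussian initial velocities
          kinetic = []
          for _ in range(steps):
              f = forces(u)
              if langevin:                          # friction + random force (Langevin thermostat)
                  f += -gamma * m * v + np.sqrt(2.0 * gamma * m * T / dt) * rng.normal(size=N)
              v += f / m * dt                       # symplectic Euler step
              u += v * dt
              kinetic.append(0.5 * m * np.sum(v ** 2))
          # temperature recovered from the time-averaged kinetic energy (k_B = 1)
          return 2.0 * np.mean(kinetic[steps // 2:]) / N

      print("Hamiltonian run: T_eff =", effective_temperature(False))
      print("Langevin run:    T_eff =", effective_temperature(True))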

  2. Smolyak S.A.
    Valuation of machines at the random process of their degradation and premature sales
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 797-815

    The model of the process of using machinery and equipment considered here takes into account the probabilistic nature of their operation and sale. It allows for random hidden failures, after which the condition of the machine deteriorates abruptly, as well as for the randomly arising need for premature sale of the machine (before the end of its service life), which itself requires, generally speaking, a random amount of time. The model is focused on assessing the market value and service life of machines in accordance with the International Valuation Standards. Strictly speaking, the market value of a used machine depends on its technical condition, but in practice appraisers take into account only its age, since generally accepted measures of the technical condition of machines do not yet exist. As a result, the market value of a used machine is taken to be equal to the average market value of similar machines of the corresponding age. For these purposes, appraisers use coefficients that reflect the influence of the age of machines on their market value. Such coefficients are not always justified and take into account neither the degradation of the machine nor the probabilistic nature of the process of its use. The proposed model is based on the anticipation-of-benefits principle. In it, we characterize the state of the machine by the intensity of the benefits it brings. The machine is subjected to a compound Poisson failure process, and after a failure its condition abruptly worsens and may even reach its limit state. Situations also arise that preclude further use of the machine by its owner. In such situations, the owner puts the machine up for sale before the end of its service life (prematurely), and the sale takes a random amount of time. The model allows us to take into account the influence of such situations, to construct an analytical relationship linking the market value of a machine with its condition, and to calculate the average coefficients of change in the market value of machines with age. It also makes it possible to take into account the influence of inflation and the scrap value of the machine. We have found that the rate of premature sales has a significant impact on the service life and market value of new and used machines. At the same time, the dependence of the market value of machines on age is largely determined by the coefficient of variation of the machines' service life. The results obtained allow us to produce more reasonable estimates of the market value of machines, including for the purposes of the system of national accounts.
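    A hedged Monte Carlo sketch of the anticipation-of-benefits idea described above: the machine's benefit intensity is degraded by a compound Poisson failure process, premature-sale events arrive at a given rate, and the value is estimated as the expected discounted benefit flow plus a residual value. The paper's analytical model is not reproduced; all rates and parameters below are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      r        = 0.10        # discount rate
      lam_fail = 0.4         # failure intensity (per year)
      lam_sale = 0.1         # premature-sale intensity (per year)
      b0, b_min = 1.0, 0.2   # initial and limiting benefit intensity
      scrap    = 0.05        # residual (scrap) value used as a simple stand-in at exit
      horizon, dt = 40.0, 0.05

      def simulate_value():
          t, b, value = 0.0, b0, 0.0
          while t < horizon and b > b_min:
              value += b * np.exp(-r * t) * dt          # discounted benefit flow
              if rng.random() < lam_fail * dt:          # hidden failure: abrupt drop in condition
                  b -= rng.exponential(0.2)
              if rng.random() < lam_sale * dt:          # forced premature sale
                  break
              t += dt
          return value + scrap * np.exp(-r * t)         # residual value at exit

      values = [simulate_value() for _ in range(20000)]
      print("estimated value of a new machine:", np.mean(values))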

  3. Shpitonkov M.I.
    Application of correlation adaptometry technique to sports and biomedical research
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 345-354

    The paper outlines approaches to mathematical modeling of the correlation adaptometry technique, which is widely used in biology and medicine. The analysis is based on models employed in descriptions of structured biological systems; it is assumed that the distribution density of the biological population numbers satisfies the Kolmogorov–Fokker–Planck equation. Using this technique, the effectiveness of treatment of patients with obesity was evaluated. Depending on the degree of obesity and the nature of comorbidity, all patients were divided into three groups. A decrease in the weight of the correlation graph, computed from the indicators measured in the patients, was shown for all studied groups, which characterizes the effectiveness of the treatment. The technique was also used to assess the intensity of training loads in academic rowing for three age groups; it was shown that the athletes of the youth group trained under the highest strain. In addition, the correlation adaptometry technique was used to evaluate the effectiveness of hormone replacement therapy in women. Depending on the prescribed drug, all patients were divided into four groups. A standard analysis of the dynamics of the mean values of the indicators showed that in the course of the treatment the averages normalized in all groups. However, the correlation adaptometry technique revealed that during the first six months the weight of the correlation graph decreased, while during the second six months it increased in all study groups. This indicates that the annual course of hormone replacement therapy is excessively long and that a transition to a six-month course is practical.
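    The comparisons above rest on the weight of the correlation graph, commonly taken as the sum of the absolute pairwise correlations between indicators that exceed a significance threshold. The sketch below computes this weight on synthetic data; the data, the threshold, and the variable names are assumptions for illustration only.

      import numpy as np

      def correlation_graph_weight(X, threshold=0.5):
          """X: (n_patients, n_indicators); weight G = sum of |r_ij| over strong pairs."""
          r = np.corrcoef(X, rowvar=False)
          iu = np.triu_indices_from(r, k=1)          # each pair of indicators counted once
          strong = np.abs(r[iu]) >= threshold
          return np.abs(r[iu][strong]).sum()

      rng = np.random.default_rng(2)
      factor = rng.normal(size=(30, 1))                   # shared "stress" factor
      before = factor + 0.3 * rng.normal(size=(30, 8))    # tightly coupled indicators
      after = factor + 1.5 * rng.normal(size=(30, 8))     # coupling relaxed after treatment
      print("G before treatment:", correlation_graph_weight(before))
      print("G after treatment: ", correlation_graph_weight(after))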

  4. Melman A.S., Evsutin O.O.
    Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210

    Data hiding in digital images is a promising direction of cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The information embedding efficiency depends on the embedding imperceptibility, capacity, and robustness. These quality criteria are mutually inverse, and the improvement of one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close to optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as an embedding operation. We select the change matrix adaptively for each block using metaheuristic optimization algorithms. In this study, we compare the performance of three metaheuristics, namely the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of embedded information. At the same time, storage of the change matrices for each block is not required for further data extraction. This improves user experience and reduces the chance of an attacker discovering the steganographic attachment. Compared with the previous algorithm for embedding information into discrete cosine transform coefficients using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm, by 26.01% and 19.39% for particle swarm optimization, and by 27.30% and 28.73% for differential evolution.
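    A hedged sketch of the block-level optimization idea: a simple genetic algorithm searches for a change matrix for one 8×8 block so that a toy extraction rule (row-sum parity) recovers the secret bits error-free with minimal distortion, scored by PSNR. This is a simplified stand-in, not the paper's QIM embedding into DCT coefficients; the block values, payload, and GA settings are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      block = rng.integers(0, 256, size=(8, 8))     # toy image block
      bits = rng.integers(0, 2, size=8)             # one secret bit per row

      def extract(b):
          return b.sum(axis=1) % 2                  # toy extraction rule: row-sum parity

      def psnr(orig, mod):
          mse = np.mean((orig - mod) ** 2)
          return 99.0 if mse == 0 else 10 * np.log10(255 ** 2 / mse)

      def fitness(delta):
          mod = np.clip(block + delta, 0, 255)
          if not np.array_equal(extract(mod), bits):
              return -1.0                           # error-free extraction is mandatory
          return psnr(block, mod)                   # otherwise reward low distortion

      pop = [rng.integers(-1, 2, size=(8, 8)) for _ in range(50)]
      for _ in range(200):                          # generations
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]                        # elitist selection
          children = []
          while len(children) < 40:
              i, j = rng.choice(10, 2, replace=False)
              mask = rng.integers(0, 2, size=(8, 8)).astype(bool)
              child = np.where(mask, parents[i], parents[j])      # uniform crossover
              mut = rng.random((8, 8)) < 0.05                      # mutation
              child = np.where(mut, rng.integers(-1, 2, size=(8, 8)), child)
              children.append(child)
          pop = parents + children
      best = max(pop, key=fitness)
      print("best PSNR:", fitness(best))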

  5. Maksimova O.V., Aronov I.Z.
    Mathematical consensus model of loyal experts based on regular Markov chains
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1381-1393

    The theoretical study of consensus makes it possible to analyze, abstracting from the specific characteristics of the groups, the various situations that social groups making decisions in this way face in real life. It is of practical relevance to study the dynamics of a social group consisting of loyal experts who, in the process of seeking consensus, yield to each other. In this case, psychological “traps” such as false consensus or groupthink are possible, which can sometimes lead to managerial decisions with dire consequences.

    The article builds a mathematical consensus model for a group of loyal experts based on regular Markov chains. Analysis of the model showed that with an increase in the loyalty (decrease in the authoritarianism) of group members, the time to reach consensus increases exponentially (the number of agreements increases), which is apparently due to the experts' reluctance to take on part of the responsibility for the decision being made. An increase in the size of such a group leads (ceteris paribus):

    – to a reduction in the number of agreements needed for consensus when members strive for absolute loyalty, i.e. each additional loyal member adds less and less “strength” to the group;

    – to a logarithmic increase in the number of agreements as the average authoritarianism of members increases. It is shown that in a small group (two people) the time to reach consensus can increase by more than 10 times compared to a group of 5 or more members; in such a group, responsibility for decision-making is transferred.

    It is proved that in the case of a group of two absolutely loyal members, consensus is unattainable.

    A reasoned conclusion is drawn that consensus in a group of loyal experts is a special case of consensus, since the dependence of the time to reach consensus on the authoritarianism of the experts and their number in the group is described by different curves than in the case of an ordinary group of experts.
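    A minimal DeGroot-style sketch of consensus driven by a regular row-stochastic (Markov) matrix, counting the rounds of mutual concessions until the opinions agree within a tolerance. The symmetric loyalty parameterization, the tolerance, and the initial opinions are assumptions that only illustrate the mechanics, not the paper's exact model.

      import numpy as np

      def rounds_to_consensus(n_experts, self_weight, tol=1e-3, max_rounds=10**6):
          """self_weight reflects authoritarianism; loyal experts keep little weight on themselves."""
          rng = np.random.default_rng(4)
          off = (1.0 - self_weight) / (n_experts - 1)
          P = np.full((n_experts, n_experts), off)
          np.fill_diagonal(P, self_weight)             # regular row-stochastic matrix
          x = rng.uniform(0.0, 1.0, n_experts)         # initial expert opinions
          for rounds in range(1, max_rounds + 1):
              x = P @ x                                # one round of mutual concessions
              if x.max() - x.min() <= tol:
                  return rounds
          return None                                  # consensus not reached

      for n in (2, 5):
          for w in (0.30, 0.10, 0.03, 0.01):           # smaller self-weight = more loyal experts
              print(f"n = {n}, self-weight = {w:4.2f}: {rounds_to_consensus(n, w)} rounds")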

  6. Fokin G.A., Volgushev D.B.
    Models for spatial selection during location-aware beamforming in ultra-dense millimeter wave radio access networks
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 195-216

    The work establishes how the potential for spatial selection of useful and interfering signals, assessed by the signal-to-interference ratio criterion, depends on the positioning error of user equipment when a base station equipped with an antenna array performs beamforming by user location. Configurable simulation parameters include a planar antenna array with a different number of antenna elements, the movement trajectory, and the accuracy of user equipment location estimation expressed as the root mean square error of the coordinate estimates. The model implements three algorithms for controlling the shape of the antenna radiation pattern: 1) controlling the beam direction for one maximum and one zero; 2) controlling the shape and width of the main beam; 3) adaptive beamforming. The simulation results showed that the first algorithm is most effective when the number of antenna array elements is no more than 5 and the positioning error is no more than 7 m, while the second algorithm is appropriate when the number of antenna array elements is more than 15 and the positioning error is more than 5 m. Adaptive beamforming is implemented using a training signal and provides optimal spatial selection of useful and interfering signals without device location data, but it is characterized by high complexity of hardware implementation. Scripts of the developed models are available for verification. The results obtained can be used in the development of scientifically based recommendations for beam control in ultra-dense millimeter-wave radio access networks of the fifth and subsequent generations.
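    A hedged sketch of location-aware beamforming for a uniform linear array (a simplification of the planar array in the paper): the beam is steered toward the estimated user direction, and the resulting signal-to-interference ratio is evaluated at the true user and interferer directions for several values of the direction error. The geometry, angles, and error values are assumptions.

      import numpy as np

      def steering_vector(n_elements, theta_deg, d=0.5):
          """Steering vector of a half-wavelength-spaced uniform linear array."""
          phase = 2 * np.pi * d * np.arange(n_elements)
          return np.exp(1j * phase * np.sin(np.deg2rad(theta_deg)))

      def sir_db(n_elements, theta_user, theta_interf, angle_error_deg):
          w = steering_vector(n_elements, theta_user + angle_error_deg)   # steer to the estimate
          g_user = np.abs(w.conj() @ steering_vector(n_elements, theta_user)) ** 2
          g_int = np.abs(w.conj() @ steering_vector(n_elements, theta_interf)) ** 2
          return 10 * np.log10(g_user / g_int)

      for err in (0.0, 2.0, 5.0, 10.0):      # error in the estimated user direction, degrees
          print(f"angle error {err:4.1f} deg -> SIR = {sir_db(16, 10.0, 35.0, err):6.1f} dB")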

  7. Neverova G.P., Frisman E.Y.
    Dynamics regimes of population with non-overlapping generations taking into account genetic and stage structures
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1165-1190

    This paper studies a model of a population with non-overlapping generations and density-dependent regulation of the birth rate. The population breeds seasonally, and its reproductive potential is determined genetically. The proposed model combines an ecological dynamic model of a limited population with non-overlapping generations and a microevolutionary model of the dynamics of its genetic structure for the case when the adaptive birth-rate trait is controlled by a single diallelic autosomal locus with allelomorphs A and a. The study showed that the genetic composition of the population, namely whether it will be polymorphic or monomorphic, is mainly determined by the values of the reproductive potentials of the heterozygote and the homozygotes. Moreover, the average reproductive potential of mature individuals and the intensity of self-regulation processes determine the population dynamics. In particular, increasing the average value of the reproductive potential leads to destabilization of the dynamics of the age group sizes. The intensity of self-regulation processes determines the nature of the emerging oscillations, since the scenario of stability loss of the fixed points depends on the values of this parameter. It is shown that the patterns of occurrence and evolution of cyclic dynamics regimes are mainly determined by the features of the life cycle of individuals in the population. A life cycle leading to the existence of non-overlapping generations yields subpopulations isolated in different years, which results in the possibility of independent microevolution of these subpopulations and, as a result, the emergence of complex dynamics of both the stage structure and the genetic one. Fixation of various adaptive mutations will gradually lead to genetic (and possibly morphological) differentiation and to differences in the average reproductive potentials of the subpopulations, which give different values of the equilibrium subpopulation sizes. Further evolutionary growth of the reproductive potentials of the limited subpopulations leads to fluctuations of their numbers, which can differ in both amplitude and phase.
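    A simplified stand-in (not the paper's exact equations) that combines the two ingredients named above: genotype-dependent reproductive potentials at a diallelic locus A/a and Ricker-type density-dependent regulation of the birth rate. All coefficients are illustrative assumptions.

      import numpy as np

      r_AA, r_Aa, r_aa = 2.2, 3.0, 1.8   # reproductive potentials of the three genotypes
      beta = 0.001                        # intensity of self-regulation
      N, p = 500.0, 0.30                  # initial population size and frequency of allele A

      for gen in range(60):
          # mean reproductive potential under Hardy-Weinberg genotype proportions
          r_mean = p**2 * r_AA + 2 * p * (1 - p) * r_Aa + (1 - p)**2 * r_aa
          # allele frequency after selection on reproductive potential
          p = (p**2 * r_AA + p * (1 - p) * r_Aa) / r_mean
          # density-dependent regulation of the birth rate (Ricker form)
          N = N * r_mean * np.exp(-beta * N)
          if gen % 10 == 0:
              print(f"generation {gen:2d}: N = {N:8.1f}, p(A) = {p:.3f}")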

  8. Varshavsky L.E.
    Study of the dynamics of the structure of oligopolistic markets with non-market opposition parties
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 219-233

    The article examines the impact of non-market actions of participants in oligopolistic markets on the market structure. The following actions of one of the market participants, aimed at increasing its market share, are analyzed: 1) price manipulation; 2) blocking the investments of stronger oligopolists; 3) destruction of the products and capacities of competitors. Linear dynamic games with a quadratic criterion are used to model the strategies of the oligopolists. The expediency of their use stems from the possibility of both an adequate description of the evolution of markets and the implementation of two mutually complementary approaches to determining the strategies of the oligopolists: 1) based on the representation of the models in the state space and the solution of generalized Riccati equations; 2) based on the application of operational calculus methods (in the frequency domain), which offers the clarity necessary for economic analysis.
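    As a reference point for the state-space approach mentioned above, the sketch below runs a backward discrete-time Riccati recursion for a single linear-quadratic criterion; the paper itself solves generalized (coupled) Riccati equations for several oligopolists, and all matrices here are illustrative assumptions.

      import numpy as np

      A = np.array([[0.9, 0.1], [0.0, 0.95]])   # assumed market state dynamics
      B = np.array([[0.0], [1.0]])              # effect of the player's control
      Q = np.eye(2)                             # state penalty in the quadratic criterion
      R = np.array([[0.5]])                     # control penalty
      T = 25                                    # planning horizon (years)

      P = Q.copy()
      gains = []
      for _ in range(T):                        # backward induction
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain, u = -K x
          P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
          gains.append(K)
      print("stationary feedback gain:", gains[-1])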

    The article shows the equivalence of the approaches to solving the problem with maximin criteria of the oligopolists in the state space and in the frequency domain. The results of calculations are considered for a duopoly whose indicators are close to those of one of the duopolies in the world microelectronics industry. The second duopolist is less efficient in terms of costs, though more mobile. Its goal is to increase its market share by applying the non-market methods listed above.

    Calculations carried out with the help of the game model made it possible to construct dependencies that characterize the relationship between the relative increases in the production volumes of the weak and strong duopolists over a 25-year period under price manipulation. The constructed dependencies show that an increase in the price, for the accepted linear demand function, leads to a very small increase in the production of the strong duopolist, but, simultaneously, to a significant increase in this indicator for the weak one.

    Calculations carried out with other variants of the model show that blocking investments, as well as destroying the products of the strong duopolist, leads to a more significant increase in the production of marketable products by the weak duopolist than to a decrease in this indicator for the strong one.

  9. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping a set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. The usage of embeddings makes programs easier to compare, since direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG) is avoided. The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta of the current instruction relative to the previous one), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for judging which programs have a higher or lower cache miss ratio is given; it is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation based on the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, which are sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion. The process of generating such synthetic tests is also considered. Moreover, the variability of the performance metric across the programs of such a test is proposed as a metric to be improved by exploring more test generators.
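    A hedged sketch of the evaluation step described above: program feature vectors are embedded into 2D with t-SNE, and each program's distance to the worst program's embedding is correlated with its cache miss ratio. Real IR2Vec vectors and perf measurements are replaced by synthetic data; only the shape of the pipeline is reproduced.

      import numpy as np
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(5)
      n_programs = 40
      miss_ratio = rng.uniform(0.01, 0.30, n_programs)        # stand-in for perf stat D1 miss ratios
      # synthetic "IR2Vec-like" feature vectors loosely tied to the metric plus noise
      features = np.column_stack([miss_ratio + 0.05 * rng.normal(size=n_programs)
                                  for _ in range(50)])

      emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
      worst = emb[np.argmax(miss_ratio)]                      # embedding of the worst program
      dist = np.linalg.norm(emb - worst, axis=1)              # distance of each program to it
      print("corr(distance to worst, miss ratio):", np.corrcoef(dist, miss_ratio)[0, 1])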

  10. Nikitiuk A.S.
    Parameter identification of viscoelastic cell models based on force curves and wavelet transform
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1653-1672

    Mechanical properties of eukaryotic cells play an important role in their life cycle and in the development of pathological processes. In this paper we discuss the problem of identifying and verifying the parameters of viscoelastic constitutive models based on force spectroscopy data of living cells. It is proposed to use the one-dimensional continuous wavelet transform to calculate the relaxation function. Analytical calculations and the results of numerical simulation are given, which make it possible to obtain mutually consistent relaxation functions on the basis of experimentally determined force curves and theoretical stress-strain relationships using wavelet differentiation algorithms. Test examples demonstrating the correctness of the software implementation of the proposed algorithms are analyzed. Cell models are considered, on the example of which the application of the proposed procedure for the identification and verification of their parameters is demonstrated. Among them are a structural-mechanical model with parallel-connected fractional elements, which is currently the most adequate in terms of compliance with atomic force microscopy data for a wide class of cells, and a new statistical-thermodynamic model, which is not inferior in descriptive capabilities to models with fractional derivatives but has a clearer physical meaning. For the statistical-thermodynamic model, the procedure of its construction is described in detail, which includes the following steps: introduction of a structural variable, the order parameter, to describe the orientation properties of the cell cytoskeleton; formulation and solution of the statistical problem for the ensemble of actin filaments of a representative cell volume with respect to this variable; and establishment of the form of the free energy as a function of the order parameter, temperature, and external load. It is also proposed to use an oriented viscoelastic body as a model of a representative element of the cell. Following the theory of linear thermodynamics, evolutionary equations describing the mechanical behavior of the representative volume of the cell are obtained, which satisfy the basic thermodynamic laws. The problem of optimizing the parameters of the statistical-thermodynamic model of the cell, which can be compared both with experimental data and with the results of simulations based on other mathematical models, is also posed and solved. The viscoelastic characteristics of cells are determined on the basis of comparison with literature data.
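    A minimal stand-in for the wavelet-differentiation step described above: a noisy curve is differentiated by convolving it with a Gaussian-derivative kernel, the same smoothing-differentiation principle that a continuous wavelet transform with a Gaussian-derivative wavelet exploits. This is not the paper's algorithm; the test signal, the scale, and the normalization are assumptions.

      import numpy as np

      def gaussian_derivative_diff(y, dx, scale=5.0):
          """Smoothed derivative of y via convolution with a Gaussian-derivative kernel."""
          t = np.arange(-4 * scale, 4 * scale + 1)
          kernel = -t / scale**2 * np.exp(-t**2 / (2 * scale**2))
          kernel /= np.sum(t * kernel) * (-1)        # normalize so the derivative of a line is exact
          return np.convolve(y, kernel, mode="same") / dx

      dx = 0.01
      x = np.arange(0, 10, dx)
      y = np.sin(x) + 0.05 * np.random.default_rng(6).normal(size=x.size)   # noisy "force curve"
      dy = gaussian_derivative_diff(y, dx)
      err = np.max(np.abs(dy[200:-200] - np.cos(x[200:-200])))   # skip convolution edge effects
      print("max interior error vs exact derivative:", err)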
