Search results for 'estimation of parameters':
Articles found: 76
  1. The paper develops a new mathematical method for joint signal and noise calculation under the Rice statistical distribution, based on combining the maximum likelihood method with the method of moments. The sought-for values of signal and noise are calculated by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations has been obtained for the required signal and noise parameters, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It is shown that solving the two-parameter task by means of the proposed technique does not increase the demanded computational resources compared with solving the task in the one-parameter approximation. An analytical solution has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates how the accuracy and dispersion of the sought-for parameter estimates depend on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion of the estimated signal and noise parameters calculated by the proposed technique changes in inverse proportion to the number of measurements in a sample. The accuracy of the sought-for Rician parameter estimation by the proposed technique has been compared with that of an earlier developed version of the method of moments. The problem considered in the paper is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in ultrasonic visualization devices, in the analysis of optical signals in range-measuring systems and of radar signals, as well as in many other scientific and applied tasks that are adequately described by the Rice statistical model.
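
    As a hedged illustration, the sketch below estimates the Rician signal and noise parameters from amplitude samples with the classical even-moment estimator (not the paper's combined maximum-likelihood/method-of-moments technique); consistent with the abstract, the dispersion of these estimates falls in inverse proportion to the sample size.

```python
import numpy as np

def rice_moments_estimate(a):
    """Estimate Rician signal nu and noise sigma from amplitude samples
    using the second and fourth sample moments:
        E[A^2] = nu^2 + 2*sigma^2,  E[A^4] = nu^4 + 8*nu^2*sigma^2 + 8*sigma^4,
    which give nu^4 = 2*E[A^2]^2 - E[A^4]."""
    m2 = np.mean(a**2)
    m4 = np.mean(a**4)
    nu2 = np.sqrt(max(2.0 * m2**2 - m4, 0.0))  # clip: sampling noise can push it negative
    sigma2 = 0.5 * (m2 - nu2)
    return np.sqrt(nu2), np.sqrt(max(sigma2, 0.0))

# Synthetic check: Rician amplitudes from a complex Gaussian with nonzero mean
rng = np.random.default_rng(0)
nu_true, sigma_true, n = 3.0, 1.0, 10_000
a = np.abs(nu_true + sigma_true * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
print(rice_moments_estimate(a))  # close to (3.0, 1.0); estimate variance ~ 1/n
```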

  2. Grebenkin I.V., Alekseenko A.E., Gaivoronskiy N.A., Ignatov M.G., Kazennov A.M., Kozakov D.V., Kulagin A.P., Kholodov Y.A.
    Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

    The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds as an input an estimate from the Potts model taken from statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice. Within the framework of the proposed method, the model is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the 20 standard amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the Potts model parameters, which we then use to evaluate a new MHC + peptide pair, supplementing the input data of the neural network with this value. This approach, combined with the ensemble construction technique, improves prediction accuracy in terms of the positive predictive value (PPV) metric compared to the baseline model.
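
    To make the Potts-model input concrete, here is a minimal sketch of the standard Potts energy over a 20-letter amino-acid alphabet; the field and coupling parameters, which in the paper come from solving the inverse problem on experimentally confirmed pairs, are replaced here by hypothetical random values.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"          # 20 standard amino acids = 20 Potts states
IDX = {a: i for i, a in enumerate(AMINO)}

def potts_energy(seq, h, J):
    """Potts energy E(s) = -sum_i h[i, s_i] - sum_{i<j} J[i, j, s_i, s_j].
    h: (L, 20) fields, J: (L, L, 20, 20) couplings fitted on known binders."""
    s = np.array([IDX[a] for a in seq])
    L = len(s)
    e = -h[np.arange(L), s].sum()
    for i in range(L):
        for j in range(i + 1, L):
            e -= J[i, j, s[i], s[j]]
    return e

# Hypothetical usage: random parameters stand in for the fitted ones
rng = np.random.default_rng(1)
peptide = "SIINFEKL"                     # an 8-mer peptide, for illustration
L = len(peptide)
h = rng.normal(size=(L, 20))
J = rng.normal(scale=0.1, size=(L, L, 20, 20))
score = potts_energy(peptide, h, J)      # fed to the network as an extra input
print(score)
```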

  3. Kazarnikov A.V.
    Analysing the impact of migration on background social strain using a continuous social stratification model
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 661-673

    The background social strain of a society can be quantitatively estimated using various statistical indicators. Mathematical models, which make it possible to forecast the dynamics of social strain, are successful in describing various social processes. If the number of interacting groups is small, the dynamics of the corresponding indicators can be modelled with a system of ordinary differential equations. An increase in the number of interacting components leads to growing complexity, which makes the analysis of such models a challenging task. A continuous social stratification model can be considered as the result of the transition from a discrete number of interacting social groups to their continuous distribution over some finite interval. In such a model, social strain naturally spreads locally between neighbouring groups, while in reality the social elite influences the whole society via news media, and the Internet allows non-local interaction between social groups. These factors, however, can be taken into account to some extent by the model term describing negative external influence on the society. In this paper, we develop a continuous social stratification model describing the dynamics of two societies connected through migration. We assume that people migrate from the social group of the donor society with the highest strain level to poorer social layers of the acceptor society, transferring the social strain at the same time. We assume that all model parameters are constants, which is a realistic assumption for small societies only. Using the finite volume method, we construct a spatial discretization for the problem capable of reproducing the finite propagation speed of social strain. We verify the discretization by comparing the results of numerical simulations with the exact solutions of the auxiliary non-linear diffusion equation. We perform a numerical analysis of the proposed model for different values of the model parameters, study the impact of migration intensity on the stability of the acceptor society, and find the destabilization conditions. The results obtained in this work can be used in further analysis of the model in the more realistic case of inhomogeneous coefficients.
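
    A minimal sketch of the kind of finite-volume scheme described, applied to a degenerate non-linear diffusion equation u_t = (u^m u_x)_x whose solutions spread at finite speed; the actual model's strain and migration terms are not reproduced here.

```python
import numpy as np

def step_fv(u, dx, dt, m=2):
    """One explicit finite-volume step for u_t = (u^m * u_x)_x.
    The degenerate diffusivity u^m vanishes at u = 0, which gives the
    finite propagation speed the discretization needs to reproduce."""
    d = u**m
    d_face = 0.5 * (d[:-1] + d[1:])             # diffusivity at cell faces
    flux = -d_face * np.diff(u) / dx            # F = -u^m u_x at interior faces
    flux = np.concatenate(([0.0], flux, [0.0])) # zero-flux boundaries
    return u - dt / dx * np.diff(flux)

# Compactly supported initial bump: its support spreads at finite speed
n, xlim = 200, 5.0
x = np.linspace(-xlim, xlim, n)
dx = x[1] - x[0]
u = np.where(np.abs(x) < 1.0, 1.0 - x**2, 0.0)
dt = 0.2 * dx**2 / max(u.max()**2, 1e-12)       # explicit stability margin
for _ in range(2000):
    u = step_fv(u, dx, dt)
print("support width:", dx * np.count_nonzero(u > 1e-8))  # compare initial ~2
```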

  4. Lukianchenko P.P., Danilov A.M., Bugaev A.S., Gorbunov E.I., Pashkov R.A., Ilyina P.G., Gadzhimirzayev Sh.M.
    Approach to Estimating the Dynamics of the Industry Consolidation Level
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 129-140

    In this article we propose a new approach to analyzing econometric industry parameters for the industry consolidation level. The research is based on a simple automatic control model of the industry. The state of the industry is measured by econometric parameters obtained quarterly from each company in the industry, provided by the tax control regulator. We propose an approach to industry analysis that does not track the economy of each individual company but explores the parameters of the set of all companies as a whole. The econometric parameters obtained quarterly from each company are income, number of employees, taxes, and income from software licenses. The ABC analysis method was extended to ABCD analysis (D being companies with zero impact on industry metrics) and used to make the results obtained for different indicators comparable. Pareto charts were formed for the set of econometric indicators.

    To estimate industry monopolization, the Herfindahl – Hirschman index (HHI) was calculated for the most sensitive company metrics. Using the HHI approach, it was shown that COVID-19 did not lead to changes in the monopolization of the Russian IT industry.
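
    For reference, the Herfindahl – Hirschman index is the sum of squared percentage market shares; below is a minimal sketch on hypothetical company incomes.

```python
def hhi(values):
    """Herfindahl-Hirschman index: sum of squared percentage market shares.
    Common regulatory reading: below 1500 ~ competitive, 1500-2500 ~
    moderately concentrated, above 2500 ~ highly concentrated."""
    total = sum(values)
    return sum((100.0 * v / total) ** 2 for v in values)

# Hypothetical quarterly company incomes for one industry metric
incomes_2019q4 = [120.0, 80.0, 40.0, 30.0, 30.0]
incomes_2020q4 = [125.0, 78.0, 42.0, 31.0, 29.0]
print(hhi(incomes_2019q4), hhi(incomes_2020q4))  # similar values: no monopolization shift
```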

    As the most visually obvious approach to industry visualization, scatter diagrams combined with the Pareto chart colors were proposed. The effect of the accreditation procedure is clearly observed on a scatter diagram with red/black dots for accredited and non-accredited companies, respectively.

    The last reported result is the proposal to use end-to-end product identification of licenses as a market structure control instrument. It provides a basis for avoiding multiple accounting of license reselling within the software distribution chain.

    The results of the research can serve as a basis for future IT industry analysis and agent-based simulation.

  5. Minkevich I.G.
    Estimation of maximal values of biomass growth yield based on the mass-energy balance of cell metabolism
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 723-750

    The biomass growth yield is the ratio of the newly synthesized substance of growing cells to the amount of consumed substrate, the source of matter and energy for cell growth. The yield characterizes the efficiency of substrate conversion to cell biomass. The conversion is carried out by cell metabolism, the complete aggregate of biochemical reactions occurring in the cells.

    This work newly considers the problem of predicting the maximal cell growth yield based on balances of the whole living-cell metabolism and its fragments, called partial metabolisms (PM). The following PMs are used in the present consideration: i) the standard constructive metabolism (SCM), which consists of pathways that are identical during growth of various organisms on any substrate; SCM starts from several standard compounds (nodal metabolites): glucose, acetyl-CoA, 2-oxoglutarate, erythrose-4-phosphate, oxaloacetate, ribose-5-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate; and ii) the full forward metabolism (FM), the remaining part of the whole metabolism. The first consumes high-energy bonds (HEB) formed by the second. This work examines a generalized variant of the FM which takes into account the possible presence of extracellular products as well as the possibilities of both aerobic and anaerobic growth. Instead of separate balances for the formation of each nodal metabolite, as was done in our previous work, this work deals at once with the whole aggregate of these metabolites. This makes the problem solution more compact, requiring a smaller number of biochemical quantities and substantially less computational time. An equation expressing the maximal biomass yield via the specific amounts of HEB formed and consumed by the partial metabolisms has been derived. It includes the specific HEB consumption by SCM, which is a universal biochemical parameter applicable to a wide range of organisms and growth substrates. To determine this parameter correctly, the full constructive metabolism and its forward part are considered for the growth of cells on glucose as the most studied substrate. We used the previously found properties of the elemental composition of the lipid and lipid-free fractions of cell biomass. A numerical study of the effect of various interrelations between flows via different nodal metabolites has been carried out. It showed that the requirements of the SCM for high-energy bonds and NAD(P)H are practically constant. The obtained HEB-to-formed-biomass coefficient is an efficient tool for estimating the maximal biomass yield from substrates for which the primary metabolism is known. The ATP-to-substrate ratio necessary for the yield estimation was calculated using the special computer program package GenMetPath.

  6. Chernov I.A.
    High-throughput identification of hydride phase-change kinetics models
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

    Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup, and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive models, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them if necessary and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large body of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related tasks, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the time layer based on the previous layer or the entire history, calculating the observed value and the independent variable from the task variables, and comparing the curve with the reference. At the middle level, special algorithms can be used for solving quite general parabolic-type boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalization. At the high level, it is enough to choose a ready, tested model for a particular material and modify it in relation to the experimental conditions.
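
    To make the low-level "interface procedures" idea concrete, here is a minimal illustration of that three-procedure pattern on a toy fast-diffusion kinetic model; the names and the Python setting are hypothetical and do not reproduce the actual HIMICOS interfaces.

```python
import numpy as np

def next_layer(u, t, dt, k=1.0):
    """Advance one time layer of a toy fast-diffusion kinetic model:
    the reacted fraction grows as du/dt = k * (1 - u)."""
    return u + dt * k * (1.0 - u)

def observe(u, t):
    """Map task variables to (independent variable, observed value)."""
    return t, u

def distance(curve, reference):
    """Compare simulated and reference curves (here: an L2 metric)."""
    return np.sqrt(np.mean((np.asarray(curve) - np.asarray(reference)) ** 2))

# Driver: simulate, then score against a reference curve
dt, n = 0.01, 500
t_grid = dt * np.arange(n)
u, sim = 0.0, []
for t in t_grid:
    sim.append(observe(u, t)[1])
    u = next_layer(u, t, dt, k=2.0)
reference = 1.0 - np.exp(-2.0 * t_grid)       # exact solution for k = 2
print("misfit:", distance(sim, reference))
```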

  7. Varshavskiy A.E.
    A model for analyzing income inequality based on a finite functional sequence (adequacy and application problems)
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 675-689

    The paper considers the adequacy of a model developed earlier by the author for the analysis of income inequality. The model is based on an empirically confirmed hypothesis that the shares of the 20% population groups in total income, relative to the income of the richest group, can be represented as a finite functional sequence, each member of which depends on one parameter, a specially defined indicator of inequality. It is shown that, in addition to the existing methods of inequality analysis, the model makes it possible to estimate, using analytical expressions, the income shares of 20%, 10% and smaller population groups for different levels of inequality, to identify how these shares change as inequality grows, to estimate the level of inequality from known ratios between the incomes of different population groups, etc.

    The paper provides a more detailed confirmation of the proposed model's adequacy in comparison with the previously obtained results of statistical analysis of empirical data on the distribution of income between the 20% and 10% population groups. It is based on the analysis of certain ratios between the values of quintiles and deciles according to the proposed model. These ratios were verified using a data set for a large number of countries, and the obtained estimates confirm the sufficiently high accuracy of the model.

    Data are presented that confirm the possibility of using the model to analyze the dependence of income distribution by population groups on the level of inequality, as well as to estimate the inequality indicator from income ratios between different groups. The considered variants include the cases when the income of the richest 20% equals the income of the poorest 60%, of the 40% middle class, or of the remaining 80% of the population; when the income of the richest 10% equals the income of the poorest 40%, 50% or 60%, or the income of various middle-class groups; and also the cases when the distribution of income obeys harmonic proportions and when the quintiles and deciles corresponding to the middle class reach a maximum. It is shown that the income shares of the richest middle-class groups are relatively stable and reach a maximum at certain levels of inequality.

    The results obtained with the help of the model can be used to set benchmarks for a policy of gradually increasing progressive taxation in order to move toward the level of inequality typical of countries with socially oriented economies.

  8. One of the main problems of this work is the creation of a virtual laboratory stand that allows one to obtain reliable characteristics that can be validated as actual, taking errors and noise into account (which is the main feature distinguishing a computational experiment from model studies). The following task is considered: a rectangular waveguide operates in the single-mode regime; a technological hole is cut in its wide wall, through which a sample for research is placed into the cavity of the transmission line. The recovery algorithm is as follows: the laboratory measures the network parameters (S11 and/or S21) of the transmission line with the sample. In the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started; the experimental data serve as the target (mask) of this process, and an interpretive estimate of proximity (the residual) is the stopping criterion. It is important to note that the developed computer model, despite its apparent simplicity, is initially ill-conditioned. The COMSOL modeling environment is used to set up the computational experiment. The results of the computational experiment coincided, to a good degree of accuracy, with the results of laboratory studies. Thus, experimental verification was carried out for several significant components, both of the computer model in particular and of the algorithm for restoring the target parameters in general. The computer model developed and described in this work may be effectively used in a computational experiment to restore the full dielectric parameters of a target of complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity. The resulting model is, by definition, incomplete, but its completeness is the highest among the considered options, while at the same time the model is well-conditioned. Particular attention is paid to the modeling of a coaxial-to-waveguide transition; it is shown that a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
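
    A hedged sketch of the described retrieval loop: least-squares optimization of electrophysical parameters against measured network parameters. The forward model below is a hypothetical stand-in; in the actual setup each forward evaluation is a run of the COMSOL model of the loaded waveguide.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_s21(params, freqs):
    """Placeholder forward model: |S21| of the loaded line as a resonance dip
    whose position shifts with permittivity and whose width grows with loss.
    In the actual setup this call is an evaluation of the COMSOL model."""
    eps_r, tan_d = params
    f0 = 20e9 / np.sqrt(eps_r)            # hypothetical eps-dependent resonance
    bw = 0.2e9 + 20e9 * tan_d             # hypothetical loss-dependent width
    return 1.0 - 0.8 / (1.0 + ((freqs - f0) / bw) ** 2)

freqs = np.linspace(8e9, 12e9, 201)       # X-band sweep, single-mode regime
measured = forward_s21([4.2, 0.02], freqs)  # synthetic "laboratory" S21 data

def residual(params):
    # Misfit between model and measurement: the residual that stops the loop
    return forward_s21(params, freqs) - measured

fit = least_squares(residual, x0=[3.0, 0.05], bounds=([1.0, 0.0], [20.0, 0.5]))
print(fit.x)                              # ~[4.2, 0.02] for this toy model
```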

  9. A simple non-linear model is suggested that makes it possible to calculate daily and monthly GPP and NPP of forests using parameters characterizing the light-use efficiencies for GPP and NPP and integral values of absorbed photosynthetically active radiation obtained from field measurements and remote sensing data. Daily and monthly GPP and NPP of the forest ecosystems were derived from field measurements of the net ecosystem exchange of CO2 in spruce and tropical rain forests using the process-based Mixfor-SVAT model.
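
    For orientation, the classical linear light-use-efficiency relation on which such models build is production = efficiency × absorbed PAR; the sketch below uses hypothetical numbers and does not reproduce the paper's non-linear form.

```python
def daily_production(apar, eps_gpp, eps_npp):
    """Monteith-type light-use-efficiency estimates: production equals
    efficiency times absorbed PAR. Units: apar in MJ/m^2/day,
    eps in gC/MJ, result in gC/m^2/day."""
    return eps_gpp * apar, eps_npp * apar

# Hypothetical values for a spruce stand on a clear summer day
gpp, npp = daily_production(apar=8.0, eps_gpp=1.8, eps_npp=0.9)
print(gpp, npp)   # 14.4 and 7.2 gC/m^2/day
```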

  10. Usanov D.A., Skripal A.V., Averyanov A.P., Dobdin S.Yu., Kashchavtsev E.O.
    Method of estimation of heart failure during a physical exercise
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 311-321

    The results of determining the risk of cardiovascular failure in young athletes and adolescents under stressful physical activity are demonstrated. A method of screening diagnostics of the risk of developing heart failure is described. The results of contactless measurement of the pulse wave form of the radial artery using a semiconductor laser autodyne are presented. The measurements used an RLD-650 laser diode (output power 5 mW, emission wavelength 654 nm). The pulse wave form was reconstructed from the movement of the reflector, which is the surface of the skin over the human artery; the method of assessing the risk of cardiovascular disease during exercise was tested, and the results of its application to assessing the risk of cardiovascular failure reactions in young athletes were analyzed. The following indicators were selected as the analyzed parameters: the steepness of the rise in the fast and slow phases of the systolic portion, the rate of change of the pulse wave in the catacrotic phase, and the variability of cardiac intervals determined by the time intervals between the peaks of the pulse wave. The pulse wave form was analyzed using its first and second derivatives with respect to time. The zeros of the first derivative of the pulse wave make it possible to mark off the systolic rise time. A minimum of the second derivative corresponds to the end of the fast phase and the beginning of the slow pressure build-up in systole. Using the first and second derivatives of the pulse wave made it possible to separately analyze the fast and slow pressure-increase phases of the pulse wave form during the systolic rise. It has been established that the presence of anomalies in the pulse wave form, in combination with vagotonic nervous regulation of the patient's cardiovascular system, is a sign of danger of circulatory collapse during physical exercise.
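
    A minimal sketch of the derivative-based landmark extraction described above, on a synthetic waveform: zero crossings of the first derivative bound the systolic rise, and the minimum of the second derivative separates the fast and slow pressure-rise phases.

```python
import numpy as np

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)            # one cardiac cycle
# Toy pulse wave standing in for the measured autodyne signal
pulse = np.sin(np.pi * t) ** 3 + 0.15 * np.sin(3 * np.pi * t)

d1 = np.gradient(pulse, t)                   # first time derivative
d2 = np.gradient(d1, t)                      # second time derivative

# First zero crossing of d1 marks the end of the systolic rise
systolic_peak = int(np.where(np.diff(np.sign(d1)) != 0)[0][0])

# Minimum of d2 on the rising segment separates the fast and slow phases
fast_slow_boundary = int(np.argmin(d2[:systolic_peak]))

print("systolic rise time:", t[systolic_peak], "s")
print("fast/slow phase boundary:", t[fast_slow_boundary], "s")
```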
