Search results for 'estimating function':
Articles found: 90
  1. Tinkov O.V., Polishchuk P.G., Khachatryan D.S., Kolotaev A.V., Balaev A.N., Osipov V.N., Grigorev B.Y.
    Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930

    Inhibitors of histone deacetylases (HDACi) are considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation, and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. The vascular endothelial growth factor (VEGF), a key angiogenic agent secreted by malignant tumors, induces the proliferation and migration of vascular endothelial cells. Currently, one of the most promising strategies in the fight against cancer is the creation of hybrid drugs that act simultaneously on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties were studied as dual VEGFR-2/HDAC inhibitors using a simplex representation of the molecular structure and Support Vector Machine (SVM). The total sample of 42 compounds was divided into training and test sets, and five-fold cross-validation was used for internal validation. Satisfactory quantitative structure-activity relationship (QSAR) models were constructed (R2test = 0.64–0.87) for inhibition of HDAC, VEGFR-2, and the human breast cancer cell line MCF-7. The resulting QSAR models were interpreted, and the coordinated effect of different molecular fragments on the antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine can be distinguished for all three types of activity. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For a comparative QSAR study we used physicochemical descriptors calculated by the HYBOT program, the Random Forest (RF) method, and the online version of the expert system OCHEM (https://ochem.eu). For the OCHEM modeling, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the help of the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
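    As an illustration of the workflow described above (not the authors' code), here is a minimal QSAR sketch in Python: an SVM regressor with five-fold cross-validation and a held-out test set. The random descriptor matrix stands in for the simplex representation of molecular structure, and all names and values are hypothetical.

```python
# QSAR regression sketch: SVM with 5-fold cross-validation, as in the
# abstract. Descriptors are random stand-ins for the simplex
# representation of molecular structure (hypothetical data).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 50))   # 42 compounds, 50 descriptors
y = rng.normal(size=42)         # pIC50-like activity values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
cv_scores = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2")  # internal validation
model.fit(X_tr, y_tr)
print("5-fold CV R2:", cv_scores.mean())
print("R2(test):", r2_score(y_te, model.predict(X_te)))
```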

  2. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

    In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence with the rate of a geometric progression to the set of exact solutions or its vicinity can be guaranteed. Importantly, implementing the proposed method does not require knowledge of the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, we propose to use a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in contrast to the class of problems without constraints, the exact minimal value of the objective function must then be known. The paper describes conditions on the inexactness of this information that make it possible to preserve convergence with the rate of a geometric progression to a vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first, the problem of approximating the exact solution arises only up to a pre-selected level of accuracy; for this, we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to a vicinity of the desired set of points with the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem. For it, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution with the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, when the additional assumption on the choice of the initial point is no longer needed. For both approaches, it is proven that the distance from the current point to the set of solutions decreases as the number of iterations grows. This, in particular, makes it possible to restrict the requirements on the properties of the functions used (Lipschitz continuity, sharp minimum) to a bounded set only. Some computational experiments are performed, including for the truss topology design problem.
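    A minimal sketch of a switching subgradient method with Polyak-type steps, assuming the exact minimal value of the objective is known (the paper's exact-information setting); the test problem and all constants are hypothetical:

```python
# Switching subgradient method with Polyak-type steps for
#     min f(x)  s.t.  g(x) <= 0.
# Productive steps move on the objective; non-productive steps
# restore feasibility. f_star is assumed known.
import numpy as np

def f(x):  return np.abs(x[0]) + 2 * np.abs(x[1])                # convex, sharp minimum at 0
def df(x): return np.array([np.sign(x[0]), 2 * np.sign(x[1])])   # a subgradient of f
def g(x):  return np.linalg.norm(x) - 10.0                       # constraint g(x) <= 0
def dg(x): return x / (np.linalg.norm(x) + 1e-12)                # a subgradient of g

f_star = 0.0        # exact minimal value of the objective (assumed known)
eps = 1e-8          # tolerance deciding whether a step is productive
x = np.array([5.0, -3.0])

for k in range(200):
    if g(x) <= eps:                                  # productive step
        s = df(x)
        h = (f(x) - f_star) / (np.dot(s, s) + 1e-12) # Polyak step size
    else:                                            # non-productive step
        s = dg(x)
        h = g(x) / (np.dot(s, s) + 1e-12)
    x = x - h * s

print("final point:", x, "f(x):", f(x))
```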

  3. Lyubushin A.A., Rodionov E.A.
    Analysis of predictive properties of ground tremor using Huang decomposition
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 939-958

    A method is proposed for analyzing the tremor of the earth's surface, measured by means of space geodesy, in order to extract prognostic effects of seismicity activation. The method is illustrated by a joint analysis of a set of synchronous time series of daily vertical displacements of the earth's surface on the Japanese Islands for the time interval 2009–2023. The analysis is based on dividing the source data (1047 time series) into blocks (clusters of stations) and sequentially applying the principal component method. The station network is divided into clusters by the K-means method using the maximum of the pseudo-F statistic as the criterion; for Japan the optimal number of clusters was found to be 15. The Huang decomposition into a sequence of independent empirical oscillation modes (EMD, Empirical Mode Decomposition) is applied to the time series of principal components from the station blocks. To ensure the stability of the estimates of the EMD waveforms, averaging over 1000 independent additive realizations of white noise of limited amplitude was performed. Using the Cholesky decomposition of the covariance matrix of the waveforms of the first three EMD components in a sliding time window, indicators of anomalous tremor behavior were determined. By calculating the correlation function between the averaged indicators of anomalous behavior and the released seismic energy in the vicinity of the Japanese Islands, it was established that bursts in the measure of anomalous tremor behavior precede releases of seismic energy. The purpose of the article is to refine the common hypothesis that movements of the earth's crust recorded by space geodesy may contain predictive information. That displacements recorded by geodetic methods respond to the effects of earthquakes is widely known and has been demonstrated many times, but isolating geodetic effects that predict seismic events is much more challenging. In this paper, we propose one method for detecting predictive effects in space geodesy data.
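    A rough sketch of the processing chain (station clustering, principal components, noise-stabilized EMD), assuming the third-party PyEMD package and hypothetical data; the paper's averaging over 1000 noise realizations is analogous to the ensemble EMD used here:

```python
# Cluster stations with K-means, take the first principal component of
# each cluster, and decompose it with ensemble EMD (noise-stabilized).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from PyEMD import EEMD   # third-party ensemble EMD implementation

rng = np.random.default_rng(1)
n_stations, n_days = 60, 1500
coords = rng.uniform(size=(n_stations, 2))       # hypothetical station coordinates
series = rng.normal(size=(n_stations, n_days))   # hypothetical daily vertical displacements

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

eemd = EEMD(trials=100)   # ensemble of noise-perturbed EMD runs (paper: 1000)
for c in range(5):
    block = series[labels == c]                               # stations of one cluster
    pc1 = PCA(n_components=1).fit_transform(block.T).ravel()  # first principal component
    imfs = eemd.eemd(pc1)                                     # empirical oscillation modes
    print(f"cluster {c}: {imfs.shape[0]} EMD modes extracted")
```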

  4. Grabarnik P.Ya.
    Parameter estimation methods for random point fields with local interactions
    Computer Research and Modeling, 2016, v. 8, no. 2, pp. 323-332

    The paper gives an overview of methods for estimating the parameters of random point fields with local interaction between points. It is shown that the conventional maximum pseudo-likelihood method is a special case of a family of estimation methods based on the use of an auxiliary Markov process whose invariant measure is the Gibbs point field with the parameters to be estimated. A generalization of this method is proposed, resulting in an estimating equation that cannot be obtained by the universal Takacs–Fiksel method. Computer simulations show that the new method makes it possible to obtain estimates of better quality than those produced by the widely used maximum pseudo-likelihood method.
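    A minimal sketch of maximum pseudo-likelihood estimation for a Strauss process, the textbook special case mentioned above; the point pattern is simulated, the interaction radius is assumed known, and the integral in the log pseudo-likelihood is approximated on a grid of dummy points:

```python
# Maximum pseudo-likelihood for a Strauss process with conditional
# intensity lambda(u|X) = beta * gamma**t(u, X), where t counts points
# of X within distance r of u. Hypothetical data on the unit square.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
pts = rng.uniform(size=(80, 2))   # hypothetical point pattern
r = 0.05                          # interaction radius (assumed known)

def n_close(u, X):
    d = np.linalg.norm(X - u, axis=1)
    return np.sum((d > 0) & (d < r))   # neighbours within r, excluding u itself

grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                            np.linspace(0, 1, 40)), -1).reshape(-1, 2)
t_pts  = np.array([n_close(p, pts) for p in pts])    # t(x_i, X \ {x_i})
t_grid = np.array([n_close(u, pts) for u in grid])   # t(u, X) at dummy points

def neg_log_pl(theta):
    log_beta, log_gamma = theta                       # log scale for stability
    term1 = np.sum(log_beta + t_pts * log_gamma)      # sum of log conditional intensities
    term2 = np.mean(np.exp(log_beta + t_grid * log_gamma))  # integral over unit square
    return -(term1 - term2)

res = minimize(neg_log_pl, x0=[np.log(80.0), np.log(0.5)])
print("beta =", np.exp(res.x[0]), "gamma =", np.exp(res.x[1]))
```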

  5. The paper develops a new mathematical method for the joint calculation of signal and noise under the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The sought-for values of signal and noise are calculated by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations has been obtained for the required signal and noise parameters, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It is shown that solving the two-parameter task by means of the proposed technique does not increase the volume of required computational resources compared with solving the task in the one-parameter approximation. An analytical solution of the task has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates the dependence of the estimation accuracy and dispersion of the sought-for parameters on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion values of the estimated signal and noise parameters calculated by means of the proposed technique change in inverse proportion to the number of measurements in a sample. The accuracy of estimating the sought-for Rician parameters by the proposed technique is compared with that of a previously developed version of the method of moments. The problem considered in the paper is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in devices of ultrasonic visualization, in the analysis of optical signals in range-measuring systems and of radar signals, as well as in many other scientific and applied tasks that are adequately described by the Rice statistical model.
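    For orientation, a sketch of the classical even-moments estimator for the Rice distribution (the paper's combined maximum-likelihood-plus-moments technique is not reproduced here); it relies on the standard identities E[r^2] = nu^2 + 2*sigma^2 and E[r^4] = nu^4 + 8*nu^2*sigma^2 + 8*sigma^4, which give nu^4 = 2*m2^2 - m4 and sigma^2 = (m2 - nu^2)/2:

```python
# Method-of-moments estimation of Rician parameters from simulated
# amplitude samples, using the even sample moments m2 and m4.
import numpy as np

rng = np.random.default_rng(3)
nu, sigma, n = 2.0, 1.0, 10_000   # true signal, noise, sample size
r = np.abs(nu + sigma * rng.normal(size=n) + 1j * sigma * rng.normal(size=n))

m2, m4 = np.mean(r**2), np.mean(r**4)
nu_hat = max(2 * m2**2 - m4, 0.0) ** 0.25   # clip guards against sampling noise
sigma2_hat = (m2 - nu_hat**2) / 2

print("nu  estimate:", nu_hat)
print("sigma estimate:", np.sqrt(sigma2_hat))
```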

  6. Grebenkin I.V., Alekseenko A.E., Gaivoronskiy N.A., Ignatov M.G., Kazennov A.M., Kozakov D.V., Kulagin A.P., Kholodov Y.A.
    Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

    The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today, there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds as input an estimate from the Potts model, taken from statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice. Within the framework of the proposed method, the model is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the basic amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to evaluate a new MHC + peptide pair, supplementing the input data of the neural network with this value. This approach, combined with the ensemble construction technique, allows for improved prediction accuracy in terms of the positive predictive value (PPV) metric, compared to the baseline model.
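    A minimal sketch of the Potts scoring step: the energy of a peptide under given fields and couplings, which is the quantity appended to the neural-network input. The parameters below are random placeholders; in the paper they come from solving the inverse problem on confirmed binders:

```python
# Scoring a peptide with a 20-state Potts model:
#     E(s) = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j).
# Fields h and couplings J are hypothetical random placeholders.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino-acid states
L = 9                          # peptide length (hypothetical)
rng = np.random.default_rng(4)
h = rng.normal(scale=0.1, size=(L, 20))           # single-site fields
J = rng.normal(scale=0.01, size=(L, L, 20, 20))   # pairwise couplings

def potts_energy(peptide):
    idx = [AA.index(a) for a in peptide]
    e = sum(h[i, s] for i, s in enumerate(idx))
    e += sum(J[i, j, idx[i], idx[j]] for i in range(L) for j in range(i + 1, L))
    return e

print("Potts score:", potts_energy("SIINFEKLV"))   # example 9-mer
```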

  7. Kashchenko N.M., Ishanov S.A., Zubkov E.V.
    Numerical model of transport in problems of instabilities of the Earth’s low-latitude ionosphere using a two-dimensional monotonized Z-scheme
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1011-1023

    The aim of the work is to study a monotone finite-difference scheme of second-order accuracy, created on the basis of a generalization of the one-dimensional Z-scheme. The study was carried out for model transport equations of an incompressible medium. The paper describes a two-dimensional generalization of the Z-scheme with nonlinear correction, using, instead of fluxes, oblique differences containing values from different time layers. The monotonicity of the resulting nonlinear scheme is verified numerically for two types of limiter functions, both for smooth and for nonsmooth solutions, and numerical estimates of the order of accuracy of the constructed scheme are obtained.

    The constructed scheme is absolutely stable, but it loses the property of monotonicity when the Courant step is exceeded. A distinctive feature of the proposed finite-difference scheme is the minimality of its stencil. The constructed numerical scheme is intended for models of plasma instabilities of various scales in the low-latitude ionospheric plasma of the Earth. One of the real problems in whose solution such equations arise is the numerical simulation of highly nonstationary medium-scale processes in the earth's ionosphere under conditions of the Rayleigh–Taylor instability and of smaller-scale plasma structures, whose generation mechanisms are instabilities of other types, leading to the phenomenon of F-scattering (spread F). Since transport processes in the ionospheric plasma are controlled by the magnetic field, the plasma incompressibility condition is assumed to hold in the direction transverse to the magnetic field.
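    The Z-scheme itself is not specified in the abstract; as a stand-in, here is a minimal sketch of a monotone second-order transport step for u_t + a*u_x = 0 using an upwind flux with minmod correction (a standard flux-limited scheme, not the authors' scheme):

```python
# Second-order upwind advection with a minmod-limited slope: monotone
# for Courant numbers |c| <= 1, illustrating the monotonicity property
# discussed above on a discontinuous profile.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step(u, a, dx, dt):
    c = a * dt / dx                       # Courant number, must satisfy |c| <= 1
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope
    flux = a * (u + 0.5 * (1 - c) * du)   # second-order upwind flux (a > 0)
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0, 1, 200, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # discontinuous initial profile
for _ in range(100):
    u = step(u, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]))
print("min/max after transport:", u.min(), u.max())   # monotone: stays in [0, 1]
```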

  8. Koltsov Y.V., Boboshko E.V.
    Comparative analysis of optimization methods for electrical energy losses interval evaluation problem
    Computer Research and Modeling, 2013, v. 5, no. 2, pp. 231-239

    This article is dedicated to a comparative analysis of optimization methods for the interval estimation of technical losses of electrical energy in distribution networks of 6–20 kV voltage. The interval evaluation problem is represented as a multi-dimensional constrained minimization/maximization problem with an implicit target function. A number of numerical optimization methods of first and zero order are examined with the aim of determining the most suitable one for the problem of interest. The chosen algorithm is BOBYQA, in which the target function is replaced by its quadratic approximation in some trust region.
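    A sketch of the interval evaluation as two bound-constrained derivative-free optimizations (minimize and maximize the target over the feasible box), assuming the third-party Py-BOBYQA package; losses() is a hypothetical stand-in for the implicit target function:

```python
# Interval evaluation of losses via two BOBYQA runs: the lower bound
# minimizes losses(x), the upper bound minimizes -losses(x).
import numpy as np
import pybobyqa   # third-party BOBYQA implementation (assumption)

def losses(x):
    # hypothetical placeholder for technical losses computed from a network model
    return 1.0 + 0.1 * np.sum((x - 0.5) ** 2) - 0.05 * x[0] * x[1]

lower, upper = np.zeros(4), np.ones(4)   # box of uncertain input parameters
x0 = 0.5 * (lower + upper)

lo = pybobyqa.solve(losses, x0, bounds=(lower, upper))
hi = pybobyqa.solve(lambda x: -losses(x), x0, bounds=(lower, upper))
print("loss interval: [%.4f, %.4f]" % (lo.f, -hi.f))
```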

  9. Tumanyan A.G., Bartsev S.I.
    Simple behavioral model of imprint formation
    Computer Research and Modeling, 2014, v. 6, no. 5, pp. 793-802

    The formation of adequate behavioral patterns in an unknown environment is carried out through exploratory behavior. At the same time, the rapid formation of an acceptable pattern is preferable to a long refinement of a perfect pattern through repeated replays of the learning situation. In extreme situations, the phenomenon of imprinting is observed: instant imprinting of a behavior pattern that ensures the survival of the individual. In this paper we propose a hypothesis and an imprint model in which a virtual robot's neural network, trained on a single successful pattern, demonstrates effective functioning. The realism of the model is estimated by checking the stability of the playback of the behavior pattern against perturbations of the situation in which the imprint was formed.
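    As a loose illustration of one-shot imprinting (not the authors' robot model), a Hopfield-style network that stores a single pattern by a Hebbian imprint and is then tested for stable playback under perturbation:

```python
# One-shot Hebbian imprint of a single pattern into a Hopfield-style
# network, followed by a recall test from a perturbed state.
import numpy as np

rng = np.random.default_rng(5)
pattern = rng.choice([-1, 1], size=64)         # one successful behavior pattern
W = np.outer(pattern, pattern).astype(float)   # one-shot Hebbian imprint
np.fill_diagonal(W, 0.0)

noisy = pattern.copy()
flip = rng.choice(64, size=10, replace=False)  # perturb the situation
noisy[flip] *= -1

state = noisy
for _ in range(5):                             # synchronous recall dynamics
    state = np.where(W @ state >= 0, 1, -1)
print("pattern recovered:", np.array_equal(state, pattern))
```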

  10. Varshavskiy A.E.
    A model for analyzing income inequality based on a finite functional sequence (adequacy and application problems)
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 675-689

    The paper considers the adequacy of a model developed earlier by the author for the analysis of income inequality, based on the empirically confirmed hypothesis that the incomes of the 20% population groups relative to the income of the richest group can be represented as a finite functional sequence, each member of which depends on a single parameter, a specially defined indicator of inequality. It is shown that, in addition to the existing methods of inequality analysis, the model makes it possible to estimate with analytical expressions the income shares of 20%, 10%, and smaller groups of the population for different levels of inequality, to identify how they change with the growth of inequality, to estimate the level of inequality for known ratios between the incomes of different groups of the population, etc.

    The paper provides a more detailed confirmation of the adequacy of the proposed model in comparison with the previously obtained results of statistical analysis of empirical data on the distribution of income between the 20% and 10% population groups. The confirmation is based on the analysis of certain ratios between the values of quintiles and deciles according to the proposed model. These ratios were verified using a data set for a large number of countries, and the estimates obtained confirm the sufficiently high accuracy of the model.

    Data are presented that confirm the possibility of using the model to analyze the dependence of the income distribution by population groups on the level of inequality, as well as to estimate the inequality indicator for given income ratios between different groups: when the income of the richest 20% equals the income of the poorest 60%, the income of the middle-class 40%, or the income of the remaining 80% of the population; when the income of the richest 10% equals the income of the poorest 40%, 50%, or 60%, or the income of various middle-class groups; and also for the cases when the distribution of income obeys harmonic proportions and when the quintiles and deciles corresponding to the middle class reach a maximum. It is shown that the income shares of the richest middle-class groups are relatively stable and reach a maximum at certain levels of inequality.

    The results obtained with the help of the model can be used to set benchmarks for a policy of gradually increasing the level of progressive taxation in order to move toward the level of inequality typical of countries with socially oriented economies.
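    The paper's functional sequence is not given in the abstract; as a loudly hypothetical stand-in that shares its one-parameter structure, quintile incomes below are modeled as a geometric sequence in a single inequality parameter k:

```python
# Quintile income shares from a one-parameter geometric sequence
# q_i ∝ k**i (hypothetical form, not the author's model): larger k
# means higher inequality.
import numpy as np

def quintile_shares(k):
    w = np.array([k**i for i in range(5)], dtype=float)   # poorest to richest
    return w / w.sum()                                    # shares of total income

for k in (1.5, 2.0, 3.0):
    s = quintile_shares(k)
    print(f"k={k}: shares={np.round(s, 3)}, richest/poorest={s[-1]/s[0]:.1f}")
```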
