Search results for 'methods':
Articles found: 617
  1. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through a communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that communicates directly with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all the nodes are equal, there is no server node, and each agent communicates only with its immediate neighbors.

    We assume that each node locally holds its objective and can compute its value at given points, i.e., has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as computing the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
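
    As a sketch of the idea, the following Python fragment implements the random two-point gradient approximation that the smoothed-function gradient reduces to; the toy objective f, the smoothing radius tau, and the step-size schedule are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def two_point_grad(f, x, tau=1e-3, rng=np.random.default_rng()):
    """Random two-point gradient approximation of f at x.

    Matches the stochastic gradient of the smoothed function
    f_tau(x) = E_e[f(x + tau * e)] over directions e on the unit sphere.
    """
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                        # uniform direction on the sphere
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

# Toy usage: zero-order descent on a nonsmooth Lipschitz function.
f = lambda x: np.abs(x).sum()
x = np.ones(5)
for k in range(1000):
    x -= 0.05 / np.sqrt(k + 1) * two_point_grad(f, x)
```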

  2. Fedorov V.A., Khruschev S.S., Kovalenko I.B.
    Analysis of Brownian and molecular dynamics trajectories to reveal the mechanisms of protein-protein interactions
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 723-738

    The paper proposes a set of fairly simple analysis algorithms that can be used to study a wide range of protein-protein interactions. In this work, we jointly use Brownian and molecular dynamics methods to describe the formation of a complex of the plastocyanin and cytochrome f proteins in higher plants. In the diffusion-collision complex, two clusters of structures were revealed; the transition between them preserves the position of the centers of mass of the molecules and is accompanied only by a 134-degree rotation of plastocyanin. The two clusters of collisional-complex structures differ in that in the first cluster only the “lower” region of plastocyanin contacts the positively charged region near the small domain of cytochrome f, while in the second cluster both negatively charged regions do. The “upper” negatively charged region of plastocyanin in the first cluster is in contact with the lysine residue K122. When the final complex is formed, the plastocyanin molecule rotates by 69 degrees around an axis passing through both areas of electrostatic contact. During this rotation, water is displaced from the regions located near the cofactors of the molecules and formed by hydrophobic amino acid residues. This leads to the appearance of hydrophobic contacts, a decrease in the distance between the cofactors to less than 1.5 nm, and further stabilization of the complex in a position suitable for electron transfer. Characteristics such as contact matrices, rotation axes during transitions between states, and plots of the number of contacts over the course of the simulation make it possible to determine the key amino acid residues involved in the formation of the complex and to reveal the physicochemical mechanisms underlying this process.
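
    A hedged sketch of one of the mentioned characteristics, a contact matrix computed from trajectory snapshots; the C-alpha representation and the 0.35 nm cutoff are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def contact_matrix(coords_a, coords_b, cutoff=0.35):
    """Boolean residue-residue contact matrix for one snapshot.

    coords_a: (Na, 3) residue coordinates of protein A (e.g. C-alpha), nm.
    coords_b: (Nb, 3) residue coordinates of protein B, nm.
    """
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    return np.linalg.norm(diff, axis=-1) < cutoff

def contact_frequency(traj_a, traj_b, cutoff=0.35):
    """Contact occupancy averaged over a trajectory of T snapshots."""
    return np.mean(
        [contact_matrix(a, b, cutoff) for a, b in zip(traj_a, traj_b)],
        axis=0,
    )
```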

  3. Fedorov V.A., Kholina E.G., Kovalenko I.B.
    Molecular dynamics of tubulin protofilaments and the effect of taxol on their bending deformation
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 503-512

    Despite the widespread use of cancer chemotherapy drugs, the molecular mechanisms of action of many of them remain unclear. Some of these drugs, such as taxol, are known to affect the dynamics of microtubule assembly and to stop cell division in prophase-prometaphase. Recently, new spatial structures of microtubules and individual tubulin oligomers in complex with various regulatory proteins and cancer chemotherapy drugs have become available. However, knowledge of the spatial structure by itself does not provide information about the mechanism of action of drugs.

    In this work, we applied the molecular dynamics method to study the behavior of taxol-bound tubulin oligomers, using our previously developed method for analyzing the conformation of tubulin protofilaments based on the calculation of modified Euler angles. Recent structures of microtubule fragments have demonstrated that tubulin protofilaments bend not in the radial direction, as many researchers assume, but at an angle of approximately 45° to the radial direction. In the presence of taxol, however, the bending direction shifts closer to the radial one. There was no significant difference between the mean bending and torsion angles of the studied tubulin structures when bound to the natural regulatory ligands guanosine triphosphate and guanosine diphosphate. The intra-dimer bending angle was found to be greater than the inter-dimer bending angle in all analyzed trajectories, indicating that the bulk of the deformation energy is stored within the dimeric tubulin subunits rather than between them. Analysis of the latest-generation tubulin structures indicated that the presence of taxol in the pocket of the tubulin beta subunit allosterically reduces the torsional rigidity of the tubulin oligomer, which could explain the underlying mechanism of taxol's effect on microtubule dynamics. Indeed, a decrease in torsional rigidity makes it possible to maintain lateral connections between protofilaments and therefore should stabilize microtubules, which is what is observed in experiments. The results shed light on the phenomenon of dynamic instability of microtubules and bring us closer to understanding the molecular mechanisms of cell division.
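
    A minimal sketch of reading bending and torsion angles off the rotation relating two consecutive subunit frames; the column-vector frame convention and the 'zyx' Euler sequence are assumptions — the paper uses its own modified Euler angles.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def bending_torsion(frame_i, frame_j):
    """Euler angles of the rotation taking subunit frame i to frame j.

    frame_i, frame_j: (3, 3) matrices whose columns are orthonormal axes
    attached to consecutive tubulin monomers (e.g. from inertia tensors).
    Returns three angles in degrees; in this sketch the first is read
    as torsion and the remaining two as bending components.
    """
    rel = Rotation.from_matrix(frame_j @ frame_i.T)  # relative rotation
    return rel.as_euler("zyx", degrees=True)
```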

  4. Kapitan V.U., Nefedev K.V.
    Calculation of magnetic properties of nanostructured films by means of parallel Monte Carlo
    Computer Research and Modeling, 2013, v. 5, no. 4, pp. 693-703

    Images of the surface topography of ultrathin magnetic films were used for Monte Carlo simulations in the framework of the ferromagnetic Ising model to study the hysteresis and thermal properties of nanomaterials. A super-scalable parallel algorithm was used for the high-performance computation of the equilibrium configuration. The change in the distribution of spins on the surface during magnetization reversal and the dynamics of the nanodomain structure of thin magnetic films under a changing external magnetic field were investigated.
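
    A minimal single-spin-flip Metropolis sketch for a 2D Ising film in an external field; the lattice size, temperature and serial sweep are assumptions — the paper's algorithm is a super-scalable parallel version driven by measured film topography.

```python
import numpy as np

def metropolis_sweep(spins, beta, h, rng):
    """One Metropolis sweep of the 2D Ising model with field h (J = 1)."""
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * (nb + h)        # cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

# Hysteresis sketch: ramp the field and record the magnetization.
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
for h in np.linspace(-2.0, 2.0, 41):
    metropolis_sweep(spins, beta=0.6, h=h, rng=rng)
    print(f"h = {h:+.2f}  m = {spins.mean():+.3f}")
```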

  5. Danilov G.V., Zhukov V.V., Kulikov A.S., Makashova E.S., Mitin N.A., Orlov Y.N.
    Comparative analysis of statistical methods of scientific publications classification in medicine
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933

    In this paper, various methods of machine classification of scientific texts by thematic sections are compared using the example of publications in specialized medical journals published by Springer. The corpus of texts was studied across five sections: pharmacology/toxicology, cardiology, immunology, neurology and oncology. We considered both classification methods based on the analysis of abstracts and keywords and classification methods based on processing the actual texts. Bayesian classification, support vector machines, and libraries of reference letter combinations were applied. It is shown that the most accurate classification method is based on creating a library of standard letter trigrams corresponding to texts of a given subject. For this corpus, the Bayesian method gives an error of about 20%, the support vector machine an error of about 10%, and the proximity of the three-letter distribution of a text to the standard of a theme an error of about 5%, which allows these methods to be ranked for the use of artificial intelligence in classifying texts by subject specialty. Importantly, the support vector method provides the same accuracy when analyzing abstracts as when analyzing full texts, which matters for reducing the number of operations on large text corpora.
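
    A hedged sketch of the letter-trigram approach: build a trigram frequency standard per topic and assign a text to the nearest standard; the L1 distance and the normalization are assumptions.

```python
from collections import Counter

def trigram_profile(text):
    """Normalized frequency profile of character trigrams."""
    text = text.lower()
    counts = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def classify(text, standards):
    """Assign the text to the topic whose trigram standard is closest."""
    p = trigram_profile(text)
    def l1(q):
        return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))
    return min(standards, key=lambda topic: l1(standards[topic]))

# standards = {"cardiology": trigram_profile(cardiology_corpus), ...}
```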

  6. Kirilyuk I.L., Sen'ko O.V.
    Assessing the validity of clustering of panel data by Monte Carlo methods (using the data of the Russian regional economy as an example)
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513

    The paper considers a method for studying panel data based on agglomerative hierarchical clustering — grouping objects into a hierarchy of nested clusters based on the similarities and differences in their features. We used two alternative ways of calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance computed using the data for all years considered. Three alternative methods for calculating the distances between clusters were compared: in the first, the distance between two clusters is taken to be the distance between their nearest elements; in the second, the average over pairs of elements; in the third, the distance between the most distant elements. The efficiency of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The method of assessing the statistical reliability of the cluster structure consisted in comparing the quality of clustering on a real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features and time-series lengths. Generation was made from a fixed probability distribution; simulation methods imitating Gaussian white noise and random walk were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample turned out to be greater than the 95% quantile of its value for the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the real data sample. For these data, only the Silhouette index shows reliable clustering at the level p < 0.05. Calculations also showed that the index values for the real data are generally closer to the values for random walks than for white noise, yet differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually; one can distinguish clusters of points located close to each other, which are also identified as clusters by the applied hierarchical clustering algorithm.
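
    A hedged sketch of the reliability test: the Silhouette index of an agglomerative clustering of the real panel is compared with its 95% quantile on random-walk surrogates of the same shape; the linkage method and trial count are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

def silhouette_for_k(X, k, method="average"):
    """Silhouette index of an agglomerative clustering into k clusters."""
    labels = fcluster(linkage(X, method=method), t=k, criterion="maxclust")
    return silhouette_score(X, labels)

def surrogate_quantile(n_objects, n_years, k, n_trials=500, q=0.95, seed=0):
    """q-quantile of the index on random-walk panels of the same shape."""
    rng = np.random.default_rng(seed)
    scores = [
        silhouette_for_k(
            rng.standard_normal((n_objects, n_years)).cumsum(axis=1), k
        )
        for _ in range(n_trials)
    ]
    return np.quantile(scores, q)

# Clustering of a real panel X (objects x years) into k clusters is deemed
# reliable if silhouette_for_k(X, k) > surrogate_quantile(*X.shape, k).
```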

  7. Varshavsky L.E.
    Study of the dynamics of the structure of oligopolistic markets with non-market counteractions of the parties
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 219-233

    The article examines the impact of non-market actions by participants in oligopolistic markets on market structure. The following actions of one of the market participants, aimed at increasing its market share, are analyzed: 1) price manipulation; 2) blocking the investments of stronger oligopolists; 3) destruction of competitors' products and capacities. Linear dynamic games with a quadratic criterion are used to model the strategies of the oligopolists. Their use is expedient because they both describe the evolution of markets adequately and admit two mutually complementary approaches to determining the oligopolists' strategies: 1) representing the models in state space and solving generalized Riccati equations; 2) applying methods of operational calculus (in the frequency domain), which offers the clarity needed for economic analysis.
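
    As an illustration of the state-space approach, the following sketch solves the one-player special case, where the generalized Riccati equations reduce to the standard discrete-time algebraic Riccati equation; all matrices are illustrative placeholders, not the paper's model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# One-player special case: dynamics x_{t+1} = A x_t + B u_t with cost
# sum(x'Qx + u'Ru); the game-theoretic generalized Riccati equations
# reduce to this standard DARE.
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])                        # illustrative market dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                 # stabilizing Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback u = -K x
print(K)
```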

    The article shows the equivalence of the state-space and frequency-domain approaches to solving the problem with maximin criteria of the oligopolists. The results of the calculations are examined for a duopoly with indicators close to one of the duopolies in the world microelectronics industry. The second duopolist is less cost-efficient, though more agile. Its goal is to increase its market share by applying the non-market methods listed above.

    Calculations carried out with the help of the game model made it possible to construct dependencies characterizing the relative increase, over a 25-year period, in the production volumes of the weak and strong duopolists under price manipulation. The constructed dependencies show that, for the accepted linear demand function, a price increase leads to a very small increase in the production of the strong duopolist but, simultaneously, to a significant increase in this indicator for the weak one.

    Calculations carried out with other variants of the model show that blocking investments, as well as destroying the products of the strong duopolist, leads to a more significant increase in the output of marketable products for the weak duopolist than to a decrease in this indicator for the strong one.

  8. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in applied problems. The development of efficient computational procedures for such problems in high-dimensional spaces is very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they provide weak rate guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that admit relatively optimistic convergence-rate results. For example, one option for an additional assumption is the sharp-minimum condition proposed in the late 1960s by B. T. Polyak. When the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it turned out to be possible to propose a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp-minimum condition itself look rather restrictive. In this regard, we propose a generalized sharp-minimum condition, somewhat similar to the inexact oracle recently proposed by Devolder – Glineur – Nesterov. The proposed approach extends the applicability of subgradient methods with the Polyak step-size to situations with inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show that the proposed approach applies to strongly convex non-smooth problems and make an experimental comparison with the known optimal subgradient method for this class of problems. We also obtained results on the applicability of the proposed technique to some types of problems with relaxed convexity: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. Finally, we study a generalization of the described technique to the situation where a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, we find conditions under which, in practice, the projection of the iterative sequence onto the feasible set of the problem can be avoided.
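
    A minimal sketch of the classical subgradient method with the Polyak step-size that the paper's relaxations build on; the toy objective and iteration budget are assumptions.

```python
import numpy as np

def polyak_subgradient(subgrad, f, x0, f_min, n_iters=1000):
    """Subgradient method with the Polyak step-size.

    Requires (an estimate of) the minimal value f_min; under a sharp
    minimum this step rule yields a linear rate in the argument.
    """
    x = x0.copy()
    for _ in range(n_iters):
        g = subgrad(x)
        step = (f(x) - f_min) / (np.dot(g, g) + 1e-12)
        x = x - step * g
    return x

# Toy usage: f(x) = ||x||_1 has a sharp minimum at 0 with f_min = 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)
print(polyak_subgradient(subgrad, f, np.full(5, 3.0), f_min=0.0))
```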

  9. Golubev V.I., Shevchenko A.V., Petrov I.B.
    Raising convergence order of grid-characteristic schemes for 2D linear elasticity problems using operator splitting
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 899-910

    The grid-characteristic method is successfully used for solving hyperbolic systems of partial differential equations (for example, transport, acoustic and elastic equations). It makes it possible to correctly construct algorithms on contact boundaries and on the boundaries of the integration domain, to take into account, to a certain extent, the physics of the problem (propagation of discontinuities along characteristic curves), and it has the monotonicity property, which is important for the problems considered. For two-dimensional and three-dimensional problems, the method employs a coordinate splitting technique, which enables the original equations to be solved through a sequence of one-dimensional problems. It is common to use one-dimensional schemes of up to third order with simple splitting techniques, which do not allow the convergence order in time to be higher than two. Significant advances have been made in operator splitting theory, and the existence of higher-order schemes has been proved. Their peculiarity is the need to perform a step backward in time, which gives rise to difficulties, for example, for parabolic problems.

    In this work, coordinate splitting schemes of the third and fourth order were used for the two-dimensional hyperbolic problem of linear elasticity. This made it possible to increase the final convergence order of the computational algorithm. The paper empirically estimates the convergence in the L1 and L∞ norms using analytical solutions of the system with a sufficient degree of smoothness. To obtain objective results, we considered longitudinal and transverse plane waves propagating both along the diagonal of the computational cell and away from it. Numerical experiments demonstrated the improved accuracy and convergence order of the constructed schemes. These improvements come at the cost of a three- or fourfold increase in computational time (for the third and fourth order, respectively) and no additional memory requirements. The proposed improvement of the computational algorithm preserves the simplicity of its parallel implementation based on spatial decomposition of the computational grid.
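
    A sketch of one such higher-order composition: the triple-jump (Yoshida) scheme builds a fourth-order step from any symmetric second-order splitting step, and its negative middle coefficient is exactly the backward-in-time substep mentioned above; the abstract step S is a placeholder, not the paper's scheme.

```python
# Triple-jump (Yoshida) composition: a symmetric second-order splitting
# step S(state, dt) is composed into a fourth-order step.
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = 1.0 - 2.0 * W1                    # negative: backward-in-time substep

def step4(S, state, dt):
    """Fourth-order step built from a symmetric second-order step S."""
    state = S(state, W1 * dt)
    state = S(state, W0 * dt)          # the step in the opposite direction
    return S(state, W1 * dt)
```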

  10. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping the set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. Using embeddings makes programs easier to compare by avoiding direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation — injection of artificial instructions in an instrumentation compiler pass depending on the load-offset delta between the current and previous instruction; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; it is based on the embeddings of programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests: sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, confirming the heuristic criterion. The process of generating such synthetic tests is also described. Moreover, the variance of the performance metric across the programs in such a test is proposed as a quantity to be improved by exploring more test generators.
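
    A hedged sketch of the final stage of such a pipeline: 2D t-SNE over precomputed IR embeddings and the sign of the correlation used in the heuristic criterion; the .npy file names are hypothetical placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE

# Assume each program's instrumented IR was already mapped to a vector
# (the paper uses IR2Vec); both .npy files are hypothetical placeholders.
embeddings = np.load("ir2vec_embeddings.npy")    # shape (N, D)
cache_miss = np.load("d1_miss_ratio.npy")        # D1 miss ratio per program

xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)

# Heuristic criterion: distance to the worst program's 2D embedding
# should correlate negatively with the D1 cache miss ratio.
worst = xy[np.argmax(cache_miss)]
dist = np.linalg.norm(xy - worst, axis=1)
print(np.corrcoef(dist, cache_miss)[0, 1])       # expected to be negative
```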
