Search results for 'probability':
Articles found: 70
  1. Krasnov F.V., Smaznevich I.S., Baskakova E.N.
    Bibliographic link prediction using contrast resampling technique
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1317-1336

    The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique whose innovation is accounting for the context of the link, taking into account the fragment boundaries, which most strongly affect the probability that a bibliographic link is present in the fragment. The training set was formed from automatically labeled samples: fragments of three sentences with the class labels «without link» and «with link» that satisfy the contrast requirement, i.e. samples of different classes are distanced from each other in the source text. The feature space was built automatically from term occurrence statistics and was expanded with additional features: entities (names, numbers, quotes, and abbreviations) recognized in the text.
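
    A minimal Python sketch of the contrast resampling idea described above (three-sentence fragments labeled by link presence, with opposite-class samples kept apart in the source text); the link-detection regex, the gap threshold, and the function name are illustrative assumptions, not the authors' implementation.

```python
import re

def contrast_resample(sentences, link_pattern=r"\[\d+\]", window=3, min_gap=5):
    """Label three-sentence fragments as 'with link' / 'without link',
    keeping opposite-class samples distanced in the source text.
    The regex for detecting links and the gap size are illustrative."""
    has_link = [bool(re.search(link_pattern, s)) for s in sentences]
    positives, negatives = [], []
    for i in range(0, len(sentences) - window + 1, window):
        fragment = " ".join(sentences[i:i + window])
        if any(has_link[i:i + window]):
            positives.append((i, fragment))
        else:
            negatives.append((i, fragment))
    # Contrast requirement: keep only negatives far from any positive fragment.
    pos_starts = [i for i, _ in positives]
    contrasted_negatives = [
        (i, frag) for i, frag in negatives
        if all(abs(i - j) >= min_gap for j in pos_starts)
    ]
    return ([frag for _, frag in positives],
            [frag for _, frag in contrasted_negatives])
```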

    A series of experiments was carried out on the archives of the scientific journals «Law enforcement review» (273 articles) and «Journal Infectology» (684 articles). Classification was performed with the Nearest Neighbors, RBF SVM, Random Forest, and Multilayer Perceptron models, with optimal hyperparameters selected for each classifier.

    The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%), which is, however, not as fast as the linear one, which also showed high accuracy with contrast resampling (91–94%). These values are superior to those reported for NER and Sentiment Analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online.

  2. Salikhova T.Y., Pushin D.M., Guria G.T.
    Investigation of shear-induced platelet activation in arteriovenous fistulas for haemodialysis
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 703-721

    Numerical modeling of shear-induced platelet activation in haemodialysis arteriovenous fistulas was carried out in this work. The goal was to investigate the mechanisms of threshold shear-induced platelet activation in fistulas. For shear-induced platelet activation to take place, the shear stress accumulated by platelets along their trajectories in the blood flow has to exceed a definite threshold value. The threshold value of cumulative shear stress was assumed to depend on the multimer size of von Willebrand factor macromolecules, which act as hydrodynamic sensors for platelets. The effect of arteriovenous fistula parameters, such as the anastomotic angle, blood flow rate, and the multimer size of von Willebrand factor macromolecules, on platelet activation risk was studied. Parametric diagrams were constructed that make it possible to distinguish the parameter regions corresponding to the presence or absence of shear-induced platelet activation. Scaling relations that approximate the critical curves on the parametric diagrams were obtained. Analysis showed that the threshold fistula flow rate is higher for an obtuse anastomotic angle than for an acute one. This means that a fistula with an obtuse angle can be used in a wider flow rate range without risk of platelet activation. In addition, a study of different anastomosis configurations of arteriovenous fistulas showed that the “end of vein to end of artery” configuration is among the safest. For all the investigated anastomosis configurations, the critical curves on the parametric diagrams were monotonically decreasing functions of von Willebrand factor multimer size. It was shown that the fistula flow rate should have a significant impact on the probability of thrombus formation initiation, while the direction of flow through the distal artery did not affect platelet activation. The obtained results made it possible to determine the safest fistula configurations with respect to the triggering of thrombus formation. The authors believe that the results of the work may be of interest to doctors performing surgical operations to create arteriovenous fistulas for haemodialysis. In the final section of the work, possible clinical applications of the results obtained by means of mathematical modeling are discussed.
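
    The threshold criterion described above can be written compactly; the following LaTeX fragment is an illustrative sketch of that criterion, with the notation (H, tau, N_vWF) assumed here rather than taken from the paper.

```latex
% Cumulative shear stress accumulated by a platelet along its trajectory
% r(s) in the flow (illustrative notation): activation occurs once the
% integral of the shear stress \tau exceeds a threshold that depends on
% the von Willebrand factor multimer size N_{\mathrm{vWF}}.
\[
  H(t) = \int_{0}^{t} \tau\bigl(\mathbf{r}(s)\bigr)\, ds,
  \qquad
  \text{activation if } H(t) \ge H_{\mathrm{crit}}\bigl(N_{\mathrm{vWF}}\bigr).
\]
```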

  3. Mitin N.A., Orlov Y.N.
    Statistical analysis of bigrams of specialized texts
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254

    The method of stochastic matrix spectrum analysis is used to build an indicator that allows one to determine the subject of scientific texts without using keywords. This matrix is a matrix of conditional probabilities of bigrams, built on the statistics of alphabet characters in the text without spaces, numbers, and punctuation marks. Scientific texts are classified according to the mutual arrangement of the invariant subspaces of the matrix of conditional probabilities of pairs of letter combinations. The separation indicator is the value of the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter, which is the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of a given radius in the complex plane. The tendency of the integral to infinity indicates that the integration contour is approaching an eigenvalue of the matrix. The paper presents the typical distribution of the specialty identification indicator. For the statistical analysis, dissertations in the 19 main specialties were analyzed, without taking into account the classification within a specialty, with 20 texts per specialty. It was found that the empirical distributions of the cosine of the angle for the mathematical and humanities specialties do not have a common domain, so they can be formally separated by the value of this indicator without errors. Although the corpus of texts was not particularly large, in the case of an arbitrary selection of dissertations an identification error at the level of 2% seems to be a very good result compared with methods based on semantic analysis. It was also found that it is possible to make a text pattern for each of the specialties in the form of a reference bigram matrix, in whose vicinity, in the norm of summable functions, it is possible to accurately identify the topic of a written scientific work without using keywords. The proposed method can be used as a comparative indicator of the greater or lesser rigor of a scientific text or as an indicator of the text's compliance with a certain scientific level.
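
    A minimal Python sketch of the bigram-based indicator described above: build the matrix of conditional bigram probabilities and take the cosine of the angle between the right and left eigenvectors for the extreme eigenvalues. The direct eigendecomposition below stands in for the resolvent-integral algorithm used in the paper and is only an illustration.

```python
import numpy as np
from collections import Counter

def bigram_indicator(text, alphabet):
    """Cosine of the angle between the right eigenvector (max eigenvalue)
    and the left eigenvector (min eigenvalue) of the bigram
    conditional-probability matrix. Illustrative sketch only."""
    chars = [c for c in text.lower() if c in alphabet]
    idx = {c: i for i, c in enumerate(alphabet)}
    counts = Counter(zip(chars, chars[1:]))
    n = len(alphabet)
    P = np.zeros((n, n))
    for (a, b), c in counts.items():
        P[idx[a], idx[b]] = c
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)  # P(b | a)
    w_r, v_r = np.linalg.eig(P)      # right eigenvectors of P
    w_l, v_l = np.linalg.eig(P.T)    # right eigenvectors of P^T = left eigenvectors of P
    # Real parts are used for ordering; complex residue is dropped in this sketch.
    r = np.real(v_r[:, np.argmax(np.real(w_r))])   # right eigenvector, max eigenvalue
    l = np.real(v_l[:, np.argmin(np.real(w_l))])   # left eigenvector, min eigenvalue
    return float(r @ l / (np.linalg.norm(r) * np.linalg.norm(l)))
```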

  4. Shumov V.V.
    Mathematical models of combat and military operations
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 907-920

    Modeling the fight against terrorist, pirate, and robbery acts at sea is an urgent scientific task due to the prevalence of acts of force and the insufficient number of works on this issue. The actions of pirates and terrorists are diverse. Using a base ship, they can attack ships up to 450–500 miles from the coast. Having chosen a target, they pursue it and use weapons to board the ship. Actions to free a ship captured by pirates or terrorists include: blocking the ship, predicting where the pirates might be on the ship, penetrating it (from board to board, by air, or from under water), and clearing the ship's premises. An analysis of the specialized literature on the actions of pirates and terrorists showed that an act of force (and the actions to neutralize it) consists of two stages: first, blocking the vessel, which consists in forcing it to stop, and second, neutralizing the team (terrorist group, pirates), including boarding the vessel and clearing it. The stages of the cycle are matched by indicators: the probability of blocking and the probability of neutralization. The variables of the act-of-force model are the numbers of ships (vessels, boats) of the attackers and defenders, as well as the strength of the attackers' capture group and of the crew of the attacked ship. The model parameters (indicators of naval and combat superiority) were estimated by the maximum likelihood method using an international database of incidents at sea. The values of these parameters are 7.6–8.5. Such high values of the superiority parameters reflect the parties' capabilities in acts of force. An analytical method for calculating the superiority parameters is proposed and statistically substantiated. The following factors are taken into account in the model: the ability of the parties to detect the enemy, the speed and maneuverability characteristics of the vessels, the height of the vessel and the characteristics of the boarding equipment, the characteristics of weapons and protective equipment, etc. Using the Becker model and the theory of discrete choice, the probability of failure of the act of force is estimated. The significance of the obtained models for combating acts of force at sea lies in the possibility of quantitatively substantiating measures to protect a ship from pirate and terrorist attacks and deterrence measures aimed at preventing attacks (the presence of armed guards on board the ship, assistance from warships and helicopters).
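
    One way to write the two-stage structure of the act-of-force model as a formula is shown below; this product decomposition and the notation are illustrative assumptions, not the exact specification used in the paper.

```latex
% Illustrative two-stage decomposition: the probability that an act of
% force is defeated is the product of the blocking and neutralization
% probabilities, each depending on the sides' numbers of vessels (m, n)
% and personnel (x, y) and on the estimated superiority parameters.
\[
  P_{\text{defeat}} \;=\;
  P_{\text{block}}(m, n)\,\cdot\,P_{\text{neutralize}}(x, y).
\]
```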

  5. Voronina M.Y., Orlov Y.N.
    Identification of the author of the text by segmentation method
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1199-1210

    The paper describes a method for recognizing the authors of literary texts by the proximity of the fragments into which a text is divided to the author's standard. The standard is the empirical frequency distribution of letter combinations, built on a training sample that includes expertly selected, reliably attributed works of the author. A set of standards of different authors forms a library, within which the problem of identifying the author of an unknown text is solved. The proximity between texts is understood in the sense of the L1 norm of the difference between the frequency vectors of letter combinations, which are constructed for each fragment and for the text as a whole. The unknown text is attributed to the author whose standard is most often chosen as the closest for the set of fragments into which the text is divided. The fragment length is optimized based on the principle of the maximum difference in distances from the fragments to the standards in the «friend–foe» recognition problem. The method was tested on a corpus of domestic and foreign (translated) authors: 1783 texts by 100 authors with a total volume of about 700 million characters were collected. In order to exclude bias in the selection of authors, authors whose surnames begin with the same letter were considered. In particular, for the letter L the identification error was 12%. Along with fairly high accuracy, this method has another important property: it allows one to estimate the probability that the standard of the author of the text in question is missing from the library. This probability can be estimated from the statistics of the nearest standards for small fragments of the text. The paper also examines statistical digital portraits of writers: these are joint empirical distributions of the probability that a certain proportion of the text is identified at a given level of trust. The practical importance of these statistics is that the supports of the corresponding distributions for one's own and other authors' standards practically do not overlap, which makes it possible to recognize the reference distribution of letter combinations at a high level of confidence.
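
    A minimal Python sketch of the voting scheme described above, assuming the fragment frequency vectors and the author standards have already been computed; the names and data structures are illustrative.

```python
import numpy as np

def identify_author(fragments, standards):
    """Attribute a text to the author whose standard (letter-combination
    frequency vector) is most often the L1-nearest one across the text's
    fragments. `fragments` is a list of frequency vectors (one per
    fragment); `standards` maps author name -> reference vector.
    Illustrative sketch of the segmentation method described above."""
    votes = {}
    for f in fragments:
        nearest = min(standards, key=lambda a: np.abs(f - standards[a]).sum())
        votes[nearest] = votes.get(nearest, 0) + 1
    return max(votes, key=votes.get)
```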

  6. Kirilyuk I.L., Sen'ko O.V.
    Assessing the validity of clustering of panel data by Monte Carlo methods (using data on the Russian regional economy as an example)
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513

    The paper considers a method for studying panel data based on agglomerative hierarchical clustering: grouping objects, based on the similarities and differences in their features, into a hierarchy of clusters nested into each other. We used two alternative methods for calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance using data for all considered years. Three alternative methods for calculating the distances between clusters were compared: in the first case, the distance between clusters is taken as the distance between their nearest elements; in the second, as the average over pairs of elements; in the third, as the distance between the most distant elements. The efficiency of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The method of assessing the statistical reliability of the cluster structure consisted in comparing the quality of clustering on a real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features, and lengths of the time series. Generation was done from a fixed probability distribution; simulation methods imitating Gaussian white noise and a random walk were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression, but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample turned out to be greater than the 95% quantile of its values on the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the sample of real data. For these data, only the Silhouette index shows reliable clustering at the level p < 0.05. Calculations also showed that the index values for the real data are generally closer to the values for random walks than for white noise, but have significant differences from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually. Visually, one can distinguish clusters of points located close to each other, which are also distinguished as clusters by the applied hierarchical clustering algorithm.
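
    A minimal Python sketch of the validation scheme described above: the Silhouette index on the real sample is compared with its 95% quantile on random-walk surrogates of the same shape. The helper names and the number of surrogates are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def clustering_is_reliable(X, n_clusters, n_surrogates=200, seed=0):
    """Compare the Silhouette index of agglomerative clustering on the
    real panel data X (rows = objects, columns = time-ordered features)
    with its 95% quantile on random-walk surrogates of the same shape.
    Illustrative sketch of the Monte Carlo validation described above."""
    rng = np.random.default_rng(seed)

    def silhouette(data):
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(data)
        return silhouette_score(data, labels)

    real_score = silhouette(X)
    surrogate_scores = [
        silhouette(np.cumsum(rng.standard_normal(X.shape), axis=1))  # random walk over time
        for _ in range(n_surrogates)
    ]
    return real_score > np.quantile(surrogate_scores, 0.95), real_score
```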

  7. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
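
    A sketch of what a one-pass topical embedding could look like, with theta derived from phi via Bayes' rule under a uniform topic prior; this is one common way to express such a constraint, and the exact equation used in the paper may differ.

```python
import numpy as np

def one_pass_embedding(word_counts, phi, topic_prior=None):
    """Compute a topical embedding theta_d for a document in a single pass
    over its words, given the word-in-topic matrix phi (W x T).
    `word_counts` maps word index -> count in the document.
    Illustrative sketch; not necessarily the paper's exact constraint."""
    W, T = phi.shape
    p_t = np.full(T, 1.0 / T) if topic_prior is None else topic_prior
    # p(t | w) is proportional to phi[w, t] * p(t)
    p_t_given_w = phi * p_t
    p_t_given_w /= np.maximum(p_t_given_w.sum(axis=1, keepdims=True), 1e-12)
    theta = np.zeros(T)
    for w, n_dw in word_counts.items():      # single pass over document words
        theta += n_dw * p_t_given_w[w]
    return theta / max(theta.sum(), 1e-12)   # normalized topic distribution
```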

  8. Maslakov A.S.
    Describing processes in photosynthetic reaction center ensembles using a Monte Carlo kinetic model
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1207-1221

    The photosynthetic apparatus of a plant cell consists of multiple photosynthetic electron transport chains (ETC). Each ETC is capable of capturing and utilizing light quanta, which drive electron transport along the chain. Light assimilation efficiency depends on the plant's current physiological state. The energy of those quanta that cannot be utilized either dissipates into heat or is emitted as fluorescence. Under high-light conditions fluorescence gradually rises to a maximum level. The curve describing that rise is called the fluorescence rise (FR). It has a complex shape, and that shape changes depending on the state of the photosynthetic apparatus. This makes it possible to investigate that state using only non-invasive measurement of the FR.

    When measuring fluorescence in experimental conditions, we get a response from millions of photosynthetic units at a time. In order to reproduce the probabilistic nature of the processes in a photosynthetic ETC, we created a Monte Carlo model of this chain. This model describes an ETC as a sequence of electron carriers in a thylakoid membrane, connected with each other. Those carriers have certain probabilities of capturing light photons, transferring excited states, or reducing each other, depending on the current ETC state. The events that take place in each of the model photosynthetic ETCs are registered, accumulated, and used to create the fluorescence rise and the accumulation kinetics of electron carrier redox states. This paper describes the model structure, the principles of its operation, and the relations between certain model parameters and the shape of the resulting kinetic curves. The model curves include the photosystem II reaction center fluorescence rise and the photosystem I reaction center redox state change kinetics under different conditions.
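
    A toy Python sketch of the kind of ensemble simulation described above: each chain is a short list of carrier redox states, captured quanta are either passed down the chain or counted as fluorescence, and per-step emission is accumulated into a fluorescence rise curve. All probabilities, the chain length, and the re-oxidation step are illustrative assumptions, not the model of the paper.

```python
import random

def simulate_etc_ensemble(n_chains=5000, n_carriers=4, n_steps=300,
                          p_photon=0.02, p_transfer=0.6, p_reoxidation=0.01,
                          seed=0):
    """Toy Monte Carlo ensemble of electron transport chains.
    A chain is a list of carrier states (0 = oxidized, 1 = reduced).
    Captured quanta are passed to the first oxidized carrier or counted
    as fluorescence; carriers slowly re-oxidize. Illustrative only."""
    random.seed(seed)
    chains = [[0] * n_carriers for _ in range(n_chains)]
    fluorescence = []
    for _ in range(n_steps):
        emitted = 0
        for chain in chains:
            if random.random() < p_photon:                 # photon captured
                acceptor = next((i for i, s in enumerate(chain) if s == 0), None)
                if acceptor is not None and random.random() < p_transfer:
                    chain[acceptor] = 1                    # electron passed downstream
                else:
                    emitted += 1                           # excitation lost as fluorescence
            for i, s in enumerate(chain):                  # slow re-oxidation of carriers
                if s == 1 and random.random() < p_reoxidation:
                    chain[i] = 0
        fluorescence.append(emitted / n_chains)
    return fluorescence                                    # fluorescence rise curve
```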

  9. Melnikova I.V., Bovkun V.A.
    Connection between discrete financial models and continuous models with Wiener and Poisson processes
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 781-795

    The paper is devoted to the study of relationships between discrete and continuous models of financial processes and their probabilistic characteristics. First, a connection is established between the price processes of stocks, hedging portfolios, and options in models driven by binomial perturbations and in their limit models driven by perturbations of Brownian motion type. Second, analogies are established between the coefficients of stochastic equations with various random processes, continuous and jump-like, and the coefficients of the corresponding deterministic equations for their probabilistic characteristics. Stating the results on these connections and analogies required an adequate presentation of preliminary information and results from financial mathematics, as well as descriptions of the related objects of stochastic analysis. In this paper, partially new and known results are presented in a form accessible to those who are not specialists in financial mathematics and stochastic analysis, but for whom these results are important from the point of view of applications. Specifically, the following sections are presented.

    • In one- and n-period binomial models, a unified approach is proposed to defining, on the probability space, a risk-neutral measure under which the discounted option price becomes a martingale. The resulting martingale formula for the option price is suitable for numerical simulation (a worked one-period example is sketched after this list). In the following sections, the risk-neutral measure approach is applied to study financial processes in continuous-time models.

    • In continuous time, models of stock prices, hedging portfolios, and options are considered in the form of stochastic equations with the Ito integral with respect to Brownian motion and with respect to a compensated Poisson process. The study of the properties of these processes in this section is based on the Ito formula, one of the central objects of stochastic analysis. Special attention is given to the methods of its application.

    • The famous Black–Scholes formula is presented. It gives a solution to the partial differential equation for the function $v(t, x)$ which, upon substituting $x = S(t)$, where $S(t)$ is the stock price at time $t$, yields the price of the option in the model with continuous perturbation by Brownian motion.

    • An analogue of the Black–Scholes formula is suggested for the model with a jump-like perturbation by the Poisson process. The derivation of this formula is based on the technique of risk-neutral measures and the independence lemma.
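
    As an illustration of the risk-neutral pricing mentioned in the first item above, here is a sketch of the standard one-period binomial formulas; the notation (u, d, r, V_u, V_d) is the conventional textbook one and is used only for illustration.

```latex
% One-period binomial model (illustrative, standard notation): the stock
% moves from S_0 to u S_0 or d S_0, the riskless rate per period is r,
% with d < 1 + r < u. The risk-neutral probability \tilde{p} makes the
% discounted stock price a martingale, and the option price is the
% discounted risk-neutral expectation of its payoff:
\[
  \tilde{p} = \frac{(1 + r) - d}{u - d},
  \qquad
  V_0 = \frac{1}{1 + r}\left(\tilde{p}\,V_u + (1 - \tilde{p})\,V_d\right).
\]
```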

  10. Kholodkov K.I., Aleshin I.M.
    Exact calculation of a posteriori probability distribution with distributed computing systems
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 539-542

    We present the development and deployment of a specific grid infrastructure and web application. The purpose of the infrastructure and web application is to solve particular geophysical problems that require heavy computational resources. Here we give a technology overview and describe the connector framework internals. The connector framework links problem-specific routines with the middleware in such a way that the application developer does not have to be aware of any particular grid software. That is, a web application built with this framework acts as an interface between the user's web browser and the Grid's very own middleware.

    Our distributed computing system is built around the Gridway metascheduler. The metascheduler is connected to the TORQUE resource managers of virtual compute nodes that run atop a compute cluster using virtualization technology. Such an approach offers several notable features that are unavailable to bare-metal compute clusters.

    The first application we have integrated with our framework is the determination of seismic anisotropy parameters by inversion of SKS and converted phases. We used a probabilistic approach to the inverse problem based on the a posteriori probability distribution function (APDF) formalism. To get the exact solution of the problem, we have to compute the values of a multidimensional function. In our implementation we used brute-force APDF calculation on a rectangular grid across the parameter space.
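
    A minimal serial Python sketch of the brute-force APDF evaluation on a rectangular grid described above; the function signature and the flat prior are illustrative assumptions, and the actual computation in the paper is distributed over the grid infrastructure.

```python
import itertools
import numpy as np

def brute_force_apdf(log_likelihood, param_ranges, n_points=50):
    """Evaluate the a posteriori probability distribution function on a
    rectangular grid across parameter space by exhaustive enumeration.
    `log_likelihood` maps a parameter tuple to a log-likelihood value;
    a flat prior is assumed here for illustration."""
    axes = [np.linspace(lo, hi, n_points) for lo, hi in param_ranges]
    log_p = np.empty(tuple(len(a) for a in axes))
    for idx in itertools.product(*(range(len(a)) for a in axes)):
        point = tuple(axes[d][i] for d, i in enumerate(idx))
        log_p[idx] = log_likelihood(point)
    log_p -= log_p.max()                    # numerical stability
    apdf = np.exp(log_p)
    return apdf / apdf.sum(), axes          # normalized posterior on the grid
```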

    The result of the computation is stored in a relational DBMS and then presented in a familiar human-readable form. The application provides several instruments for analyzing the function's shape from the computational results: the maximum value distribution, 2D cross-sections of the APDF, 2D marginals, and a few other tools. During the tests we ran the application against both synthetic and observed data.
