Search results for 'predictability':
Articles found: 81
  1. Shmidt Y.D., Ivashina N.V., Ozerova G.P.
    Modelling interregional migration flows by the cellular automata
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1467-1483

    The article investigates the development and justification of the most adequate tools for forecasting the size and structure of interregional migration flows. Migration processes have a significant impact on the size and demographic structure of the population of territories and on the state and balance of regional and local labor markets.

    Analyzing migration processes and assessing their impact requires an economic-mathematical tool capable of modelling migration processes and flows for different areas with the desired precision. The paper reviews current methods and approaches to modelling migration processes, including their advantages and disadvantages. Many of these methods require mass aggregated statistical data that are not always available and do not characterize migrants' behavior at the local level, where the decision to move to a new place of residence is made. This significantly limits the applicability of the corresponding modelling techniques and the accuracy of forecasts of the magnitude and structure of migration flows.

    A cellular automata model of interregional migration flows, which integrates a model of household migration behavior under bounded rationality into a general model of the area's migration flow, was developed and tested on data for the Primorye Territory. To implement the household migration behavior model under bounded rationality, an integral attractiveness index of the regions with economic, social and ecological components is proposed.

    To evaluate its prognostic capacity, the developed model was compared with existing cellular automata models used to predict interregional migration flows, using out-of-sample prediction; the comparison showed a statistically significant advantage of the proposed model. The model yields forecasts and quantitative characteristics of an area's migration flows based on households' real migration behaviour at the local level, taking into account their living conditions and behavioural motives.
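
    As a toy illustration only (the paper's actual model, index weights and thresholds are not reproduced here), the update step of such a cellular automaton might look as follows; every region name, weight and threshold below is an assumption made for the example.

    ```python
    # A minimal sketch (not the authors' code) of a cellular-automaton step for
    # migration modelling: each cell is a household that compares the integral
    # attractiveness of its home region with that of other regions and moves
    # when the gap exceeds its (bounded-rationality) noisy threshold.
    import random

    REGIONS = ["home", "A", "B"]

    def attractiveness(econ: float, social: float, eco: float,
                       w=(0.5, 0.3, 0.2)) -> float:
        """Integral attractiveness index as a weighted sum of components."""
        return w[0] * econ + w[1] * social + w[2] * eco

    def step(households, index):
        """One automaton step: each household may relocate to the best region."""
        for h in households:
            best = max(REGIONS, key=lambda r: index[r])
            # Bounded rationality: move only if the gain beats a noisy threshold.
            if index[best] - index[h["region"]] > h["threshold"] * random.random():
                h["region"] = best
        return households

    index = {"home": attractiveness(0.4, 0.6, 0.5),
             "A": attractiveness(0.8, 0.5, 0.4),
             "B": attractiveness(0.3, 0.7, 0.9)}
    households = [{"region": "home", "threshold": 0.3} for _ in range(1000)]
    step(households, index)
    outflow = sum(h["region"] != "home" for h in households)
    print(f"simulated out-migration: {outflow} of 1000 households")
    ```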

  2. Krasnov F.V., Smaznevich I.S., Baskakova E.N.
    Bibliographic link prediction using contrast resampling technique
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1317-1336

    The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique, whose novelty lies in considering the context of the link while taking into account the boundaries of the fragment, which most strongly affect the probability that a bibliographic link is present in it. The training set was formed from automatically labeled samples: fragments of three sentences with class labels «without link» and «with link» that satisfy the contrast requirement, i.e. samples of different classes are distanced from each other in the source text. The feature space was built automatically from term occurrence statistics and expanded with additional features: entities (names, numbers, quotes and abbreviations) recognized in the text.

    A series of experiments was carried out on the archives of the scientific journals «Law enforcement review» (273 articles) and «Journal Infectology» (684 articles). Classification was carried out with Nearest Neighbors, RBF SVM, Random Forest and Multilayer Perceptron models, with optimal hyperparameters selected for each classifier.

    The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%); the linear classifier is faster and also achieves high accuracy with contrast resampling (91–94%). These values are superior to those reported for NER and Sentiment Analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online.
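
    A rough sketch of how the contrast requirement could be enforced when forming the training set is given below; the fragment size of three sentences follows the abstract, while the link marker pattern, the gap threshold and all function names are illustrative assumptions.

    ```python
    # A sketch (assumptions, not the paper's code) of contrast resampling: the
    # text is split into 3-sentence fragments, labelled by whether they contain
    # a bibliographic link marker like "[12]", and a fragment is kept only if
    # the nearest fragment of the opposite class is sufficiently far away.
    import re

    def fragments(sentences, size=3):
        """Non-overlapping fragments of `size` sentences with start indices."""
        return [(i, " ".join(sentences[i:i + size]))
                for i in range(0, len(sentences) - size + 1, size)]

    def label(text: str) -> int:
        return 1 if re.search(r"\[\d+\]", text) else 0  # 1 = "with link"

    def contrast_resample(sentences, min_gap=6):
        """Keep fragments whose nearest opposite-class fragment is distant."""
        frs = [(i, t, label(t)) for i, t in fragments(sentences)]
        keep = []
        for i, t, y in frs:
            opposite = [j for j, _, y2 in frs if y2 != y]
            if not opposite or min(abs(i - j) for j in opposite) >= min_gap:
                keep.append((t, y))
        return keep
    ```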

  3. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Gorbachev R.A.
    Development of and research on machine learning algorithms for solving the classification problem in Twitter publications
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 185-195

    Posts on social networks can predict the movement of the financial market and in some cases even determine its direction. Analysis of Twitter posts contributes to predicting cryptocurrency prices. The community has a specific vocabulary: posts use slang expressions and abbreviations, which makes it difficult to vectorize text data, so preprocessing methods such as Stanza lemmatization and regular expressions are considered. This paper describes simple machine learning models that can work despite problems such as lack of data and a short prediction horizon.

    In the binary classification problem, a word is treated as an element of a binary vector representing a data unit. The base words are determined from a frequency analysis of word mentions. The labeling is based on Binance candlesticks with variable parameters for a more accurate description of the price trend. The paper introduces metrics that reflect the distribution of words depending on whether they belong to the positive or negative class. To solve the classification problem, we used a dense neural network with parameters selected by Keras Tuner, logistic regression, a random forest classifier, a naive Bayes classifier capable of working with a small sample (which is very important for our task), and the k-nearest neighbors method.

    The constructed models were compared by the accuracy of the predicted labels. The investigation showed that the best approach is to use models that predict the price movements of a single coin; our model deals with posts mentioning the LUNA project, which no longer exists. This approach to binary classification of text data is widely used to predict the price of an asset and the trend of its movement, and is often applied in automated trading.
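
    A minimal sketch of the general setup (binary word vectors plus a comparison of the mentioned classifier families, here via scikit-learn) is shown below; the posts and labels are placeholder data, not the LUNA dataset, and the pipeline omits the Stanza lemmatization and Binance-based labeling described above.

    ```python
    # Illustrative sketch (not the authors' pipeline): tweets as binary vectors
    # over a vocabulary, labelled by subsequent price movement, compared across
    # simple classifiers. The data here is fabricated placeholder text.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    posts = ["luna to the moon", "sell luna now", "bullish on luna",
             "luna crash incoming", "buy the dip", "dump everything"] * 10
    y = np.array([1, 0, 1, 0, 1, 0] * 10)        # 1 = price up, 0 = price down

    X = CountVectorizer(binary=True).fit_transform(posts)  # binary word vectors

    for clf in (LogisticRegression(max_iter=1000), BernoulliNB(),
                KNeighborsClassifier(), RandomForestClassifier()):
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{type(clf).__name__}: accuracy {acc:.2f}")
    ```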

  4. Solovyov S.A., Rose J., Dzyublyk I.V., Trokhimenko E.P.
    Predictive models of efficacy and public health impact of vaccination with rotavirus vaccine in Ukraine
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 407-421

    The paper presents the results of computational and theoretical studies assessing the efficacy and public health impact of vaccination with a rotavirus vaccine in Ukraine. The required indicators are the genotype-specific vaccine efficacy and the numbers of prevented severe illnesses, hospitalizations, outpatient visits and deaths. The results were obtained with a decision tree built on a Markov model, using mathematical modelling with computer simulation. The results showed a significant positive effect of vaccination compared to no vaccination, given a high level of vaccine coverage in Ukraine.
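
    For illustration, a toy Markov cohort cycle of the kind such decision-analytic models are built on might look like this; all transition probabilities, efficacy and coverage values below are made-up placeholders, not the study's calibrated inputs.

    ```python
    # A toy Markov cohort sketch: a yearly cycle moves children between "well",
    # "severe rotavirus illness" and "dead"; vaccination scales the infection
    # probability by (1 - efficacy * coverage). All numbers are illustrative.
    def markov_cohort(cohort: float, p_ill: float, p_die_ill: float,
                      efficacy: float = 0.0, coverage: float = 0.0,
                      cycles: int = 5):
        p_ill_eff = p_ill * (1.0 - efficacy * coverage)
        well, ill_total, deaths = cohort, 0.0, 0.0
        for _ in range(cycles):
            new_ill = well * p_ill_eff
            deaths += new_ill * p_die_ill
            ill_total += new_ill
            well -= new_ill * p_die_ill   # survivors return to "well"
        return ill_total, deaths

    base = markov_cohort(100_000, p_ill=0.04, p_die_ill=0.001)
    vacc = markov_cohort(100_000, p_ill=0.04, p_die_ill=0.001,
                         efficacy=0.9, coverage=0.75)
    print(f"severe cases averted: {base[0] - vacc[0]:.0f}, "
          f"deaths averted: {base[1] - vacc[1]:.1f}")
    ```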

  5. Shumov V.V.
    Mathematical models of combat and military operations
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 907-920

    Modeling the fight against terrorist, pirate and robbery acts at sea is an urgent scientific task owing to the prevalence of such acts of force and the small number of works on this issue. The actions of pirates and terrorists are diverse: using a base ship, they can attack vessels up to 450–500 miles from the coast; having chosen a target, they pursue it and use weapons to board it. Actions to free a ship captured by pirates or terrorists include blocking the ship, predicting where the pirates might be on board, penetrating it (from board to board, by air or from under water) and clearing the ship's premises.

    An analysis of the specialized literature on the actions of pirates and terrorists showed that an act of force (and the actions to neutralize it) consists of two stages: first, blocking the vessel, i.e. forcing it to stop, and second, neutralizing the team (terrorist group, pirates), including boarding the vessel and clearing it. These stages are matched by two indicators: the probability of blocking and the probability of neutralization. The variables of the force-act model are the numbers of vessels (ships, boats) of the attackers and defenders, as well as the strength of the attackers' capture group and of the crew of the attacked ship. The model parameters (indicators of naval and combat superiority) were estimated by the maximum likelihood method using an international database of incidents at sea; their values are 7.6–8.5. Such high values of the superiority parameters reflect the parties' capabilities in acts of force. An analytical method for calculating the superiority parameters is proposed and statistically substantiated.

    The model takes into account the parties' ability to detect the enemy, the speed and maneuverability of the vessels, the height of the vessel's side and the characteristics of the boarding equipment, the characteristics of weapons and protective equipment, etc. Using the Becker model and discrete choice theory, the probability of failure of an act of force is estimated. The significance of the obtained models for combating acts of force at sea lies in the possibility of quantitatively substantiating measures to protect a ship from pirate and terrorist attacks and deterrence measures aimed at preventing attacks (armed guards on board, assistance from warships and helicopters).
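
    One plausible reading of the estimation step (not the paper's exact likelihood) is a contest-success formulation whose superiority parameter is fitted by maximum likelihood over incident outcomes; the functional form, the toy incident records and the parameter name below are assumptions for illustration.

    ```python
    # Hedged sketch: assume the probability that defenders repel the attack
    # follows a contest-success form p = k*d / (k*d + a), where d and a are the
    # defenders' and attackers' strengths and k is a superiority parameter,
    # and fit k by maximum likelihood over (toy, invented) incident outcomes.
    import numpy as np
    from scipy.optimize import minimize_scalar

    # toy incident records: (defender strength, attacker strength, outcome 1/0)
    incidents = [(20, 6, 1), (15, 8, 1), (4, 10, 0), (12, 5, 1), (3, 9, 0)]

    def neg_log_lik(k: float) -> float:
        ll = 0.0
        for d, a, won in incidents:
            p = k * d / (k * d + a)
            ll += won * np.log(p) + (1 - won) * np.log(1 - p)
        return -ll

    res = minimize_scalar(neg_log_lik, bounds=(0.01, 50), method="bounded")
    print(f"estimated superiority parameter k = {res.x:.2f}")
    ```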

  6. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping sets of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. Embeddings make programs easier to compare because they avoid direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation, i.e. injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta between the current instruction and the previous one; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion distinguishing programs with higher or lower cache miss ratio is given, based on the programs' embeddings in 2D space. The instrumentation compiler pass developed in this work is described, namely how it generates and injects artificial instructions into the IR within the used memory model. The software pipeline implementing the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, i.e. sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion. The process of generating such synthetic tests is also considered, and the spread of the performance metric within a test's program set is proposed as a quality measure to be improved by exploring further test generators.
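
    The downstream analysis stage can be sketched as follows; the embeddings here are random placeholders standing in for IR2Vec vectors of instrumented LLVM IR, and the miss ratios stand in for perf stat measurements, so only the t-SNE reduction and the correlation check mirror the described pipeline.

    ```python
    # Simplified sketch of the analysis stage under stated assumptions: reduce
    # program embeddings to 2D with t-SNE and correlate the distance to the
    # worst program's embedding with the D1 cache-miss ratio.
    import numpy as np
    from sklearn.manifold import TSNE
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    emb = rng.normal(size=(50, 300))             # placeholder for IR2Vec vectors
    miss_ratio = rng.uniform(0.0, 0.3, size=50)  # placeholder perf-stat metric

    pts = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(emb)
    worst = pts[np.argmax(miss_ratio)]           # embedding of the worst program
    dist = np.linalg.norm(pts - worst, axis=1)

    r, p = pearsonr(dist, miss_ratio)
    print(f"correlation(distance to worst, miss ratio) = {r:.2f} (p = {p:.3f})")
    # In the paper this correlation is negative: farther from the worst program
    # in embedding space means a lower cache-miss ratio.
    ```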

  7. Vassilevski Y.V., Simakov S.S., Gamilov T.M., Salamatova V.Yu., Dobroserdova T.K., Kopytov G.V., Bogdanov O.N., Danilov A.A., Dergachev M.A., Dobrovolskii D.D., Kosukhin O.N., Larina E.V., Meleshkina A.V., Mychka E.Yu., Kharin V.Yu., Chesnokova K.V., Shipilov A.A.
    Personalization of mathematical models in cardiology: obstacles and perspectives
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930

    Most biomechanical problems of interest to clinicians can be solved only with personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, from clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional requirements: clinicians require model validation on clinical cases and the speed and automation of the entire computational chain, from processing input data to obtaining the result. Limits on the simulation time, set by the time allowed for making a medical decision (on the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models, or of machine learning tools.

    Personalization of models requires patient-specific parameters, a personalized geometry of the computational domain and generation of a computational mesh. Model parameters are estimated by direct measurements, by solving inverse problems, or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient's characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort because of manual or semi-automatic operations. The development of automated methods for setting personalized boundary conditions and for segmentation of medical images with subsequent construction of a computational mesh is the key to the widespread use of mathematical modelling in clinical practice.

    The aim of this work is to review our solutions for the personalization of mathematical models within three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.

  8. Abramov V.S., Petrov M.N.
    Application of the Dynamic Mode Decomposition in search of unstable modes in laminar-turbulent transition problem
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1069-1090

    Laminar-turbulent transition is the subject of active research aimed at improving the economic efficiency of air vehicles, because drag increases in a turbulent boundary layer, leading to higher fuel consumption. One direction of such research is the search for efficient methods for locating the transition in space. Using information about the location of the laminar-turbulent transition when designing an aircraft, engineers can predict its performance and profitability at the initial stages of the project. Traditionally, the $e^N$ method, a well-known approach in industry, is applied to find the coordinates of the laminar-turbulent transition. Despite its widespread use, however, this method has significant drawbacks: it relies on the parallel-flow assumption, which limits its application scenarios, and it requires computationally expensive calculations over a wide range of frequencies and wave numbers. Alternatively, the flow can be analyzed using Dynamic Mode Decomposition, which analyzes flow disturbances directly from flow data. Since Dynamic Mode Decomposition is a dimensionality reduction method, the number of computations can be dramatically reduced. Furthermore, its derivation makes no parallel-flow assumption, which widens the applicability of the whole method.

    The presented study proposes an approach to finding the location of the laminar-turbulent transition using the Dynamic Mode Decomposition method. The boundary layer region is divided into sets of subregions, for each of which the transition point is calculated independently using Dynamic Mode Decomposition for flow analysis, after which the results are averaged to produce the final answer. The approach is validated by predicting the laminar-turbulent transition in subsonic and supersonic flows over a 2D flat plate with zero pressure gradient. The results demonstrate the fundamental applicability and high accuracy of the described method over a wide range of conditions. The study focuses on comparison with the $e^N$ method and demonstrates the advantages of the proposed approach: using Dynamic Mode Decomposition leads to significantly faster execution owing to less intensive computations, while the accuracy is comparable to that of the solution obtained with the $e^N$ method. This indicates the prospects of the described approach for real-world applications.
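
    For reference, a textbook rank-truncated DMD (the standard algorithm, not necessarily the authors' implementation) applied to synthetic snapshot data illustrates how growing disturbances show up as eigenvalues outside the unit circle:

    ```python
    # Minimal textbook DMD sketch: given a snapshot matrix of flow data,
    # eigenvalues of the reduced operator with |lambda| > 1 correspond to
    # growing (unstable) disturbances, which is what the transition search
    # looks for in each boundary-layer subregion.
    import numpy as np

    def dmd_eigs(snapshots: np.ndarray, rank: int = 10) -> np.ndarray:
        """Eigenvalues of the rank-truncated DMD operator for columns x_0..x_m."""
        X, Y = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
        a_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)
        return np.linalg.eigvals(a_tilde)

    # synthetic snapshots: one slowly growing and one decaying oscillation
    t = np.linspace(0.0, 10.0, 201)
    x = np.linspace(0.0, 1.0, 64)[:, None]
    data = (np.exp(0.05 * t) * np.sin(8 * x + 2 * t)
            + np.exp(-0.30 * t) * np.sin(3 * x - t))
    lam = dmd_eigs(data, rank=6)
    print("unstable modes:", lam[np.abs(lam) > 1.0])
    ```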

  9. Kamenev G.K., Kamenev I.G.
    Multicriterial metric data analysis in human capital modelling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1223-1245

    The article describes a model of a human in the information economy and demonstrates a multicriteria optimization approach to the metric analysis of model-generated data. The traditional approach involves identifying the model from time series and then using it for prediction. However, this is not possible when some variables are not explicitly observed and only some typical ranges or population features are known, which is often the case in the social sciences and makes some models purely theoretical. To avoid this problem, we propose a method of metric data analysis (MMDA) for the identification and study of such models, based on the construction and analysis of Kolmogorov–Shannon metric nets of the general population in a multidimensional space of social characteristics. Using this method, the coefficients of the model are identified and the features of its phase trajectories are studied.

    We describe a human according to his role in information processing, considering his awareness and cognitive abilities. We construct two lifetime indices of human capital, creative (generalizing cognitive abilities) and productive (generalizing the amount of information mastered by a person), and formulate the problem of their multicriteria (two-criteria) optimization taking life expectancy into account. This approach allows us to identify and economically justify new requirements for the education system and the information environment. It is shown that a Pareto frontier exists in the optimization problem and that its type depends on the mortality rates: at high life expectancy there is one dominant solution, while at lower life expectancy there are different types of Pareto frontier. In particular, the Pareto principle applies to Russia: a significant increase in an individual's creative human capital (generalizing his cognitive abilities) is possible at the cost of a small decrease in productive human capital (generalizing awareness). It is shown that an increase in life expectancy makes the competence approach (focused on the development of cognitive abilities) optimal, while at low life expectancy the knowledge approach is preferable.
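
    The two-criteria step can be illustrated with a minimal Pareto-frontier filter; the candidate (creative, productive) pairs below are invented for the example and are not the model's identified data.

    ```python
    # Minimal sketch of the two-criteria optimization step: given (creative,
    # productive) human-capital pairs for candidate policies, keep the
    # Pareto-nondominated ones.
    def pareto_front(points):
        """Points (c, p) not dominated by any other point in both criteria."""
        front = []
        for a in points:
            if not any(b[0] >= a[0] and b[1] >= a[1] and b != a for b in points):
                front.append(a)
        return front

    candidates = [(0.9, 0.4), (0.7, 0.7), (0.4, 0.9), (0.5, 0.5), (0.3, 0.3)]
    print(pareto_front(candidates))   # -> [(0.9, 0.4), (0.7, 0.7), (0.4, 0.9)]
    ```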

  10. Timiryanova V.M., Lakman I.A., Larkin M.M.
    Retail forecasting on high-frequency depersonalized data
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1713-1734

    Technological development gives rise to data highly detailed in time and space, which expands the possibilities of analysis, allowing consumer decisions and the competitive behavior of enterprises to be considered in all their diversity, taking into account the context of the territory and the characteristics of time periods. Despite the promise of such studies, they are currently scarce in the scientific literature, owing to a range of problems whose solution is considered in this paper. The article draws attention to the complexity of analyzing depersonalized high-frequency data and to the possibility of modeling changes in consumption in time and space on their basis. The features of this new type of data are considered using real depersonalized data received from the fiscal data operator “First OFD” (JSC “Energy Systems and Communications”). It is shown that, along with the spectrum of problems inherent in high-frequency data, there are drawbacks associated with the process of data generation on the sellers' side, which requires wider use of data mining tools. A series of statistical tests was carried out on the data, including a unit root test, a test for unobserved individual effects, tests for serial correlation and for cross-sectional dependence in panels, etc. The presence of spatial autocorrelation was tested using modified Lagrange multiplier tests. The tests showed a consistent correlation and spatial dependence in the data, which makes it expedient to apply panel and spatial analysis methods to the high-frequency data accumulated by fiscal operators. The constructed models made it possible to substantiate the spatial relationship of sales growth and its dependence on the day of the week. The factor limiting the predictive ability of the constructed models, and their further elaboration through the inclusion of explanatory factors, was the lack of openly available statistics grouped with the required detail in time and space, which makes the formation of high-frequency geographically structured databases relevant.
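
    As a simplified illustration of testing for spatial dependence (Moran's I here rather than the modified Lagrange multiplier tests used in the paper), consider a toy row-standardized neighbourhood matrix; all values are invented:

    ```python
    # Simplified illustration of checking sales data for spatial dependence:
    # Moran's I close to its expected value -1/(n-1) means no spatial
    # autocorrelation; clearly positive values indicate spatial clustering.
    import numpy as np

    def morans_i(x: np.ndarray, W: np.ndarray) -> float:
        """Moran's I for values x under a row-standardized weight matrix W."""
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    # toy 4-region example: neighbours on a line, row-standardized weights
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    W /= W.sum(axis=1, keepdims=True)
    sales_growth = np.array([1.2, 1.1, 0.4, 0.3])   # spatially clustered values
    print(f"Moran's I = {morans_i(sales_growth, W):.2f}")  # positive => clustering
    ```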
