Search results for 'construction':
Articles found: 233
  1. Gankevich I.G., Degtyarev A.B.
    Efficient processing and classification of wave energy spectrum data with a distributed pipeline
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 517-520

    Processing of large amounts of data often consists of several steps, e.g. pre- and post-processing stages, which are executed sequentially with data written to disk after each step. However, when the pre-processing stage differs from task to task, a more efficient way of processing the data is to construct a pipeline that streams data from one stage to another. In a more general case, some processing stages can be factored into several parallel subordinate stages, thus forming a distributed pipeline in which each stage can have multiple inputs and multiple outputs. Such a processing pattern emerges in the problem of classifying wave energy spectra based on analytic approximations, which can extract different wave systems and their parameters (e.g. wave system type, mean wave direction) from a spectrum. The distributed pipeline approach achieves good performance compared to conventional “sequential-stage” processing.

    Views (last year): 3. Citations: 2 (RSCI).
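
    The pipeline pattern described in this abstract can be sketched with Python's standard multiprocessing queues. A minimal sketch follows, assuming placeholder stage functions and a toy "classification" rule; it is not the authors' implementation.

```python
# Minimal sketch of a streaming pipeline with one stage factored into
# parallel workers; stage logic is a placeholder, not the paper's code.
from multiprocessing import Process, Queue

def preprocess(inp: Queue, out: Queue, n_sinks: int) -> None:
    # Stage 1: stream transformed records downstream instead of
    # writing them to disk between stages.
    while (item := inp.get()) is not None:
        out.put(item * 2.0)            # placeholder transformation
    for _ in range(n_sinks):
        out.put(None)                  # one end-marker per downstream worker

def classify(inp: Queue, out: Queue) -> None:
    # Stage 2: factored into several identical parallel workers.
    while (item := inp.get()) is not None:
        out.put(("wind sea" if item > 4 else "swell", item))
    out.put(None)

if __name__ == "__main__":
    q1, q2, q3 = Queue(), Queue(), Queue()
    n_workers = 2
    Process(target=preprocess, args=(q1, q2, n_workers)).start()
    for _ in range(n_workers):
        Process(target=classify, args=(q2, q3)).start()
    for x in [1.0, 2.0, 3.0, 4.0]:     # stand-in for spectrum records
        q1.put(x)
    q1.put(None)
    done, results = 0, []
    while done < n_workers:
        item = q3.get()
        if item is None:
            done += 1
        else:
            results.append(item)
    print(results)
```
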
  2. Andreeva A.A., Anand M., Lobanov A.I., Nikolaev A.V., Panteleev M.A.
    Using extended ODE systems to investigate the mathematical model of the blood coagulation
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 931-951

    Many properties of the solutions of systems of ordinary differential equations are determined by the properties of the equations in variations. An ODE system that includes both the original nonlinear system and the equations in variations will be called an extended system below. When studying the properties of the Cauchy problem for systems of ordinary differential equations, the transition to extended systems allows one to study many subtle properties of solutions. For example, it makes it possible to increase the order of approximation of numerical methods, provides approaches to constructing a sensitivity function without numerical differentiation procedures, and allows methods of increased convergence order to be used for solving the inverse problem. The authors used the Broyden method, which belongs to the class of quasi-Newton methods. The Rosenbrock method with complex coefficients was used to solve the stiff systems of ordinary differential equations; in our case, it is equivalent to a second-order approximation method for the extended system.

    As an example of the proposed approach, several related mathematical models of the blood coagulation process were considered. Based on the analysis of the numerical results, the conclusion was drawn that it is necessary to include a description of the factor XI positive feedback loop in the model equations. Estimates of some reaction constants based on the numerical solution of the inverse problem are given.

    The effect of factor V release on platelet activation was also considered. The modification of the mathematical model made it possible to achieve quantitative agreement between the dynamics of thrombin production and experimental data for an artificial system. Based on the sensitivity analysis, the hypothesis was tested that the lipid membrane composition (the number of sites for the various clotting factors, except for thrombin sites) has no influence on the dynamics of the process.
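
    The extended-system idea can be illustrated on a toy scalar equation (the decay model and parameter values below are assumptions for the sketch, not the paper's coagulation system): integrating the ODE together with its equation in variations with respect to a parameter yields the sensitivity function without any numerical differentiation.

```python
# Sketch: extend y' = -k*y with its equation in variations
# s' = (df/dy)*s + df/dk = -k*s - y, so s = dy/dk is obtained from the
# same integration. Toy model, not the paper's coagulation system.
import numpy as np
from scipy.integrate import solve_ivp

k, y0 = 0.7, 1.0

def extended_rhs(t, z):
    y, s = z                      # original state and its sensitivity
    return [-k * y, -k * s - y]

sol = solve_ivp(extended_rhs, (0.0, 5.0), [y0, 0.0], rtol=1e-8, atol=1e-10)
t_end = sol.t[-1]
print("numeric  dy/dk:", sol.y[1, -1])
print("analytic dy/dk:", -t_end * y0 * np.exp(-k * t_end))  # exact check
```
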

  3. Dubinina M.G.
    Spatio-temporal models of ICT diffusion
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1695-1712

    The article proposes a space-time approach to modeling the diffusion of information and communication technologies based on the Fisher – Kolmogorov – Petrovsky – Piskunov equation, in which the kinetics is described by the Bass model, widely used to model the diffusion of innovations in the market. For this equation, the equilibrium positions are studied and, based on singular perturbation theory, an approximate solution is obtained in the form of a traveling wave, i.e. a solution that propagates at a constant speed while maintaining its shape in space. The wave speed shows how much the “spatial” characteristic that determines the given level of technology dissemination changes in a unit time interval. This speed is significantly higher than the speed at which propagation occurs due to diffusion alone. By constructing such an autowave solution, it becomes possible to estimate the time required for the subject of research to reach the current indicator of the leader.

    The obtained approximate solution was then applied to assess the factors affecting the rate of dissemination of information and communication technologies in the federal districts of the Russian Federation. Various socio-economic indicators were considered as “spatial” variables for the diffusion of mobile communications among the population. Growth poles in which innovation originates are usually characterized by the highest values of the “spatial” variables. For Russia, Moscow is such a growth pole; therefore, indicators of the federal districts relative to Moscow's indicators were considered as factor indicators. The best approximation to the initial data was obtained for the ratio of the share of R&D costs in GRP to Moscow's indicator, averaged over the period 2000–2009. It was found that at the initial stage of the spread of mobile communications the lag behind the capital was less than one year for the Ural Federal District; 1.4 years for the Central and Northwestern Federal Districts; less than two years for the Volga, Siberian, Southern and Far Eastern Federal Districts; and a little more than 2 years for the North Caucasian Federal District. In addition, estimates of the lag of the federal districts behind Moscow's indicators in the spread of digital technologies (intranet, extranet, etc.) used by organizations were obtained.
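
    Writing the equation (in assumed notation) as u_t = D u_xx + (p + q u)(1 − u), with Bass kinetics replacing the logistic term of the classical FKPP equation, a crude finite-difference run illustrates the constant-speed front; parameter values here are illustrative, not the paper's estimates.

```python
# Crude explicit scheme for u_t = D*u_xx + (p + q*u)*(1 - u); tracks the
# u = 0.5 level set to show a front moving at roughly constant speed.
import numpy as np

D, p, q = 1.0, 1e-4, 1.0
L, N, dt, T = 200.0, 2000, 0.002, 40.0   # dt satisfies D*dt/dx^2 < 0.5
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
u = np.where(x < 5.0, 1.0, 0.0)          # adoption seeded on the left

times, positions = [], []
for step in range(int(T / dt)):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude zero-flux boundaries
    u = u + dt * (D * lap + (p + q * u) * (1.0 - u))
    if step % 1000 == 0:
        times.append(step * dt)
        positions.append(x[np.argmax(u < 0.5)])  # front location

speed = np.polyfit(times[2:], positions[2:], 1)[0]
print("estimated front speed:", round(speed, 3))  # ~2*sqrt(D*q) for small p
```
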

  4. Kamenev G.K., Kamenev I.G.
    Multicriterial metric data analysis in human capital modelling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1223-1245

    The article describes a model of a human in the information economy and demonstrates a multicriteria optimization approach to the metric analysis of model-generated data. The traditional approach involves identifying the model from time series and using it for prediction. However, this is not possible when some variables are not explicitly observed and only some typical ranges or population features are known, which is often the case in the social sciences, making some models purely theoretical. To avoid this problem, we propose a method of metric data analysis (MMDA) for the identification and study of such models, based on the construction and analysis of Kolmogorov – Shannon metric nets of the general population in a multidimensional space of social characteristics. Using this method, the coefficients of the model are identified and the features of its phase trajectories are studied. We describe a human according to his role in information processing, considering his awareness and cognitive abilities. We construct two lifetime indices of human capital: creative (generalizing cognitive abilities) and productive (generalizing the amount of information mastered by a person), and formulate the problem of their multicriteria (two-criteria) optimization taking life expectancy into account. This approach allows us to identify and economically justify new requirements for the education system and the information environment of human existence. It is shown that a Pareto frontier exists in the optimization problem, and its type depends on the mortality rates: at high life expectancy there is one dominant solution, while at lower life expectancy there are different types of Pareto frontier. In particular, the Pareto principle applies to Russia: a significant increase in an individual's creative human capital (generalizing cognitive abilities) is possible at the cost of a small decrease in productive human capital (generalizing awareness). It is shown that an increase in life expectancy makes the competence approach (focused on the development of cognitive abilities) optimal, while at low life expectancy the knowledge approach is preferable.
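
    The two-criteria structure can be illustrated by extracting a Pareto frontier from sampled (creative, productive) index pairs; the random sample below is a stand-in for the model-generated population, not the authors' data.

```python
# Minimal sketch: extract the Pareto frontier (maximization in both
# criteria) from sampled points; the sample stands in for model output.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((500, 2))             # columns: creative, productive index

def pareto_frontier(points: np.ndarray) -> np.ndarray:
    # Sort by the first criterion descending; keep a point only if its
    # second criterion exceeds that of every previously kept point.
    order = points[np.argsort(-points[:, 0])]
    keep, best = [], -np.inf
    for pt in order:
        if pt[1] > best:
            keep.append(pt)
            best = pt[1]
    return np.array(keep)

front = pareto_frontier(pts)
print(len(front), "non-dominated points")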

  5. The article discusses the influence of research goals on the structure of a multivariate regression model (in particular, on the procedure for reducing the dimension of the model). It is shown how bringing the specification of the multiple regression model into line with the research objectives affects the choice of modeling methods. Two schemes for constructing a model are compared: the first does not take into account the typology of the primary predictors and the nature of their influence on the response; the second includes a stage of preliminary division of the initial predictors into groups, in accordance with the objectives of the study. Using the example of analyzing the causes of burnout among creative workers, the importance of the stage of qualitative analysis and systematization of a priori selected factors is shown; this stage is implemented not by computational means but by drawing on the knowledge and experience of specialists in the subject area. The presented example of determining the specification of the regression model combines formalized mathematical and statistical procedures with the preceding stage of classifying the primary factors. The presence of this stage makes it possible to explain the scheme of managing (corrective) actions: softening the leadership style and increasing approval lead to a decrease in the manifestations of anxiety and stress, which, in turn, reduces the severity of the emotional exhaustion of team members. Pre-classification also avoids combining controlled and uncontrolled, regulatory and regulated factors in one principal component, which could worsen the interpretability of the synthesized predictors. Using a specific problem, it is shown that the selection of factor-regressors is a process that requires an individual solution. In the case under consideration, the following were used in sequence: systematization of features, correlation analysis, principal component analysis, and regression analysis. The first three methods made it possible to significantly reduce the dimension of the problem without compromising the goal for which the task was posed: significant measures of managerial influence on the team were identified, allowing the degree of emotional burnout of its members to be reduced.
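
    The second scheme (a priori grouping of predictors, dimension reduction within groups, then regression) might look as follows; the data and group assignments are synthetic placeholders, not the study's burnout dataset.

```python
# Sketch of the two-stage scheme: predictors are first split into a
# priori groups, each group is compressed by PCA, and the synthesized
# components enter the regression. Synthetic data; groups are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=200)

groups = {"managerial": [0, 1, 2],     # controllable factors
          "contextual": [3, 4, 5]}     # uncontrollable factors

components = []
for name, cols in groups.items():
    pc = PCA(n_components=1).fit_transform(X[:, cols])
    components.append(pc)              # one synthesized predictor per group
Z = np.hstack(components)

model = LinearRegression().fit(Z, y)
print("R^2 on synthesized predictors:", round(model.score(Z, y), 3))
```
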

  6. Timiryanova V.M., Lakman I.A., Larkin M.M.
    Retail forecasting on high-frequency depersonalized data
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1713-1734

    Technological development leads to the emergence of data highly detailed in time and space, which expands the possibilities of analysis, allowing consumer decisions and the competitive behavior of enterprises to be considered in all their diversity, taking into account the context of the territory and the characteristics of time periods. Despite the promise of such studies, they are currently scarce in the scientific literature. This is due to a range of problems whose solution is considered in this paper. The article draws attention to the complexity of analyzing depersonalized high-frequency data and the possibility of modeling changes in consumption in time and space on their basis. The features of this new type of data are considered using real depersonalized data received from the fiscal data operator “First OFD” (JSC “Energy Systems and Communications”). It is shown that, along with the spectrum of problems inherent in high-frequency data, there are disadvantages associated with the process of generating data on the sellers' side, which requires wider use of data mining tools. A series of statistical tests was carried out on the data under consideration, including a unit-root test, a test for unobserved individual effects, and tests for serial correlation and cross-sectional dependence in panels. The presence of spatial autocorrelation was tested using modified Lagrange multiplier tests. The tests showed consistent correlation and spatial dependence in the data, which makes it expedient to apply panel and spatial analysis methods to the high-frequency data accumulated by fiscal operators. The constructed models made it possible to substantiate the spatial relationship of sales growth and its dependence on the day of the week. The limitation on improving the predictive ability of the constructed models, and on complicating them further by including explanatory factors, was the lack of openly accessible statistics grouped with the required detail in time and space, which underlines the relevance of forming high-frequency, geographically structured databases.
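
    One of the checks mentioned, spatial autocorrelation, reduces to statistics such as Moran's I; a self-contained computation on a toy neighbor structure (values and weights assumed, not the First OFD data) looks like this:

```python
# Minimal Moran's I computation on a toy 1-D adjacency structure;
# sales values and weights are illustrative only.
import numpy as np

x = np.array([2.0, 2.5, 3.0, 8.0, 8.5, 9.0])   # e.g. sales growth by area
n = len(x)
W = np.zeros((n, n))
for i in range(n - 1):                          # adjacent areas are neighbors
    W[i, i + 1] = W[i + 1, i] = 1.0

z = x - x.mean()                                # deviations from the mean
I = (n / W.sum()) * (z @ W @ z) / (z @ z)       # classic Moran's I formula
print("Moran's I:", round(I, 3))                # near +1: clustered values
```
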

  7. Vetrin R.L., Koberg K.
    Reinforcement learning in optimisation of financial market trading strategy parameters
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1793-1812

    High-frequency algorithmic trading is a subclass of trading focused on gaining basis-point-level profitability on sub-second time frames. Such trading strategies do not depend on most of the factors relevant to longer-term trading and require a specific approach. There have been many attempts to apply machine learning techniques to both high- and low-frequency trading. However, they still have limited application in real-world trading due to high exposure to overfitting, the need for rapid adaptation to new market regimes, and overall instability of the results. We conducted comprehensive research on combining known quantitative theory with reinforcement learning methods in order to derive a more effective and robust approach to constructing automated trading systems, in an attempt to support known algorithmic trading techniques. Using classical price behavior theories as well as modern application cases in sub-millisecond trading, we utilized reinforcement learning models to improve the quality of the algorithms. As a result, we derived a robust model that uses deep reinforcement learning to optimise the parameters of static market-making trading algorithms and is capable of online learning on live data. More specifically, we explored the system in the cryptocurrency derivatives market, which in the short term is mostly independent of external factors. Our research was implemented in a high-frequency environment, and the final models showed the capability to operate within accepted high-frequency trading time frames. We compared various combinations of deep reinforcement learning approaches with the classic algorithms and evaluated the robustness and effectiveness of the improvements for each combination.
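
    A heavily simplified version of the idea, a tabular agent choosing a static market-making spread parameter against a toy fill/PnL simulator, might look like the sketch below; the environment, reward, and parameter grid are all assumptions, and the stateless bandit update stands in for the paper's deep RL models.

```python
# Toy sketch: learning over a discrete grid of market-making half-spreads.
# The one-step PnL simulator is a placeholder, not a real exchange model.
import numpy as np

rng = np.random.default_rng(7)
spreads = np.array([0.01, 0.02, 0.05, 0.10])    # candidate half-spreads

def simulate_pnl(spread: float) -> float:
    # Wider spread: more profit per fill but fewer fills (placeholder).
    fill_prob = np.exp(-30.0 * spread)
    filled = rng.random() < fill_prob
    adverse = rng.normal(scale=0.02)            # adverse-selection noise
    return (spread - adverse) if filled else 0.0

q = np.zeros(len(spreads))                      # value estimate per action
alpha, eps = 0.05, 0.1
for step in range(20000):
    a = rng.integers(len(spreads)) if rng.random() < eps else int(np.argmax(q))
    r = simulate_pnl(spreads[a])
    q[a] += alpha * (r - q[a])                  # stateless (bandit) update
print("learned values:", q.round(4))
print("chosen half-spread:", spreads[int(np.argmax(q))])
```
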

  8. Aronov I.Z., Maksimova O.V.
    Theoretical modeling consensus building in the work of standardization technical committees in coalitions based on regular Markov chains
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1247-1256

    Decisions in social groups are often made by consensus. This applies, for example, to the examination in a technical committee for standardization (TC) before the approval of a national standard by Rosstandart. The standard is approved if and only if consensus is secured in the TC. The same approach to standards development has been adopted in almost all countries and at the regional and international levels. Previously published works of the authors were dedicated to constructing a mathematical model of the time to reach consensus in technical committees for standardization as a function of the number of TC members and their level of authoritarianism. The present study continues these works for the case of the formation of coalitions, which often arise during the consideration of a draft standard in the TC. In the article a mathematical model of reaching consensus in the work of technical standardization committees in the presence of coalitions is constructed. Within the framework of the model it is shown that in the presence of coalitions consensus is not achievable. However, coalitions are, as a rule, overcome during the negotiation process; otherwise the number of adopted standards would be extremely small. This paper analyzes the factors that influence the bridging of coalitions: the size of the concession and an index of the coalition effect. On the basis of statistical modeling of regular Markov chains, their influence on the time to reach consensus in the technical committee is investigated. It is proved that the time to reach consensus depends significantly on the size of the unilateral concession made by a coalition and weakly on the size of the coalitions. A regression model of the dependence of the average number of approval rounds on the size of the concession is built. It was revealed that even a small concession leads to the onset of consensus, and increasing the size of the concession leads (other factors being equal) to a sharp decline in the time to consensus. It is shown that a concession by a larger coalition to smaller ones takes, on average, more time to reach consensus. The result has practical value for all organizational structures where the emergence of coalitions makes decision-making by consensus impossible and requires the consideration of various methods for reaching a consensus decision.
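
    The role of concessions can be mimicked in a DeGroot-style opinion-averaging simulation, a simplification of the regular-Markov-chain setting; the four-member, two-coalition setup and the weights below are assumptions for the sketch.

```python
# Sketch: two coalitions exchanging opinions by weighted averaging with a
# row-stochastic matrix (a Markov chain acting on opinions). With c = 0
# the matrix is the identity and no one moves; any concession c > 0 on
# the cross-coalition links (members 1 <-> 2) lets consensus emerge.
import numpy as np

def time_to_consensus(c: float, tol: float = 1e-3, max_steps: int = 100000) -> int:
    # Members 0,1 form coalition A (opinion 1); members 2,3 coalition B.
    x = np.array([1.0, 1.0, 0.0, 0.0])
    P = np.array([[1 - c,     c,   0.0,   0.0],
                  [  0.0, 1 - c,     c,   0.0],
                  [  0.0,     c, 1 - c,   0.0],
                  [  0.0,   0.0,     c, 1 - c]])
    for t in range(1, max_steps):
        x = P @ x                       # each member averages whom they heed
        if x.max() - x.min() < tol:     # opinions have effectively converged
            return t
    return max_steps

for c in (0.01, 0.05, 0.2):
    print(f"concession {c}: {time_to_consensus(c)} rounds")
```

    Even this toy run reproduces the qualitative effect reported above: a small concession already yields consensus, and a larger one shortens the time sharply.
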

  9. Guzev M.A., Nikitina E.Yu.
    Rank analysis of the criminal codes of the Russian Federation, the Federal Republic of Germany and the People’s Republic of China
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 969-981

    When making decisions in various fields of human activity, it is often necessary to create text documents. Traditionally, the study of texts is the domain of linguistics, which in a broad sense can be understood as a part of semiotics — the science of signs and sign systems, whose objects are of different types. The method of rank distributions is widely used for the quantitative study of sign systems. A rank distribution is a set of item names sorted in descending order by frequency of occurrence. For frequency-rank distributions, researchers often use the term «power-law distributions».

    In this paper, the rank distribution method is used to analyze the criminal codes of various countries. The general idea of the approach is to consider the code as a text document in which the sign is the measure of punishment for particular crimes. The document is represented as a list of occurrences of a specific word (sign) and its derivatives (word forms). The set of all these signs forms a punishment dictionary, for which the frequency of occurrence of each punishment in the code text is calculated. This allows us to transform the constructed dictionary into a frequency dictionary of punishments and to study it further using the approach of V. P. Maslov, proposed for analyzing problems in linguistics. This approach introduces the concept of the virtual frequency of crime occurrence, which serves as a measure of the real harm to society and of the consequences of the committed crime in various spheres of human life. Along these lines, the paper proposes a parametrization of the rank distribution to analyze the punishment dictionary of the Special Part of the Criminal Code of the Russian Federation concerning punishments for economic crimes. Various versions of the code are considered, and the constructed model is shown to objectively reflect the changes for the better made to the code by legislators over time. For the criminal codes in force in the Federal Republic of Germany and the People’s Republic of China, the texts covering offenses analogous to those in the corresponding section of the Special Part of the Russian code were studied. The rank distributions obtained in the article for the corresponding frequency dictionaries of the codes agree with V. P. Maslov’s law, which essentially refines Zipf’s law. This allows us to conclude both that the texts are well organized and that the punishments selected for the crimes are adequate.
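
    Building a frequency dictionary, turning it into a rank distribution, and checking its power-law shape takes only a few lines; the token list below is a synthetic stand-in for punishment word forms extracted from a code's text.

```python
# Sketch: frequency dictionary -> rank distribution -> log-log slope.
# The tokens stand in for punishment word forms; no real code text is used.
from collections import Counter
import numpy as np

tokens = (["fine"] * 120 + ["imprisonment"] * 60 + ["correctional labor"] * 40
          + ["restriction of liberty"] * 30 + ["arrest"] * 24
          + ["compulsory labor"] * 20)

freq = Counter(tokens)                           # frequency dictionary
ranked = sorted(freq.values(), reverse=True)     # rank distribution
ranks = np.arange(1, len(ranked) + 1)

# Slope of log(frequency) vs log(rank); ~ -1 would be classical Zipf.
slope = np.polyfit(np.log(ranks), np.log(ranked), 1)[0]
for r, (word, f) in zip(ranks, freq.most_common()):
    print(r, word, f)
print("power-law exponent estimate:", round(slope, 2))
```
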

  10. Bogdanov A.V., Degtyreva Ya.A., Zakharchuk E.A., Tikhonova N.A., Foux V.R., Khramushin V.N.
    Interactive graphical toolkit global computer simulations in marine service operational forecasts
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 641-648

    The efficiency and completeness of numerical simulation in oceanography and hydrometeorology are largely determined by the algorithmic features of constructing interactive computer simulations on the scale of the oceans, with closed seas and coastal waters adaptively covered by refined mathematical models and with the possibility of software parallelization of calculations near specific protected areas of the sea coast. An important component of the research is continuous graphical visualization in the course of the calculations, including calculations performed in parallel processes with shared RAM or with checkpoints on external media. The results of the computational experiments are used to describe hydrodynamic processes near the coast, which is important for organizing sea control services and forecasting marine hazards.

    Citations: 1 (RSCI).