All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Statistical analysis of bigrams of specialized texts
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254
Spectral analysis of a stochastic matrix is used to build an indicator that determines the subject area of scientific texts without using keywords. The matrix is a matrix of conditional probabilities of bigrams, built from the statistics of alphabet characters in the text with spaces, digits and punctuation marks removed. Scientific texts are classified according to the mutual arrangement of invariant subspaces of the matrix of conditional probabilities of letter pairs. The separation indicator is the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter: the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of given radius in the complex plane. Divergence of the integral indicates that the integration contour approaches an eigenvalue of the matrix. The paper presents the typical distribution of the specialty identification indicator. For the statistical analysis, dissertations in 19 main specialties were examined, 20 texts per specialty, without taking the classification within a specialty into account. It was found that the empirical distributions of the cosine of the angle for mathematical and humanities specialties have no common domain, so they can be formally separated by this indicator without errors. Although the corpus of texts was not particularly large, for an arbitrary selection of dissertations an identification error of about 2 % appears to be a very good result compared with methods based on semantic analysis. It was also found that a text pattern can be built for each specialty in the form of a reference bigram matrix, in whose neighborhood (in the norm of summable functions) the subject of a written scientific work can be identified accurately without using keywords. The proposed method can be used as a comparative indicator of how strictly scientific a text is, or as an indicator of whether the text meets a certain scientific level.
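To illustrate the indicator described above, the sketch below builds the bigram conditional-probability matrix from a cleaned text and computes the cosine between the right and left eigenvectors of the extreme eigenvalues. This is a minimal sketch under my own assumptions (a lowercase Latin alphabet, Laplace smoothing of unseen bigrams, and one particular pairing of right/left vectors with the max/min eigenvalues); the paper itself works with Russian dissertations and does not prescribe these details.

```python
import re
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # assumption: Latin letters; the paper uses Cyrillic texts
IDX = {c: i for i, c in enumerate(ALPHABET)}

def bigram_matrix(text: str) -> np.ndarray:
    """Conditional-probability matrix P[i, j] = P(next letter = j | current letter = i)."""
    s = re.sub(f"[^{ALPHABET}]", "", text.lower())       # drop spaces, digits, punctuation
    counts = np.ones((len(ALPHABET), len(ALPHABET)))      # Laplace smoothing (assumption)
    for a, b in zip(s, s[1:]):
        counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)     # rows sum to 1: stochastic matrix

def separation_indicator(P: np.ndarray) -> float:
    """Cosine between the right eigenvector for the max eigenvalue and the left
    eigenvector for the min eigenvalue (one possible reading of the paper's indicator)."""
    vals_r, vecs_r = np.linalg.eig(P)
    vals_l, vecs_l = np.linalg.eig(P.T)                   # eigenvectors of P^T = left eigenvectors of P
    r = np.real(vecs_r[:, np.argmax(vals_r.real)])        # real parts used for simplicity
    l = np.real(vecs_l[:, np.argmin(vals_l.real)])
    return float(r @ l / (np.linalg.norm(r) * np.linalg.norm(l)))

# Usage: compare the indicator for texts from different subject areas
# print(separation_indicator(bigram_matrix(open("thesis.txt").read())))
```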
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is posed as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through a communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that communicates directly with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all the nodes are equal, there is no server node, and each agent communicates only with its immediate neighbors.
We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as a computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
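To make the smoothing idea concrete, the sketch below implements the standard randomized-smoothing two-point gradient estimator for a nonsmooth function and uses it in a simple zero-order descent loop. The test function, smoothing radius and step-size schedule are illustrative choices of mine, not the paper's setting.

```python
import numpy as np

def two_point_grad(f, x: np.ndarray, tau: float, rng: np.random.Generator) -> np.ndarray:
    """Two-point gradient estimate of the smoothed function
    f_tau(x) = E_e[f(x + tau * e)], with e uniform on the unit sphere.
    Uses only two function evaluations, i.e., a zero-order oracle."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                        # random direction on the unit sphere
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

# Illustrative usage: zero-order descent on a nonsmooth, Lipschitz-continuous function
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum()                 # nonsmooth test objective (assumption)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)
    for k in range(1, 2001):
        g = two_point_grad(f, x, tau=1e-3, rng=rng)
        x -= 0.5 / np.sqrt(k) * g                 # diminishing step size (assumption)
    print(f(x))                                   # objective value should be small
```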
Keywords: convex optimization, distributed optimization.
-
Forecasting demographic and macroeconomic indicators in a distributed global model
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 757-779
The paper presents a dynamic macro model of world dynamics. In the model the world is divided into 19 geographic regions. The internal development of the regions is described by regression equations for demographic and economic indicators (population, gross domestic product, gross capital formation). Interregional interactions are described by bilateral region-to-region trade flows, which form the trade submodel. Time, the gross product of the exporter and the gross product of the importer were used as regressors. Four types of regression were considered: time pair regression (dependence of a trade flow on time), the export function (dependence of the share of a trade flow in the exporter's gross product on the importer's gross product), the import function (dependence of the share of a trade flow in the importer's gross product on the exporter's gross product), and multiple regression (dependence of a trade flow on the gross products of the exporter and the importer). For each type, two functional forms were used, linear and log-linear, so eight variants of the trade equation were studied in total. The quality of the regression models is compared by the coefficient of determination. Calculations show that the model satisfactorily approximates the dynamics of monotonically changing indicators. The dynamics of non-monotonic trade flows is analyzed, and three types of functional dependence on time are proposed for their approximation. It is shown that the set of foreign trade series can be approximated in a space of seven principal components with a 10 % error. A forecast of regional development and global dynamics up to 2040 is constructed.
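As a sketch of one of the eight trade-equation variants mentioned above (the multiple regression in log-linear form), the code below fits the logarithm of a trade flow against the logarithms of the exporter's and importer's gross products and reports the coefficient of determination. The data are synthetic placeholders, not the paper's regional series.

```python
import numpy as np

def fit_loglinear_trade(flow, gdp_exp, gdp_imp):
    """Fit log F = b0 + b1*log(Y_exporter) + b2*log(Y_importer) by least squares
    and return the coefficients and the coefficient of determination R^2."""
    y = np.log(flow)
    X = np.column_stack([np.ones_like(y), np.log(gdp_exp), np.log(gdp_imp)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Synthetic usage: a gravity-like trade flow with multiplicative noise (assumption)
rng = np.random.default_rng(1)
gdp_e = rng.uniform(1.0, 50.0, 200)
gdp_i = rng.uniform(1.0, 50.0, 200)
flow = 0.3 * gdp_e**0.8 * gdp_i**0.6 * np.exp(rng.normal(0.0, 0.1, 200))
beta, r2 = fit_loglinear_trade(flow, gdp_e, gdp_i)
print("coefficients:", beta, "R^2:", r2)
```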
-
Exact calculation of a posteriori probability distribution with distributed computing systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 539-542
We'd like to present a specific grid infrastructure and the development and deployment of a web application. The purpose of the infrastructure and the web application is to solve particular geophysical problems that require heavy computational resources. Here we cover a technology overview and the connector framework internals. The connector framework links problem-specific routines with the middleware in such a way that the application developer does not have to be aware of any particular grid software. That is, a web application built with this framework acts as an interface between the user's web browser and the Grid's own (and often very particular) middleware.
Our distributed computing system is built around the Gridway metascheduler. The metascheduler is connected to TORQUE resource managers of virtual compute nodes that run atop a compute cluster using virtualization technology. This approach offers several notable features that are unavailable to bare-metal compute clusters.
The first application we've integrated with our framework is the determination of seismic anisotropic parameters by inversion of SKS and converted phases. We used a probabilistic approach to the inverse problem based on the a posteriori probability distribution function (APDF) formalism. To get the exact solution of the problem we have to compute the values of a multidimensional function. In our implementation we used brute-force APDF calculation on a rectangular grid across the parameter space.
The results of the computation are stored in a relational DBMS and then presented in a familiar human-readable form. The application provides several instruments for analyzing the function's shape from the computational results: the maximum-value distribution, 2D cross-sections of the APDF, 2D marginals and a few other tools. During the tests we ran the application against both synthetic and observed data.
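A minimal sketch of the brute-force grid evaluation described above: an unnormalized APDF is evaluated on a rectangular grid over a toy two-parameter space, then normalized, marginalized and maximized. The Gaussian misfit model and the grid bounds are illustrative assumptions, not the paper's geophysical forward model.

```python
import numpy as np

def posterior_on_grid(log_likelihood, axes):
    """Evaluate an unnormalized posterior on the rectangular grid given by `axes`
    (a list of 1D arrays, one per parameter) and normalize it to sum to 1."""
    grids = np.meshgrid(*axes, indexing="ij")
    logp = log_likelihood(*grids)
    p = np.exp(logp - logp.max())               # subtract max for numerical stability
    return p / p.sum()

# Toy two-parameter example: Gaussian misfit around "true" values (assumption)
def log_likelihood(a, b):
    return -0.5 * (((a - 1.2) / 0.3) ** 2 + ((b + 0.5) / 0.7) ** 2)

axes = [np.linspace(-2, 4, 301), np.linspace(-3, 3, 301)]
p = posterior_on_grid(log_likelihood, axes)

marginal_a = p.sum(axis=1)                      # marginal of the first parameter (sum over the second)
map_idx = np.unravel_index(p.argmax(), p.shape)
print("MAP estimate:", axes[0][map_idx[0]], axes[1][map_idx[1]])
print("marginal peak for the first parameter:", axes[0][marginal_a.argmax()])
```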
-
Development of distributed computing applications and services with Everest cloud platform
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 593-599
The use of a service-oriented approach in scientific domains can increase research productivity by enabling the sharing, publication and reuse of computing applications, as well as the automation of scientific workflows. Everest is a cloud platform that enables researchers with minimal skills to publish and use scientific applications as services. In contrast to existing solutions, Everest executes applications on external resources attached by users, implements flexible binding of resources to applications and supports programmatic access to the platform's functionality. The paper presents the current state of the platform, recent developments and remaining challenges.
-
Using CERN cloud technologies for the further ATLAS TDAQ software development and for its application for the remote sensing data processing in the space monitoring tasks
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 683-689
The CERN cloud technologies (the CernVM project) offer new possibilities for software developers. The participation of the JINR ATLAS TDAQ working group in the software development for the distributed data acquisition and processing system (TDAQ) of the ATLAS experiment (CERN) involves working with a dynamically developing system and infrastructure. The CERN cloud technologies, especially CernVM, provide the most effective access both to the TDAQ software and to the third-party software used in ATLAS. Access to the Scientific Linux environment is provided by CernVM virtual machines, and access to the software repository by CernVM-FS. This work studied the functioning of the TDAQ middleware in the CernVM environment. The use of CernVM is illustrated with three examples: the development of the Event Dump and Webemon packages, and the adaptation of the automatic data quality checking system of ATLAS TDAQ (the Data Quality Monitoring Framework) for radar data assessment.
-
The report presents an analysis of Big Data storage solutions from several perspectives. The purpose of this paper is to introduce Big Data storage technology and the prospects of storage technologies, using the DIRAC software as an example. DIRAC is a software framework for distributed computing.
The report considers popular storage technologies and lists their limitations. The main problems are storing very large data volumes, insufficient processing quality, scalability, the lack of rapid availability and the lack of intelligent data retrieval.
Experimental computing tasks impose a wide range of requirements on CPU usage, data access and memory consumption, and exhibit an unstable profile of resource use over time. The DIRAC Data Management System (DMS), together with the DIRAC Storage Management System (SMS), provides the necessary functionality to execute and control all activities related to data.
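For illustration only, the sketch below shows how uploading and replicating a file typically looks with DIRAC's Python client; the DataManager calls, the logical file name and the storage element names are assumptions based on recent DIRAC releases and may differ from the version discussed in the report.

```python
# Hedged sketch: names and signatures are assumptions based on recent DIRAC versions.
from DIRAC.Core.Base.Script import Script
Script.parseCommandLine()                      # initialize the DIRAC client environment

from DIRAC.DataManagementSystem.Client.DataManager import DataManager

dm = DataManager()

# Upload a local file, register it under a logical file name (LFN)
# and replicate it to a second storage element (LFN and SE names are placeholders).
lfn = "/myvo/user/data/run001/events.root"
res = dm.putAndRegister(lfn, "events.root", "DISK-SE-1")
if res["OK"]:
    res = dm.replicateAndRegister(lfn, "DISK-SE-2")
print(res)
```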
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index