Search results for 'network model':
Articles found: 99
  1. Shinyaeva T.S.
    Activity dynamics in virtual networks: an epidemic model vs an excitable medium model
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1485-1499

    Epidemic models are widely used to mimic social activity, such as the spreading of rumors or panic. At the same time, models of excitable media are traditionally used to simulate the propagation of activity. The spreading of activity in a virtual community was simulated within two models: the SIRS epidemic model and the Wiener – Rosenblut model of excitable media. We used network versions of these models. The network was assumed to be heterogeneous, namely, each element of the network has an individual set of characteristics, which corresponds to different psychological types of community members. The structure of the virtual community is represented by an appropriate scale-free network. Modeling was carried out on scale-free networks with various values of the average degree of vertices. Additionally, a special case was considered, namely, a complete graph corresponding to a close professional group, where each member of the group interacts with every other member. Participants in a virtual community can be in one of three states: 1) potential readiness to accept certain information; 2) active interest in this information; 3) complete indifference to this information. These states correspond to those usually used in epidemic models: 1) susceptible to infection, 2) infected, 3) refractory (immune or dead due to disease). A comparison of the two models showed their similarity both at the level of main assumptions and at the level of possible modes. The distribution of activity over the network is similar to the spread of infectious diseases. It is shown that activity in virtual networks may experience fluctuations or decay.
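
    As a rough illustration of the dynamics described above, the following minimal sketch runs synchronous SIRS updates on a scale-free graph. The per-node rates beta (activation), gamma (loss of interest) and rho (return to susceptibility) are hypothetical stand-ins for the individual node characteristics the abstract mentions, not the paper's actual parameters.

    import random
    import networkx as nx

    def sirs_step(G, state, beta, gamma, rho):
        """One synchronous SIRS update; states: 0 = susceptible, 1 = active, 2 = refractory."""
        new_state = dict(state)
        for v in G.nodes:
            if state[v] == 0:
                # susceptible: may be activated by an active neighbor
                if any(state[u] == 1 for u in G.neighbors(v)) and random.random() < beta[v]:
                    new_state[v] = 1
            elif state[v] == 1:
                # active: may lose interest and become refractory
                if random.random() < gamma[v]:
                    new_state[v] = 2
            else:
                # refractory: may regain susceptibility (the second "S" in SIRS)
                if random.random() < rho[v]:
                    new_state[v] = 0
        return new_state

    # Scale-free substrate with heterogeneous nodes, as in the abstract.
    G = nx.barabasi_albert_graph(n=1000, m=3)
    beta = {v: random.uniform(0.1, 0.5) for v in G}
    gamma = {v: random.uniform(0.1, 0.3) for v in G}
    rho = {v: random.uniform(0.01, 0.1) for v in G}
    state = {v: 0 for v in G}
    state[0] = 1  # seed a single active participant
    for _ in range(200):
        state = sirs_step(G, state, beta, gamma, rho)

    Depending on the chosen rates and the average vertex degree, the count of active nodes in such a run either oscillates or decays, which corresponds to the modes reported in the abstract.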

  2. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is absent, and each agent communicates only with its immediate neighbors.

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy. This policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
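
    The two-point gradient view mentioned above can be made concrete in a few lines. The sketch below, with a placeholder nonsmooth function f and an illustrative smoothing radius tau, computes the randomized two-point approximation of the gradient of the smoothed surrogate f_tau(x) = E[f(x + tau*e)]; none of these names come from the paper.

    import numpy as np

    def two_point_grad(f, x, tau, rng):
        """Randomized two-point gradient estimate of the smoothed function at x."""
        e = rng.standard_normal(x.shape)
        e /= np.linalg.norm(e)  # direction drawn uniformly from the unit sphere
        d = x.size
        # two zero-order queries yield an unbiased estimate of grad f_tau(x)
        return d * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

    rng = np.random.default_rng(0)
    f = lambda x: np.abs(x).sum()  # nonsmooth but Lipschitz-continuous test objective
    x = rng.standard_normal(10)
    g = two_point_grad(f, x, tau=1e-3, rng=rng)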

  3. Fokin G.A., Volgushev D.B.
    Models for spatial selection during location-aware beamforming in ultra-dense millimeter wave radio access networks
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 195-216

    The work establishes how the potential for spatial selection of useful and interfering signals, evaluated by the signal-to-interference ratio criterion, depends on the positioning error of user equipment when a base station equipped with an antenna array performs beamforming based on user location. Configurable simulation parameters include a planar antenna array with a varying number of antenna elements, the movement trajectory, and the accuracy of user equipment location estimation expressed as the root mean square error of coordinate estimates. The model implements three algorithms for controlling the shape of the antenna radiation pattern: 1) controlling the beam direction for one maximum and one zero; 2) controlling the shape and width of the main beam; 3) adaptive beamforming. The simulation results showed that the first algorithm is most effective when the number of antenna array elements is no more than 5 and the positioning error is no more than 7 m, while the second algorithm is appropriate when the number of antenna array elements is more than 15 and the positioning error is more than 5 m. Adaptive beamforming is implemented using a training signal and provides optimal spatial selection of useful and interfering signals without device location data, but is characterized by high complexity of hardware implementation. Scripts of the developed models are available for verification. The results obtained can be used in the development of scientifically grounded recommendations for beam control in ultra-dense millimeter-wave radio access networks of the fifth and subsequent generations.
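
    For a sense of how location-based beam steering produces spatial selection, the sketch below evaluates the response of a uniform linear array whose weights are matched to an estimated user direction. The array size, element spacing and angles are illustrative assumptions; the paper itself uses a planar array with its own configuration.

    import numpy as np

    def steering_vector(n_elems, theta, d=0.5):
        """Steering vector of an n-element uniform linear array, spacing d in wavelengths."""
        k = np.arange(n_elems)
        return np.exp(-2j * np.pi * d * k * np.sin(theta))

    def array_gain(weights, theta, d=0.5):
        """Magnitude of the array response w^H a(theta) in direction theta (radians)."""
        return np.abs(np.vdot(weights, steering_vector(len(weights), theta, d)))

    n = 16
    theta_user = np.deg2rad(20.0)           # user direction estimated from positioning
    w = steering_vector(n, theta_user) / n  # conjugate (matched) beamforming weights
    # signal-to-interference flavor: gain toward the user vs. an interferer direction
    sir_db = 20 * np.log10(array_gain(w, theta_user) / array_gain(w, np.deg2rad(-35.0)))

    A positioning error shifts theta_user away from the true direction and degrades the gain toward the user, which is precisely the dependence the paper quantifies.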

  4. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

    The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed change point detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.

    To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. To improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to construct the inputs for the neural network. These features, in turn, were derived from the mathematical expectations and standard deviations of the time series over specific intervals. The potential for combining these features to achieve more stable results was also investigated.
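
    The interval statistics described above translate directly into code. This sketch builds rolling mean and standard deviation features from a return series over several window lengths; the window sizes and the synthetic variance changepoint are illustrative assumptions, not the authors' configuration.

    import numpy as np
    import pandas as pd

    def make_features(prices, windows=(5, 20, 60)):
        """Rolling mean/std features of returns over the given window lengths."""
        returns = prices.pct_change()
        feats = {}
        for w in windows:
            feats["mean_%d" % w] = returns.rolling(w).mean()
            feats["std_%d" % w] = returns.rolling(w).std()
        return pd.DataFrame(feats).dropna()

    # synthetic price series with a volatility changepoint at t = 500
    rng = np.random.default_rng(1)
    r = np.concatenate([rng.normal(0, 0.01, 500), rng.normal(0, 0.03, 500)])
    prices = pd.Series(100 * np.cumprod(1 + r))
    X = make_features(prices)  # rows of X would feed the changepoint classifier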

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

    Alongside the description of the model’s operating principles, possibilities for its further improvement are considered, including modernization of the proposed model’s structure, optimization of training data generation, and refinement of feature formation. Additionally, the authors set themselves the task of advancing existing concepts for real-time changepoint detection.

  5. Strygin N.A., Kudasov N.D.
    Fast and accurate x86 disassembly using a graph convolutional network model
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1779-1792

    Disassembly of stripped x86 binaries is an important yet non-trivial task. Disassembly is difficult to perform correctly without debug information, especially on the x86 architecture, which has variable-sized instructions interleaved with data. Moreover, the presence of indirect jumps in binary code adds another layer of complexity. Indirect jumps impede the ability of recursive traversal, a common disassembly technique, to identify all instructions within the code. Consequently, disassembling such code becomes even more intricate and demanding, further highlighting the challenges faced in this field. Many tools, including commercial ones such as IDA Pro, struggle with accurate x86 disassembly. As such, there has been some interest in developing a better solution using machine learning (ML) techniques. ML can potentially capture underlying compiler-independent patterns inherent in compiler-generated assembly. Researchers in this area have shown that ML approaches can outperform the classical tools. They can also be less time-consuming to develop than manual heuristics, shifting most of the burden onto collecting a big representative dataset of executables with debug information. Following this line of work, we propose an improvement of an existing RGCN-based architecture, which builds a control flow graph on superset disassembly. The enhancement comes from augmenting the graph with data flow information. In particular, in the embedding we add Jump Control Flow and Register Dependency edges, inspired by Probabilistic Disassembly. We also create an open-source x86 instruction identification dataset, based on a combination of the ByteWeight dataset and a selection of open-source Debian packages. Compared to IDA Pro, a state-of-the-art commercial tool, our approach yields better accuracy while maintaining great performance on our benchmarks. It also fares well against existing machine learning approaches such as DeepDi.
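
    To make the graph augmentation concrete, the sketch below assembles typed edge lists (fall-through, jump control flow, register dependency) over superset-disassembly candidates. The Candidate fields are simplifying assumptions; a practical pipeline would fill them in using a real x86 decoder rather than the stub structure shown here.

    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        """An instruction decoded at one byte offset of the superset disassembly."""
        length: int
        jump_target: int | None = None      # direct jump/call target, if any
        regs_read: set = field(default_factory=set)
        regs_written: set = field(default_factory=set)

    def build_edges(cands):
        """Typed edges between candidate offsets, keyed by relation for an RGCN."""
        fallthrough, jump_cf, reg_dep = [], [], []
        for off, c in cands.items():
            nxt = off + c.length
            if nxt in cands:
                fallthrough.append((off, nxt))
                # register dependency: the successor reads a register we wrote
                if c.regs_written & cands[nxt].regs_read:
                    reg_dep.append((off, nxt))
            if c.jump_target is not None and c.jump_target in cands:
                jump_cf.append((off, c.jump_target))
        return {"fallthrough": fallthrough, "jump": jump_cf, "regdep": reg_dep}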

  6. Gadzhiev R.I.
    Estimation of probabilistic model of employee labor process
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 969-975

    The article presents a mathematical model for estimating the employee labor process, built on the basis of a Bayesian network. Particular attention is given to the estimation of the qualitative characteristics of the labor product. The described model is intended for use in companies with an employee workflow management system.

  7. Marosi A.C., Lovas R.
    Defining volunteer computing: a formal approach
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 565-571

    Volunteer computing resembles private desktop grids, whereas desktop grids are not fully equivalent to volunteer computing. There have been several attempts to distinguish and categorize them using informal and formal methods. However, most formal approaches model a particular middleware and do not focus on the general notion of volunteer or desktop grid computing. This work makes an attempt to formalize their characteristics and relationship. To this end, formal modeling is applied that tries to grasp the semantics of their functionalities, as opposed to comparisons based on properties, features, etc. We apply this modeling method to formalize the Berkeley Open Infrastructure for Network Computing (BOINC) [Anderson D. P., 2004] volunteer computing system.

  8. Lotarev D.T.
    Allocation of Steiner points in the Euclidean Steiner tree problem by means of the MatLab package
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 707-713

    The problem of allocating Steiner points in a Euclidean Steiner tree is considered. The cost of the network is the sum of construction costs and the cost of information transportation. The Euclidean Steiner tree problem in the form of topological network design is a good model of this problem.

    The MatLab package provides a way to solve the second part of this problem: allocating Steiner points under the condition that the adjacency matrix is given. A solution method has been worked out: the Steiner tree is formed by solving a sequence of "three-point" Steiner problems.
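
    The "three-point" subproblem mentioned above is the classical Fermat point: the point minimizing the total Euclidean distance to three terminals. The abstract solves it in MatLab; the sketch below uses scipy.optimize as an equivalent stand-in, with illustrative coordinates.

    import numpy as np
    from scipy.optimize import minimize

    def steiner_point(a, b, c):
        """Fermat point of the triangle (a, b, c) by direct minimization."""
        pts = np.array([a, b, c], dtype=float)
        total = lambda p: np.linalg.norm(pts - p, axis=1).sum()
        # start from the centroid; Nelder-Mead needs no gradient information
        return minimize(total, pts.mean(axis=0), method="Nelder-Mead").x

    s = steiner_point((0, 0), (4, 0), (2, 3))

    If one angle of the triangle is 120 degrees or more, the minimum is attained at that vertex, and the numerical search converges there as well.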

  9. Degtyarev A.B., Myo Min S., Wunna K.
    Cloud computing for virtual testbed
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 753-758

    Nowadays cloud computing is an important topic in the field of information technology and computer systems. Several companies and educational institutions have deployed cloud infrastructures to overcome problems such as easy data access, software updates with minimal cost, large or unlimited storage, cost efficiency, backup storage and disaster recovery, and other benefits compared with traditional network infrastructures. The paper presents a study of cloud computing technology for marine environmental data and its processing. Cloud computing of marine environment information is proposed for the integration and sharing of marine information resources. It is highly desirable to perform empirical studies requiring numerous interactions with web servers and transfers of very large archival data files without affecting the operational information system infrastructure. In this paper, we consider cloud computing for a virtual testbed in order to minimize the cost related to real-time infrastructure.
