Search results for 'memory':
Articles found: 55
  1. Adamovskiy Y.R., Bohush R.P., Naumovich N.M.
    Prediction of frequency resource occupancy in a cognitive radio system using the Kolmogorov – Arnold neural network
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 109-123

    For cognitive radio systems, it is important to use efficient algorithms for finding free channels that can be offered to secondary users. This paper is therefore devoted to improving the accuracy of predicting the frequency resource occupancy of a cellular communication system using spatiotemporal radio environment maps. The radio environment map is formed for the fourth-generation cellular communication system Long-Term Evolution. With this in mind, a model structure has been developed that includes data generation and allows training and testing an artificial neural network to predict the occupancy of frequency resources represented as the contents of radio environment map cells. A method for assessing prediction accuracy is described. The simulation model of the cellular communication system is implemented in MATLAB; the frequency resource occupancy prediction model is implemented in Python. The complete file structure of the model is presented. The experiments were performed using artificial neural networks based on the Long Short-Term Memory and Kolmogorov – Arnold architectures, including a modification of the latter. It was found that, with an equal number of parameters, the Kolmogorov – Arnold neural network learns faster on this task. The results indicate an increase in the accuracy of predicting the frequency resource occupancy of the cellular communication system when the Kolmogorov – Arnold neural network is used.
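
    As an illustration of the Long Short-Term Memory baseline mentioned above, here is a minimal sketch of a per-cell occupancy forecaster. The class name, tensor shapes, and hyperparameters are assumptions for illustration, not the paper's code; in the comparison described, a Kolmogorov – Arnold network with a matched parameter count would take the LSTM's place.

    # Minimal sketch (assumed names and sizes, not the authors' model):
    # predict the next occupancy state of radio environment map cells
    # from a window of past states.
    import torch
    import torch.nn as nn

    class OccupancyLSTM(nn.Module):
        def __init__(self, n_cells: int, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_cells, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_cells)

        def forward(self, x):
            # x: (batch, time, n_cells) - past occupancy of map cells
            out, _ = self.lstm(x)
            # occupancy probability of every cell at the next step
            return torch.sigmoid(self.head(out[:, -1, :]))

    model = OccupancyLSTM(n_cells=100)
    past = torch.rand(8, 24, 100)   # 8 samples, 24 past time steps
    next_step = model(past)         # probabilities in (0, 1)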

  2. Prokoptsev N.G., Alekseenko A.E., Kholodov Y.A.
    Traffic flow speed prediction on transportation graph with convolutional neural networks
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 359-367

    The short-term prediction of road traffic conditions is one of the main tasks of transportation modelling. Its main purposes are traffic control, accident reporting, avoiding traffic jams through knowledge of the traffic flow, and subsequent transportation planning. A number of solutions exist, both model-driven and data-driven, that have proven successful in capturing the dynamics of traffic flow. Nevertheless, most space-time models suffer from high mathematical complexity and low efficiency. Artificial neural networks, one of the prominent data-driven approaches, show promising performance in modelling the complexity of traffic flow. We present a neural network architecture for traffic flow prediction on a real-world road network graph. The model is based on the combination of a recurrent neural network and a graph convolutional neural network, where the recurrent neural network models temporal dependencies and the convolutional neural network extracts spatial features from the traffic. To make predictions several steps ahead, an encoder-decoder architecture is used, which reduces the propagation of noise caused by inexact predictions. To model the complexity of traffic flow, we employ a multilayered architecture. Deeper neural networks are more difficult to train; to speed up training, we use skip-connections between layers, so that each layer learns only the residual function with respect to the previous layer's outputs. The resulting neural network was trained on raw data from traffic flow detectors of the US highway system with a resolution of 5 minutes. Three metrics were used to estimate the quality of the prediction: mean absolute error, mean relative error, and mean squared error. For all metrics, the proposed model achieved a lower prediction error than previously published models such as Vector Auto Regression, LSTM, and Graph Convolution GRU.

    Views (last year): 36.
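
    A minimal sketch of the spatial building block described in the abstract above, assuming one graph convolution H' = ReLU(A_hat H W) feeding a GRU over time; the normalized adjacency, layer sizes, and variable names are illustrative assumptions rather than the published architecture.

    import torch
    import torch.nn as nn

    class GraphConv(nn.Module):
        """One graph convolution step: H' = ReLU(A_hat @ H @ W)."""
        def __init__(self, in_dim, out_dim, a_hat):
            super().__init__()
            self.a_hat = a_hat                      # normalized adjacency (n, n)
            self.w = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, h):                       # h: (n_nodes, in_dim)
            return torch.relu(self.a_hat @ self.w(h))

    n, t = 50, 12                                   # graph nodes, time steps
    a_hat = torch.eye(n)                            # stand-in for D^-1/2 (A+I) D^-1/2
    gc = GraphConv(in_dim=1, out_dim=16, a_hat=a_hat)
    gru = nn.GRU(input_size=16, hidden_size=16)     # temporal part

    speeds = torch.rand(t, n, 1)                    # detector speeds over time
    spatial = torch.stack([gc(x) for x in speeds])  # (t, n, 16) spatial features
    _, h_last = gru(spatial)                        # per-node temporal encoding
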
  3. Nikolsky I.M.
    Classifier size optimisation in segmentation of three-dimensional point images of wood vegetation
    Computer Research and Modeling, 2025, v. 17, no. 4, pp. 665-675

    The advent of laser scanning technologies has revolutionized forestry. Their use made it possible to switch from studying woodlands using manual measurements to computer analysis of stereo point images called point clouds.

    Automatic calculation of some tree parameters (such as trunk diameter) using a point cloud requires the removal of foliage points. To perform this operation, a preliminary segmentation of the stereo image into the “foliage” and “trunk” classes is required. The solution to this problem often involves the use of machine learning methods.

    One of the most popular classifiers used for segmentation of stereo images of trees is the random forest. This classifier is quite demanding in terms of memory. At the same time, the size of a machine learning model can be critical if it needs to be sent over a network, which is required, for example, in distributed learning. In this paper, the goal is to find a classifier that is less demanding in terms of memory but has comparable segmentation accuracy. The search is performed among classifiers such as logistic regression, the naive Bayes classifier, and the decision tree. In addition, a method for refining the segmentation produced by a decision tree using logistic regression is investigated.

    The experiments were conducted on data from the collection of the University of Heidelberg. The collection contains hand-marked stereo images of trees of various species, both coniferous and deciduous, typical of the forests of Central Europe.

    It has been shown that classification using a decision tree refined with logistic regression produces a result only slightly inferior in accuracy to that of a random forest, while spending less time and RAM. The difference in balanced accuracy is no more than one percent on all the clouds considered, while the total size and inference time of the decision tree and logistic regression classifiers are an order of magnitude smaller than those of the random forest classifier.
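
    A hedged sketch of this kind of comparison with scikit-learn on synthetic data (the paper uses hand-labelled tree point clouds, and the abstract does not specify the refinement scheme, so the probability-stacking step below is one possible reading):

    import pickle
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.tree import DecisionTreeClassifier

    # synthetic stand-in for "foliage" vs "trunk" point features
    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
    X_tr, y_tr, X_te, y_te = X[:4000], y[:4000], X[4000:], y[4000:]

    forest = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    tree = DecisionTreeClassifier(max_depth=10).fit(X_tr, y_tr)

    # refinement: logistic regression over the raw features plus the
    # tree's class probabilities (an assumed reading of the scheme)
    stack = lambda A: np.hstack([A, tree.predict_proba(A)])
    logreg = LogisticRegression(max_iter=1000).fit(stack(X_tr), y_tr)

    for name, pred, model in [
        ("random forest", forest.predict(X_te), forest),
        ("tree + logreg", logreg.predict(stack(X_te)), (tree, logreg)),
    ]:
        size = len(pickle.dumps(model))          # serialized model size
        print(name, round(balanced_accuracy_score(y_te, pred), 3), size, "bytes")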

  4. Malkov S.Yu.
    Regimes with exacerbation in the history of mankind or memories of the future
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 931-947

    The article describes regimes with exacerbation (blow-up regimes) in social and biological history and analyzes the possible causes of the sharp acceleration of biological and social processes in certain historical periods. Mathematical modeling shows that hyperbolic trends in social and biological evolution may be the result of transient processes during periods when ecological niches expand. Biological speciation accelerates because earlier forms of life change their habitat, making it more diverse and saturating it with organic matter, thus creating favourable conditions for the emergence of new species. In social history, the expansion of ecological niches is associated with technological revolutions, the most important of which were the Neolithic revolution, the transition from an appropriating economy to a producing economy (10 thousand years ago); the “urban revolution”, the shift from the Neolithic to the Bronze Age (5 thousand years ago); the “axial age”, the transition to iron tools (2.5 thousand years ago); and the industrial revolution, the transition from manual labor to machine production (200 years ago). All of these technological revolutions were accompanied by dramatic population growth and by changes in the social and political spheres. Thus, the hyperbolic character of demographic, economic, and other indicators of world dynamics observed over the last century is a consequence of a transient process that began with the industrial revolution and is preparing society's transition to a new stage of its development. The singularity point of the hyperbolic trend marks the end of the initial phase of the process and the transition to its final stage. A mathematical model describing demographic and economic changes in an era of change is proposed. It is shown that a direct analogue of the contemporary situation in this sense is the “axial age” (from the 8th century BC to the beginning of the Common Era). The existence of this analogy allows one to look into the future by studying the past.
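
    For reference, the hyperbolic trend and its singularity point admit a standard textbook form (a generic illustration, not the specific model proposed in the article):

    % quadratic self-acceleration produces hyperbolic (blow-up) growth
    \dot{N} = a N^{2}
    \quad\Longrightarrow\quad
    N(t) = \frac{1}{a\,(t_{0} - t)},

    so the solution diverges at the finite time t = t_0, the singularity point that marks the end of the transient regime.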

  5. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division-, and CRC-based functions.
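
    A minimal sketch of XOR-based set indexing of the kind evaluated above: the upper address bits are folded into the index bits so that strided access patterns spread more uniformly across cache sets. The set count, line size, and folding width are illustrative assumptions.

    N_SETS = 1 << 11                 # assumed 2048 sets per cache bank
    SET_BITS = 11
    LINE_BITS = 6                    # assumed 64-byte cache lines

    def xor_set_index(addr: int) -> int:
        bits = addr >> LINE_BITS     # drop the in-line offset
        idx = 0
        while bits:                  # XOR-fold successive bit groups
            idx ^= bits & (N_SETS - 1)
            bits >>= SET_BITS
        return idx

    assert 0 <= xor_set_index(0x12345678) < N_SETS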

    The second optimisation is aimed at reducing replication across cache levels by automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed average performance gains of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly a 1% average performance increase.

    All three optimisations can be combined; together they demonstrated performance gains of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  6. Almasri A., Tsybulin V.G.
    Multistability for a mathematical model of a tritrophic system in a heterogeneous habitat
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 923-939

    We consider a spatiotemporal model of a tritrophic system describing the interaction between prey, predator, and superpredator in an environment with nonuniform resource distribution. The model incorporates superpredator omnivory (Intraguild Predation, IGP), diffusion, and directed migration (taxis), the latter modeled using a logarithmic function of resource availability and prey density. The primary focus is on analyzing the multistability of the system and the role of cosymmetry in the formation of continuous families of steady-state solutions. Using a numerical-analytical approach, we study both spatially homogeneous and inhomogeneous steady-state solutions. It is established that under additional relations between the parameters governing local predator interactions and diffusion coefficients, the system exhibits cosymmetry, leading to the emergence of a family of stable steady-state solutions proportional to the resource function. We demonstrate that the cosymmetry is independent of the resource function in the case of a heterogeneous environment. The stability of stationary distributions is investigated using spectral methods. Violation of the cosymmetry conditions results in the breakdown of the solution family and the emergence of isolated equilibria, as well as prolonged transient dynamics reflecting the system’s “memory” of the vanished states. Depending on initial conditions and parameters, the system exhibits transitions to single-predator regimes (survival of either the predator or superpredator) or predator coexistence. Numerical experiments based on the method of lines, which involves finite difference discretization in space and Runge – Kutta integration in time, confirm the system’s multistability and illustrate the disappearance of solution families when cosymmetry is broken.
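
    A schematic sketch of the method of lines mentioned above: finite differences in space, Runge – Kutta in time. The scalar logistic kinetics below is a placeholder, not the paper's tritrophic IGP system, and the grid, resource profile, and boundary handling are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    n, diff = 100, 0.01
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    resource = 1.0 + 0.5 * np.sin(2 * np.pi * x)    # heterogeneous habitat

    def rhs(t, u):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]           # crude no-flux boundaries
        return diff * lap + u * (resource - u)      # placeholder local kinetics

    sol = solve_ivp(rhs, (0.0, 50.0), 0.1 * resource, method="RK45")
    print(sol.y[:, -1].round(3))                    # near-steady spatial profile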

  7. Kiselev M.V.
    Exploration of 2-neuron memory units in spiking neural networks
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 401-416

    Working memory mechanisms are studied in spiking neural networks consisting of leaky integrate-and-fire neurons with adaptive thresholds and synaptic plasticity. Networks of moderate size, comprising thousands of neurons, were explored. Working memory is the ability of a network to keep in its state information about recent stimuli presented to it, such that this information is sufficient to determine which stimulus has been presented. In this study, the network state is defined by the current characteristics of network activity only, without the internal state of its neurons. In order to discover the neuronal structures serving as a possible substrate of the memory mechanism, the network parameters and structure were optimized using a genetic algorithm. Two kinds of neuronal structures with the desired properties were found: neuron pairs mutually connected by strong synaptic links, and long tree-like neuronal ensembles. It was shown that only the neuron pairs are suitable for an efficient and reliable implementation of working memory. The properties of such memory units and of structures formed by them are explored in the present study. It is shown that the characteristics of the studied two-neuron memory units can easily be set by an appropriate choice of the parameters of their neurons and synaptic connections. In addition, this work demonstrates that ensembles of these structures can give the network the capability of unsupervised learning to recognize patterns in the input signal.
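
    A toy sketch of such a two-neuron unit, with Euler integration, a crude threshold-adaptation rule, and arbitrary constants, none of which are taken from the paper: a brief stimulus to one neuron sets the mutually excitatory pair reverberating, which is the retention effect described above.

    import numpy as np

    dt, steps = 1.0, 300                   # ms, simulation length
    v = np.zeros(2)                        # membrane potentials
    th = np.full(2, 1.0)                   # adaptive thresholds
    tau_v, tau_th = 200.0, 30.0            # leak and adaptation time constants
    w, th_inc = 1.6, 0.1                   # mutual weight, adaptation step
    spikes = np.zeros(2)
    spike_times = []

    for t in range(steps):
        stim = np.array([2.0, 0.0]) if t < 10 else np.zeros(2)  # brief input
        syn = w * spikes[::-1]             # each neuron excites the other
        v += dt / tau_v * (-v) + stim + syn
        th += dt / tau_th * (1.0 - th)     # threshold relaxes to baseline
        spikes = (v >= th).astype(float)
        th += th_inc * spikes              # adaptation on each spike
        v[spikes > 0] = 0.0                # reset fired neurons
        if spikes.any():
            spike_times.append(t)

    # with these toy constants the pair keeps reactivating itself
    # well after the stimulus ends at t = 10
    print(spike_times)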

  8. Bogdanov A.V., P. Sone K. Ko, Zaya K.
    Performance of the OpenMP and MPI implementations on an UltraSPARC system
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 485-491

    This paper targets programmers and developers interested in utilizing parallel programming techniques to enhance application performance. The Oracle Solaris Studio software provides state-of-the-art optimizing and parallelizing compilers for C, C++ and Fortran, an advanced debugger, and optimized mathematical and performance libraries. Also included are an extremely powerful performance analysis tool for profiling serial and parallel applications, a thread analysis tool to detect data races and deadlocks in shared-memory parallel programs, and an Integrated Development Environment (IDE). The Oracle Message Passing Toolkit software provides the high-performance MPI libraries and the associated run-time environment needed for message-passing applications that can run on a single system or across multiple compute systems connected with high-performance networking, including Gigabit Ethernet, 10 Gigabit Ethernet, InfiniBand and Myrinet. Examples of OpenMP and MPI are provided throughout the paper, including their usage via the Oracle Solaris Studio and Oracle Message Passing Toolkit products for the development and deployment of both serial and parallel applications on SPARC and x86/x64 based systems. Throughout the paper it is demonstrated how to develop and deploy an application parallelized with OpenMP and/or MPI.

    Views (last year): 2.
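
    A minimal message-passing example in the spirit of the paper's MPI demonstrations, written here with mpi4py rather than the C/Fortran bindings the paper discusses; the script name in the run command is a placeholder.

    # run with: mpirun -np 4 python demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # each rank computes a partial sum; the root gathers the total
    partial = sum(range(rank * 1000, (rank + 1) * 1000))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks, total = {total}")
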
  9. Bogdanov A.V., Zaya K., P. Sone K. Ko
    Improvement of computational abilities in computing environments with virtualization technologies
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 499-504

    In this paper, we illustrate ways to improve the capabilities of computing environments by combining virtualization, single system image (SSI), and hypervisor technologies. Recently, cloud computing has become popular as a new service concept that provides various services to users, such as multimedia sharing, online office software, games, and online storage. Cloud computing brings together multiple computers and servers in a single environment designed to address certain types of tasks, such as scientific problems or complex calculations. By using virtualization technologies, a cloud computing environment is able to virtualize and share resources among different applications with the objective of better server utilization, better load balancing, and greater effectiveness.

    Views (last year): 3.
  10. Golubev V.I., Shevchenko A.V., Petrov I.B.
    Raising convergence order of grid-characteristic schemes for 2D linear elasticity problems using operator splitting
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 899-910

    The grid-characteristic method is successfully used for solving hyperbolic systems of partial differential equations (for example, transport, acoustic, and elastic equations). It allows one to correctly construct algorithms at contact boundaries and at the boundaries of the integration domain, to take into account the physics of the problem to a certain extent (propagation of discontinuities along characteristic curves), and it has the monotonicity property, which is important for the problems considered. For two-dimensional and three-dimensional problems the method uses a coordinate splitting technique, which enables the original equations to be solved as a sequence of one-dimensional problems. It is common to use one-dimensional schemes of up to the 3rd order with simple splitting techniques, which do not allow the convergence order to exceed two (with respect to time). Significant advances have been made in operator splitting theory, and the existence of higher-order splitting schemes has been proved. Their peculiarity is the need to perform a step backward in time, which gives rise to difficulties, for example, for parabolic problems.

    In this work, coordinate splittings of the 3rd and 4th order were used for the two-dimensional hyperbolic problem of linear elasticity, which made it possible to increase the final convergence order of the computational algorithm. The paper empirically estimates the convergence in the L1 and L∞ norms using analytical solutions of the system with a sufficient degree of smoothness. To obtain objective results, we considered longitudinal and transverse plane waves propagating both along the diagonal of the computational cell and not along it. Numerical experiments demonstrated the improved accuracy and convergence order of the constructed schemes. These improvements come at the cost of a three- or fourfold increase in computational time (for the 3rd and 4th order, respectively) and no additional memory requirements. The proposed improvement of the computational algorithm preserves the simplicity of its parallel implementation based on the spatial decomposition of the computational grid.
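
    As context for the backward-step remark above: a classical way to raise a symmetric second-order splitting S(τ) (for example, Strang splitting) to fourth order is the triple-jump composition, shown here as a generic illustration, since the abstract does not state which composition the authors use:

    % Yoshida's triple-jump composition of a symmetric 2nd-order splitting
    S_{4}(\tau) = S(\gamma_{1}\tau)\, S(\gamma_{2}\tau)\, S(\gamma_{1}\tau),
    \qquad
    \gamma_{1} = \frac{1}{2 - 2^{1/3}}, \qquad
    \gamma_{2} = -\frac{2^{1/3}}{2 - 2^{1/3}}

    The negative coefficient γ₂ is precisely the step in the opposite direction in time mentioned above.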
