Search results for 'memory':
Articles found: 48
  1. Grachev V.A., Nayshtut Yu.S.
    Ultimate load theorems for rigid plastic solids with internal degrees of freedom and their application in continual lattice shells
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 423-432

    This paper studies solids with internal degrees of freedom using Cartan's method of moving frames. Strain compatibility conditions are derived in the form of structure equations for manifolds. Constitutive relations are reviewed, and ultimate load theorems are proved for rigid plastic solids with internal degrees of freedom. It is demonstrated how these theorems can be applied to the behavior analysis of rigid plastic continual shells made of shape memory materials. The ultimate loads are estimated for rotating shells under external forces and in the case of shape recovery under heating.
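
    For orientation, the classical structure equations of a moving frame, in standard textbook notation (quoted here only for context; the paper's own notation and the internal degrees of freedom add further terms), read

$$
d\theta^i + \omega^i_{\ j}\wedge\theta^j = \Theta^i, \qquad
d\omega^i_{\ j} + \omega^i_{\ k}\wedge\omega^k_{\ j} = \Omega^i_{\ j},
$$

    where $\theta^i$ are the coframe 1-forms, $\omega^i_{\ j}$ the connection forms, and $\Theta^i$, $\Omega^i_{\ j}$ the torsion and curvature 2-forms; strain compatibility amounts to the integrability of these equations.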

    Citations: 2 (RSCI).
  2. Chernavskaya O.D.
    Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447

    The main statements and inferences of the Dynamical Theory of Information (DTI) are considered. It is shown that DTI makes it possible to distinguish two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the Natural-Constructive Approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the Explanatory Gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One subsystem is responsible for processing new information, learning, and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images. It is shown that symbols represent subjective (conventional) information created by the system itself, providing its individuality. The highest hierarchy levels, containing the symbols of abstract concepts, make it possible to interpret the concepts of “consciousness”, “sub-consciousness” and “intuition”, which refer to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge on the basis of the “Brain”.
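
    A toy Python schematic of the two-subsystem, block-hierarchical layout described above (all names and fields are illustrative assumptions, not the paper's model):

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    depth: int                 # 0 = image plates (distributed memory), >0 = symbol levels
    content: list = field(default_factory=list)

@dataclass
class Subsystem:
    role: str                  # what this half of the system is responsible for
    levels: list = field(default_factory=list)

# Two linked subsystems, by analogy with the brain hemispheres: one generates
# new information (learning, creativity), the other receives existing information.
architecture = [
    Subsystem("generation", [Level(d) for d in range(3)]),
    Subsystem("reception",  [Level(d) for d in range(3)]),
]
print([(s.role, len(s.levels)) for s in architecture])
```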

    Views (last year): 6.
  3. In this work we have developed a new efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid, which allows us to concentrate grid points in the regions where the flow variables change sharply and to coarsen the grid in the regions of their smooth behavior, significantly reducing the grid size. We represent the adaptive grid by a combination of several dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures allows us to speed up the program by a factor of 2 in comparison with the conventional approach, in which the grid is represented by dynamic data structures only.
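
    A minimal Python sketch of this dynamic-plus-static idea (the 1D setting and all names are assumptions for illustration; the actual program is 3D and parallel):

```python
import numpy as np

# A dynamic tree drives refinement/coarsening; the active cells are then
# flattened into flat NumPy arrays on which flow variables are actually updated.

class Cell:
    def __init__(self, x, dx):
        self.x, self.dx = x, dx
        self.children = []                       # empty => leaf (active cell)

    def refine_where(self, needs_refinement):
        if needs_refinement(self):
            self.children = [Cell(self.x, self.dx / 2),
                             Cell(self.x + self.dx / 2, self.dx / 2)]
        for c in self.children:
            c.refine_where(needs_refinement)

    def leaves(self):
        if not self.children:
            yield self
        for c in self.children:
            yield from c.leaves()

root = Cell(0.0, 1.0)
root.refine_where(lambda c: c.dx > 0.25 and c.x < 0.5)  # refine where the field varies sharply

# Static stage: flatten the leaves once per regridding step; all arithmetic runs on arrays.
leaves = list(root.leaves())
x  = np.array([c.x  for c in leaves])
dx = np.array([c.dx for c in leaves])
u  = np.exp(-x)                                  # placeholder flow variable on the adaptive grid
print(len(leaves), u.sum())
```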

    We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design “computational cores – cache – main memory”. It has been shown that the microarchitecture of a computer system has a significant impact on scalability, i.e. the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtained a speedup of 14.2 on a processor with 32 cores and 2 cache levels, but a speedup of 22.2 on a processor with 32 cores and 3 cache levels. The execution time of a program on a computer model in gem5 is $10^4$–$10^5$ times greater than the execution time of the same program on a real computer and equals 1.5 hours for the most complex model.

    We also describe how to configure gem5 and how to perform simulations with it most efficiently.

  4. Beshtokov M.K.
    Numerical solution of integro-differential equations of fractional moisture transfer with the Bessel operator
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 353-373

    The paper considers integro-differential equations of fractional-order moisture transfer with the Bessel operator. The studied equations contain the Bessel operator and two Gerasimov – Caputo fractional differentiation operators with different orders $\alpha$ and $\beta$. Two types of integro-differential equations are considered: in the first case, the equation contains a non-local source, i.e. the integral of the unknown function over the space variable $x$; in the second case, it contains the integral over the time variable $\tau$, representing the memory effect. Similar problems arise in the study of processes with prehistory. To solve the differential problems for different relations between $\alpha$ and $\beta$, a priori estimates in differential form are obtained, from which the uniqueness and the stability of the solution with respect to the right-hand side and the initial data follow. For the approximate solution of the problems posed, difference schemes are constructed with the order of approximation $O(h^2+\tau^2)$ for $\alpha=\beta$ and $O(h^2+\tau^{2-\max\{\alpha,\beta\}})$ for $\alpha\neq\beta$. The uniqueness, stability and convergence of the solution are studied using the method of energy inequalities. A priori estimates for the solutions of the difference problems are obtained for different relations between $\alpha$ and $\beta$, from which the uniqueness and stability follow, as well as the convergence of the solution of the difference scheme to the solution of the original differential problem at a rate equal to the order of approximation of the difference scheme.
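
    For context, the standard L1 discretization of a Gerasimov – Caputo derivative of order $0<\alpha<1$ on a uniform grid $t_s=s\tau$ (a textbook formula, quoted here only to show where the $\tau^{2-\alpha}$ term comes from; the paper's scheme may differ in detail) is

$$
\partial_{0t}^{\alpha} u\big|_{t_{j+1}} \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{s=0}^{j} \big[(j-s+1)^{1-\alpha}-(j-s)^{1-\alpha}\big]\,\big(u^{s+1}-u^{s}\big),
$$

    with truncation error $O(\tau^{2-\alpha})$, consistent with the $O(h^2+\tau^{2-\max\{\alpha,\beta\}})$ order stated above when the two operators have different orders.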

  5. Mezentsev Y.A., Razumnikova O.M., Estraykh I.V., Tarasova I.V., Trubnikova O.A.
    Tasks and algorithms for optimal clustering of multidimensional objects by a variety of heterogeneous indicators and their applications in medicine
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 673-693

    The work describes the authors' formal statements of the clustering problem for a given number of clusters, algorithms for solving them, and the results of applying this toolkit in medicine.

    Because the formulated problems belong to the NP class, solving even relatively low-dimensional instances to proven optimality with exact algorithms is impossible in acceptable time.

    In this regard, we have proposed a hybrid algorithm that combines, at the initial stage, the accuracy of exact methods based on clustering in pairwise distances with, at the final stage, the speed of methods that solve the simplified problem of partitioning by cluster centers. Developing this direction further, a sequential hybrid clustering algorithm using random search in the swarm intelligence paradigm has been devised. The article describes it and presents the results of calculations on applied clustering problems.
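
    A schematic Python sketch of such a hybrid loop (assumed for illustration, not the authors' algorithm): pairwise-distance-based seeding at the initial stage, then fast center-based refinement perturbed by a swarm-style random search at the final stage.

```python
import numpy as np

def hybrid_cluster(X, k, n_particles=8, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances

    def labels(centers):
        return np.argmin(np.linalg.norm(X[:, None, :] - centers[None], axis=-1), axis=1)

    def cost(centers):
        return np.linalg.norm(X - centers[labels(centers)], axis=1).sum()

    # Stage 1: seed each particle with k mutually distant points (maximin rule).
    swarm = []
    for _ in range(n_particles):
        idx = [int(rng.integers(len(X)))]
        while len(idx) < k:
            idx.append(int(d[idx].min(axis=0).argmax()))
        swarm.append(X[idx].copy())

    # Stage 2: center-based refinement with swarm-style random exploration.
    best, best_cost = swarm[0], np.inf
    for _ in range(iters):
        for c in swarm:
            lab = labels(c)
            for j in range(k):                       # k-means-style center update
                if np.any(lab == j):
                    c[j] = X[lab == j].mean(axis=0)
            c += rng.normal(scale=0.01, size=c.shape)    # random perturbation
            cc = cost(c)
            if cc < best_cost:
                best, best_cost = c.copy(), cc
    return labels(best), best

X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(30, 2)) for m in (0, 3, 6)])
lab, centers = hybrid_cluster(X, k=3)
print(np.bincount(lab))    # roughly 30 points per cluster
```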

    To determine the effectiveness of the developed tools for optimal clustering of multidimensional objects by a variety of heterogeneous indicators, a number of computational experiments were performed using data sets that include socio-demographic, clinical-anamnestic, electroencephalographic and psychometric data on the cognitive status of patients of a cardiology clinic. Experimental evidence was obtained for the effectiveness of local search algorithms in the swarm intelligence paradigm within the hybrid algorithm for solving optimal clustering problems.

    The results of the calculations indicate that the main obstacle to using the discrete optimization apparatus, namely the limit on the tractable dimensions of problem instances, has effectively been removed. We have shown that this obstacle is eliminated while the clustering results remain acceptably close to optimal. The applied significance of the obtained clustering results also stems from the fact that the developed optimal clustering toolkit is supplemented by an assessment of the stability of the formed clusters. Given known risk factors (the presence of stenosis or older age), this makes it possible to additionally identify patients whose cognitive resources are insufficient to overcome the influence of surgical anesthesia and who therefore show a unidirectional postoperative deterioration of complex visual-motor reaction, attention and memory. This effect indicates the possibility of a differentiated classification of patients using the proposed tools.

  6. Zhmurov A.A., Barsegov V.A., Trifonov S.V., Kholodov Y.A., Kholodov A.S.
    Efficient pseudorandom number generators for biomolecular simulations on graphics processors
    Computer Research and Modeling, 2011, v. 3, no. 3, pp. 287-308

    Langevin Dynamics, Monte Carlo, and all-atom Molecular Dynamics simulations in implicit solvent require a reliable source of pseudorandom numbers generated at each step of the calculation. We present two main approaches to implementing pseudorandom number generators on a GPU. In the first approach, inherent in CPU-based calculations, one PRNG produces a stream of pseudorandom numbers in each thread of execution, whereas the second approach builds on the ability of different threads to communicate, thus sharing random seeds across the entire device. We exemplify these approaches through the development of the Ran2, Hybrid Taus, and Lagged Fibonacci algorithms. As an application-based test of randomness, we carry out LD simulations of N independent harmonic oscillators coupled to a stochastic thermostat. This model allows us to assess the statistical quality of the pseudorandom numbers. We also profile the performance of these generators in terms of computational time, memory usage, and the speedup factor (CPU/GPU time).
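
    As a rough illustration of one of the named generators, here is a CPU-side Python sketch of the classic combined Tausworthe/LCG construction usually called Hybrid Taus (parameters follow L'Ecuyer's taus88 plus a standard LCG; a textbook reconstruction, not the authors' GPU code):

```python
M32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def taus_step(z, s1, s2, s3, m):
    # One step of a Tausworthe component generator (L'Ecuyer).
    b = (((z << s1) & M32) ^ z) >> s2
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z, a=1664525, c=1013904223):
    # Linear congruential component; breaks up the Tausworthe lattice structure.
    return (a * z + c) & M32

class HybridTaus:
    def __init__(self, z1, z2, z3, z4):
        # Tausworthe seeds must exceed 1, 7 and 15 respectively.
        self.z = [z1, z2, z3, z4]

    def next_uint32(self):
        z = self.z
        z[0] = taus_step(z[0], 13, 19, 12, 0xFFFFFFFE)
        z[1] = taus_step(z[1], 2, 25, 4, 0xFFFFFFF8)
        z[2] = taus_step(z[2], 3, 11, 17, 0xFFFFFFF0)
        z[3] = lcg_step(z[3])
        return z[0] ^ z[1] ^ z[2] ^ z[3]

    def next_float(self):
        return self.next_uint32() * 2.3283064365386963e-10  # 2**-32

rng = HybridTaus(12345, 67890, 424242, 31337)
print([round(rng.next_float(), 4) for _ in range(4)])
```

    On a GPU, each thread would hold its own (z1, ..., z4) state, which is the per-thread approach described in the abstract.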

    Views (last year): 11. Citations: 2 (RSCI).
  7. Prokoptsev N.G., Alekseenko A.E., Kholodov Y.A.
    Traffic flow speed prediction on transportation graph with convolutional neural networks
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 359-367

    Short-term prediction of road traffic conditions is one of the main tasks of transportation modelling. Its main purposes are traffic control, reporting of accidents, avoiding traffic jams through knowledge of the traffic flow, and subsequent transportation planning. A number of solutions exist, both model-driven and data-driven, that have proven successful in capturing the dynamics of traffic flow. Nevertheless, most space-time models suffer from high mathematical complexity and low efficiency. Artificial neural networks, one of the prominent data-driven approaches, show promising performance in modelling the complexity of traffic flow. We present a neural network architecture for traffic flow prediction on a real-world road network graph. The model is based on the combination of a recurrent neural network and a graph convolutional neural network, where the recurrent neural network models temporal dependencies and the convolutional neural network extracts spatial features from the traffic. To make predictions multiple steps ahead, the encoder-decoder architecture is used, which reduces the propagation of noise due to inexact predictions. To model the complexity of traffic flow, we employ a multilayered architecture. Deeper neural networks are more difficult to train; to speed up the training process, we use skip connections between layers, so that each layer learns only the residual function with respect to the outputs of the previous layer. The resulting neural network was trained on raw data from traffic flow detectors of the US highway system with a resolution of 5 minutes. Three metrics were used to estimate the quality of the prediction: mean absolute error, mean relative error, and mean squared error. For all metrics the proposed model achieved a lower prediction error than previously published models such as Vector Auto Regression, LSTM, and Graph Convolutional GRU.
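
    A minimal NumPy sketch of the residual skip-connection idea mentioned above (purely illustrative; in the actual model the role of each layer is played by a graph-convolution plus recurrent block, not a dense matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_fn(W):
    # One illustrative nonlinear layer.
    return lambda x: np.tanh(x @ W)

d, n_layers = 16, 6
layers = [layer_fn(0.1 * rng.standard_normal((d, d))) for _ in range(n_layers)]

def forward(x):
    # Each layer adds its output to its input, so it only has to learn the
    # residual correction on top of the previous layer's representation.
    for f in layers:
        x = x + f(x)
    return x

x = rng.standard_normal((4, d))      # batch of 4 feature vectors
print(forward(x).shape)              # (4, 16)
```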

    Views (last year): 36.
  8. Malkov S.Yu.
    Regimes with exacerbation in the history of mankind or memories of the future
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 931-947

    The article describes regimes with exacerbation (blow-up regimes) in social and biological history. The possible causes of the sharp acceleration of biological and social processes in certain historical periods are analyzed. Mathematical modeling shows that hyperbolic trends in social and biological evolution may be the result of transitional processes during periods of expansion of ecological niches. Biological speciation accelerates because earlier life forms change their habitat, making it more diverse and saturating it with organic matter, thus creating favourable conditions for the emergence of new species. In social history, the expansion of ecological niches is associated with technological revolutions, of which the most important were: the Neolithic revolution, the transition from a foraging to a producing economy (10 thousand years ago); the "urban revolution", the shift from the Neolithic to the Bronze Age (5 thousand years ago); the "Axial Age", the transition to iron tools (2.5 thousand years ago); and the Industrial Revolution, the transition from manual labor to machine production (200 years ago). All of these technological revolutions were accompanied by dramatic population growth and by changes in the social and political spheres. Thus, the hyperbolic character, observed over the last century, of some demographic, economic and other indicators of world dynamics is a consequence of the transition process that began with the Industrial Revolution and prepares the transition of society to a new stage of its development. The singularity point of a hyperbolic trend marks the end of the initial phase of the process and the transition to the final stage. A mathematical model describing the demographic and economic changes in an era of change is proposed. It is shown that a direct analogue of the contemporary situation in this sense is the Axial Age (from the 8th century BC to the beginning of our era). The existence of this analogy makes it possible to look into the future by studying the past.
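
    For reference, the textbook form of a hyperbolic (blow-up) trend of the kind discussed here (this canonical quadratic-growth equation is quoted for illustration, not necessarily the article's model) is

$$
\frac{dN}{dt} = \frac{N^2}{C} \quad\Longrightarrow\quad N(t) = \frac{C}{t_* - t},
$$

    so the solution diverges at a finite singularity time $t_*$; a real system leaves the hyperbolic trajectory before $t_*$, which is exactly the transition to the final stage mentioned above.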

  9. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth of computing power, CPU designers need to find ways of lowering the memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three further ways of reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.
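
    A toy Python sketch of XOR-based index hashing (the field widths and address layout are assumptions for illustration):

```python
# Map a physical address to a last-level-cache bank and set by XOR-folding
# higher address bits into the index bits, spreading power-of-two strides
# that would otherwise hammer a single bank/set.

LINE_BITS, BANK_BITS, SET_BITS = 6, 5, 10   # 64 B lines, 32 banks, 1024 sets/bank

def xor_index(addr: int) -> tuple[int, int]:
    line = addr >> LINE_BITS
    bank = (line ^ (line >> BANK_BITS) ^ (line >> (2 * BANK_BITS))) & (2**BANK_BITS - 1)
    s = line >> BANK_BITS
    set_idx = (s ^ (s >> SET_BITS)) & (2**SET_BITS - 1)
    return bank, set_idx

# With modulo interleaving, a 128 KiB stride would always hit bank 0;
# with the XOR hash it touches many banks:
print({xor_index(i * (1 << 17))[0] for i in range(32)})
```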

    The second optimisation is aimed at reducing replication across cache levels by automatically switching to the exclusive scheme when that appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly a 1% average performance increase.
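
    A simplified Python sketch of the base-delta idea behind BDI-style compression (plain Base-Delta, assumed for illustration; the BDI*-HL variant in the paper differs in detail):

```python
import struct

def bdi_compress(line: bytes, delta_bytes: int = 1):
    # Try to encode a 64-byte line as one 8-byte base plus small signed deltas.
    assert len(line) == 64
    words = struct.unpack("<8Q", line)          # eight 8-byte words
    base = words[0]
    lim = 1 << (8 * delta_bytes - 1)
    deltas = [(w - base) % 2**64 for w in words]
    deltas = [d - 2**64 if d >= 2**63 else d for d in deltas]  # signed deltas
    if all(-lim <= d < lim for d in deltas):
        return base, deltas                     # 8 + 8*delta_bytes bytes instead of 64
    return None                                 # line stored uncompressed

line = struct.pack("<8Q", *[0x7FFF0000 + 3 * i for i in range(8)])
print(bdi_compress(line))   # compresses: all deltas fit in one signed byte
```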

    All three optimisations can be combined; together they demonstrated a performance gain of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  10. Kiselev M.V.
    Exploration of 2-neuron memory units in spiking neural networks
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 401-416

    Working memory mechanisms in spiking neural networks consisting of leaky integrate-and-fire neurons with adaptive threshold and synaptic plasticity are studied in this work. Moderate-size networks comprising thousands of neurons were explored. Working memory is the ability of a network to keep in its state the information about recent stimuli presented to the network, such that this information is sufficient to determine which stimulus has been presented. In this study, the network state is defined as the current characteristics of network activity only (without the internal states of its neurons). In order to discover the neuronal structures serving as a possible substrate of the memory mechanism, optimization of the network parameters and structure using a genetic algorithm was carried out. Two kinds of neuronal structures with the desired properties were found: neuron pairs mutually connected by strong synaptic links, and long tree-like neuronal ensembles. It was shown that only the neuron pairs are suitable for an efficient and reliable implementation of working memory. Properties of such memory units and of the structures formed by them are explored in the present study. It is shown that the characteristics of the studied two-neuron memory units can easily be set by an appropriate choice of the parameters of their neurons and synaptic connections. Besides that, this work demonstrates that ensembles of these structures can provide the network with the capability of unsupervised learning to recognize patterns in the input signal.
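
    A toy Python sketch of the bistable two-neuron unit described here (a leaky integrate-and-fire pair with mutual excitation; all constants are illustrative assumptions, and the adaptive threshold is omitted for brevity):

```python
import numpy as np

# Two LIF neurons with strong mutual excitatory links: a brief input pulse
# switches the pair into self-sustained firing, which "stores" one bit.

dt, T = 1.0, 300                        # time step and duration, ms
tau_m, v_th, v_reset, w = 20.0, 1.0, 0.0, 1.6
v = np.zeros(2)                         # membrane potentials
spikes = np.zeros(2)                    # spikes on the previous step
fired = []

for t in range(T):
    ext = 1.5 if 50 <= t < 55 else 0.0              # brief stimulus to neuron 0
    inp = np.array([ext, 0.0]) + w * spikes[::-1]   # each neuron drives the other
    v += dt / tau_m * (-v) + inp                    # leaky integration
    spikes = (v >= v_th).astype(float)
    v[spikes > 0] = v_reset
    fired.append(spikes.copy())

print("spikes after stimulus ended:", int(np.sum(fired[60:])))  # > 0 => memory holds
```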
