Model of formation of primary behavioral patterns with adaptive behavior based on the combination of random search and experience
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 941-950
In this paper we propose an adaptive algorithm that simulates the formation of initial behavioral skills, using the ‘eye-arm’ system of an animat as an example. Such skills are formed, for instance, when a child learns to control its hands by relating initially unidentified spots on the retina of its eye to the position of a real object. Since body-control skills are not ‘hardcoded’ in the brain and spinal cord at the level of instincts, a human child, like the young of most other mammals, has to develop them through exploratory behavior. Exploratory behavior begins with trial and error, and its contribution gradually decreases as the organism masters its body and its environment. Because correct behavior patterns do not yet exist at this stage of development, the only way to select the right skills is positive reinforcement upon reaching the goal. A key feature of the proposed algorithm is that, in imprinting mode, it fixes only the final action that led to success and, importantly, only when that action led to a familiar, already imprinted situation that reliably leads to success. Over time the continuous chain of correct actions lengthens: previous positive experience is exploited to the maximum, while negative experience is ‘forgotten’ and not reused.
Thus random search is gradually replaced by purposeful action, as is observed in real young animals. In this way the algorithm establishes a correspondence between the laws of the external world and the ‘inner feelings’, the internal state, of the animat. The proposed animat model uses two neural networks: 1) network NET1 receives as input the current position of the ‘hand’ and the target point, and outputs the motor commands that drive the animat’s manipulator ‘hand’ to the target point; 2) network NET2 receives as input the target coordinates and the current coordinates of the ‘hand’, and outputs an estimate of the likelihood that the animat already ‘knows’ this situation and ‘knows’ how to react to it. With this architecture the animat can rely on the ‘experience’ of the neural networks in recognized situations, when the response of NET2 is close to 1, and fall back on random search when it has no experience of operating in this area of the visual field (response of NET2 close to 0).
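To make the exploit-or-explore scheme concrete, here is a minimal Python sketch of one decision step. The abstract does not describe the network internals, so net1 and net2 are hypothetical placeholder callables, and the 0.9 familiarity threshold is an assumption standing in for "response of NET2 close to 1".

```python
import random

KNOWN_THRESHOLD = 0.9  # assumed cutoff for "NET2 response close to 1"

def control_step(net1, net2, hand_pos, target_pos):
    """One decision step: exploit imprinted experience or search randomly."""
    familiarity = net2(hand_pos, target_pos)  # NET2 output in [0, 1]
    if familiarity > KNOWN_THRESHOLD:
        # Familiar situation: NET1 produces the motor command.
        return net1(hand_pos, target_pos)
    # Unfamiliar situation: trial-and-error random search.
    return (random.uniform(-1, 1), random.uniform(-1, 1))

def imprint(memory, situation, action, success):
    """Fix only the final action that led to success; failures are 'forgotten'."""
    if success:
        memory.append((situation, action))

# Toy stand-ins for the two networks, for demonstration only.
demo_net1 = lambda hand, target: (target[0] - hand[0], target[1] - hand[1])
demo_net2 = lambda hand, target: 1.0 if abs(hand[0] - target[0]) < 0.5 else 0.0
print(control_step(demo_net1, demo_net2, (0.0, 0.0), (0.3, 0.2)))
```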
-
Optimisation of parameters and structure of a parallel spherical manipulator
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1523-1534
The paper studies the mathematical model and kinematics of a parallel spherical manipulator. This type of manipulator was proposed back in the 1980s and has since found application in exoskeletons and rehabilitation robots thanks to a structure that allows it to imitate the natural joint movements of the human body.
The parallel spherical manipulator is a robot with three legs and two platforms: a base platform and a mobile platform. Each leg consists of two arc-shaped support links. Mathematically, the manipulator can be described by two virtual pyramids placed on top of each other.
The paper considers two manipulator configurations, classical and asymmetric, and solves the basic kinematic problems for each. The study shows that the asymmetric design yields the largest workspace, especially when the motors are mounted at the joints between the support links within each leg.
To optimize the parameters of the parallel spherical manipulator, we introduced a metric of usable workspace volume: the volume of the sector of the sphere in which the robot experiences neither internal collisions nor singular states. Three types of singular states are possible in a parallel spherical manipulator: serial, parallel, and mixed. All three types were taken into account when calculating the usable volume, and we solved the problem of maximizing it.
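The abstract does not give the concrete computation, but a usable-volume metric of this kind lends itself to a Monte Carlo estimate. The sketch below is only an illustration under that assumption: the feasibility predicate is a hypothetical stand-in for the paper's actual collision and singularity checks, and all numeric values are placeholders.

```python
import math
import random

def sample_direction_in_sector(half_angle):
    """Uniformly sample a direction inside a spherical sector (cone) of the
    given half-angle about the z-axis."""
    cos_t = random.uniform(math.cos(half_angle), 1.0)  # uniform on the cap
    return math.acos(cos_t), random.uniform(0.0, 2.0 * math.pi)

def pose_is_usable(theta, phi, params):
    """Hypothetical stand-in: the real test would run the inverse kinematics
    and check internal collisions plus the serial, parallel and mixed
    singularity conditions."""
    return (theta < params["collision_limit"]
            and abs(math.sin(3 * phi)) > params["sing_margin"])

def usable_volume(params, half_angle, radius, n_samples=100_000):
    """Monte Carlo estimate: fraction of feasible directions times the
    sector volume, (2/3)*pi*r^3*(1 - cos(half_angle))."""
    ok = sum(pose_is_usable(*sample_direction_in_sector(half_angle), params)
             for _ in range(n_samples))
    sector = (2.0 / 3.0) * math.pi * radius**3 * (1.0 - math.cos(half_angle))
    return sector * ok / n_samples

params = {"collision_limit": 0.8, "sing_margin": 0.1}  # illustrative values
print(usable_volume(params, half_angle=1.0, radius=1.0))
```

An optimizer would then maximize this estimate over the manipulator's geometric parameters.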
We found that the asymmetric configuration of the spherical manipulator maximizes the workspace when the motors are located at the articulation points of the leg support links. At the same time, the parameter $\beta_1$ must equal zero degrees to maximize the workspace. This allowed us to build a prototype in which the lower links of the legs are replaced by a radiused rail along which the motors travel, reducing the linear dimensions of the robot and improving the stiffness of the structure.
The results obtained can be used to optimize the parameters of the parallel spherical manipulator in various industrial and scientific applications, as well as for further research on other types of parallel robots and manipulators.
-
Mathematical models of combat and military operations
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 907-920
Modeling the fight against terrorist, pirate and robbery acts at sea is an urgent scientific task, given the prevalence of such acts of force and the small number of works on the issue. The actions of pirates and terrorists are diverse. Using a base ship, they can attack vessels up to 450–500 miles from the coast. Having chosen a target, they pursue it and use weapons to board it. Actions to free a ship captured by pirates or terrorists include blocking the ship, predicting where the pirates might be on board, penetrating it (from board to board, by air, or from under water) and clearing the ship’s premises. An analysis of the specialized literature on the actions of pirates and terrorists showed that an act of force (and the actions to neutralize it) consists of two stages: first, blocking the vessel, i.e. forcing it to stop, and second, neutralizing the crew (the terrorist group or pirates), including boarding the vessel and clearing it. The stages of this cycle are matched by two indicators: the probability of blocking and the probability of neutralization. The variables of the force-act model are the numbers of ships (vessels, boats) of the attackers and the defenders, as well as the strength of the attackers’ capture group and of the crew of the ship under attack. The model parameters (indicators of naval and combat superiority) were estimated by the maximum likelihood method on an international database of incidents at sea. The values of these parameters are 7.6–8.5. Such high values of the superiority parameters reflect the parties’ capabilities in acts of force. An analytical method for calculating the superiority parameters is proposed and statistically substantiated. The model takes into account the parties’ ability to detect the enemy, the speed and maneuverability of the vessels, the height of the vessel’s side and the characteristics of the boarding equipment, the characteristics of weapons and protective equipment, and so on. Using the Becker model and the theory of discrete choice, the probability of failure of the act of force is estimated. The significance of the obtained models for combating acts of force at sea lies in the possibility of quantitatively substantiating measures to protect a ship from pirate and terrorist attacks, as well as deterrence measures aimed at preventing attacks (armed guards on board, assistance from warships and helicopters).
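As a rough illustration of how a two-stage discrete-choice model of this kind fits together: the abstract gives the structure (blocking, then neutralization, each with a superiority parameter) but not the exact formulas, so the logistic functional form and all numbers below are assumptions, not the paper's model.

```python
def p_stage(superiority, force_ratio):
    """Hypothetical stage-success probability: a logistic (discrete-choice)
    form in the force ratio, scaled by a superiority parameter. The paper's
    exact formulas are not reproduced here."""
    return force_ratio**superiority / (1.0 + force_ratio**superiority)

def p_force_act_fails(naval_sup, combat_sup, ship_ratio, crew_ratio):
    """The act of force fails unless both stages succeed: blocking the
    vessel, then neutralizing its crew."""
    return 1.0 - p_stage(naval_sup, ship_ratio) * p_stage(combat_sup, crew_ratio)

# With superiority parameters in the reported 7.6-8.5 range, even a modest
# force ratio pushes a stage probability strongly toward 0 or 1:
print(p_force_act_fails(8.0, 8.0, ship_ratio=1.5, crew_ratio=0.9))
```

In the paper itself, such parameters are fitted by maximum likelihood against the international incident database rather than set by hand.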
-
The development of an ARM system on chip based processing unit for data stream computing
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 505-509
Modern big-science projects are becoming so data intensive that offline processing of stored data is infeasible. High-data-throughput computing, or Data Stream Computing, for future projects must deal with terabytes of data per second, which cannot be kept in long-term storage. Conventional data centres based on typical server-grade hardware are expensive and biased towards processing power. The overall I/O bandwidth can be increased through massive parallelism, usually at the expense of excessive processing power and high energy consumption. An ARM System on Chip (SoC) based processing unit may address the issues of system I/O and CPU balance, affordability and energy efficiency, since ARM SoCs are mass produced and designed to be energy efficient for use in mobile devices. Such a processing unit is currently in development, with a design goal of 20 Gb/s I/O throughput and significant processing power. The I/O capabilities of consumer ARM SoCs are discussed, along with performance and I/O throughput tests to date.
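For a sense of the scale these figures imply: the 20 Gb/s per unit is the stated design goal, while the 1 TB/s aggregate stream rate below is an assumed round figure standing in for "terabytes of data per second".

```python
# Back-of-envelope scaling for a massively parallel array of SoC units.
stream_gbit_per_s = 1.0 * 8 * 1000     # 1 TB/s expressed in Gb/s
units_needed = stream_gbit_per_s / 20  # units at 20 Gb/s each
print(f"{units_needed:.0f} processing units")  # -> 400
```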
-
A CPU benchmarking characterization of ARM based processors
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586
Big-science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk after minor filtering and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy: SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors is presented.
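The abstract does not name the benchmark suite used; as a minimal illustration of the kind of measurement involved, here is a hypothetical single-core timing harness. Absolute numbers from interpreted Python are meaningless; only relative comparisons across machines would be.

```python
import time

def cpu_rate(n=10_000_000):
    """Time a fixed floating-point workload and report operations per second.
    A toy stand-in for a real benchmark suite, for illustration only."""
    t0 = time.perf_counter()
    acc = 0.0
    for i in range(n):
        acc += i * 1e-9          # one multiply + one add per iteration
    elapsed = time.perf_counter() - t0
    return 2 * n / elapsed       # rough FLOP/s for this interpreter loop

if __name__ == "__main__":
    print(f"{cpu_rate():.3e} FLOP/s (interpreted; relative comparisons only)")
```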
-
Memory benchmarking characterisation of ARM-based SoCs
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 607-613
Computational intensity is traditionally the focus of large-scale computing system designs, generally leaving such designs ill-equipped to handle throughput-oriented workloads efficiently. In addition, cost and energy consumption for large-scale computing systems remain a source of concern. A potential solution involves using low-cost, low-power ARM processors in large arrays in a manner that provides massive parallelisation and high rates of data throughput (relative to existing large-scale computing designs). Giving greater priority to both throughput rate and cost increases the relevance of primary memory performance and design optimisations to overall system performance. Using several primary-memory benchmarks to evaluate various aspects of RAM and cache performance, we characterise the performance of four different models of ARM-based system-on-chip, namely the Cortex-A9, Cortex-A7, Cortex-A15 r3p2 and Cortex-A15 r3p3. We then discuss the relevance of these results to high-volume computing and the potential for ARM processors.
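The abstract does not list the specific benchmarks used; a STREAM-style triad is the canonical probe of sustained RAM bandwidth, and a minimal NumPy sketch of one looks like this. Array sizes and repeat counts are illustrative assumptions.

```python
import time
import numpy as np

def stream_triad(n=20_000_000, repeats=5):
    """STREAM-like triad a = b + s*c, a standard probe of sustained RAM
    bandwidth. n must be large enough that the arrays overflow the
    last-level cache, so that main memory rather than cache is measured."""
    b, c = np.random.rand(n), np.random.rand(n)
    a = np.empty_like(b)
    s = 3.0
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, s, out=a)  # a = s*c
        a += b                    # a = b + s*c
        best = min(best, time.perf_counter() - t0)
    # Nominal STREAM traffic: read b, read c, write a (8-byte doubles).
    # The two-pass NumPy version touches a twice, so this slightly
    # understates the true traffic.
    return 3 * n * 8 / best / 1e9  # GB/s

if __name__ == "__main__":
    print(f"~{stream_triad():.1f} GB/s sustained")
```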