- Parallel calculations in the Darwin PIC-model
Computer Research and Modeling, 2015, v. 7, no. 1, pp. 61-69
An approach to the parallel implementation of low-frequency PIC algorithms is proposed that takes into account the peculiarities of the nonradiative (Darwin) field approximation. Its advantages and the specifics of its adaptation to the basic computer architectures used for high-performance computing are discussed.
- The algorithm of the method for calculating quality classes’ boundaries for quantitative systems’ characteristics and for determination of interactions between characteristics. Part 2. Calculation for three or more quality classes
Computer Research and Modeling, 2016, v. 8, no. 1, pp. 37-54
The method for calculating the boundaries of quality classes for quantitative characteristics of systems with arbitrary properties is adapted to the search for the boundaries of three quality classes. Among other results, this adaptation makes it possible to determine boundaries between quality classes when both high and low values of the indicator characteristic of the system condition are "unacceptable", and when both high and low values of the factors affecting the system are likewise "inadmissible".
- Four-factor computing experiment for the random walk on a two-dimensional square field
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 905-918
Nowadays random search has become a widespread and effective tool for solving complex optimization and adaptation problems. In this work, the average duration of a random search of one object for another on a square field is studied as a function of several factors. The problem was solved by carrying out a multifactor experiment with 4 factors and an orthogonal plan of 54 rows. Within each row, the initial conditions and the cellular-automaton transition rules were simulated and the duration of the search of one object for another was measured. As a result, a regression model of the average duration of the random search was constructed as a function of the four factors considered, which specify the initial positions of the two objects, the conditions of their movement, and the detection rule. The most significant of the considered factors determining the average search time were identified, and the constructed model was interpreted in terms of the random search problem. An important result of the work is that the model reveals the qualitative and quantitative influence of the initial positions of the objects, the lattice size and the transition rules on the average search duration. It is shown that initial proximity of the objects on the lattice does not guarantee a quick search if both of them move. In addition, it is quantitatively estimated by how many times the average search time can increase or decrease when the speed of the searching object grows by 1 unit, or when the field size grows by 1 unit, for different initial positions of the two objects. The number of steps needed to find the object grows exponentially with the lattice size when the other factors are fixed. The conditions for the greatest increase in the average search duration were also found: the maximum separation of the objects combined with the immobility of one of them; in that case enlarging the field size by 1 unit (for example, from $4 \times 4$ to $5 \times 5$) can increase the average search duration by a factor of $e^{1.69} \approx 5.42$. The problem considered in the work may be relevant for applications in state security as well as, for example, in queueing theory.
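The quantity studied in the abstract above, the average number of steps before a randomly walking searcher reaches a target, can be estimated by direct simulation. The following minimal Python sketch shows one such Monte-Carlo estimate; the specific transition rule (uniform step to a 4-neighbour, clamping at the field border) and detection rule (coincidence of cells), as well as the function names, are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): Monte-Carlo estimate of the average
# number of steps until a randomly walking "searcher" lands on the cell occupied by
# a "target" on an n x n square field. Transition and detection rules are assumed.
import random

def step(pos, n, moves_per_tick):
    """Move an object moves_per_tick times, each time to a random 4-neighbour."""
    x, y = pos
    for _ in range(moves_per_tick):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), n - 1)   # clamp at the field border
        y = min(max(y + dy, 0), n - 1)
    return x, y

def search_duration(n, searcher, target, v_searcher=1, v_target=0, max_steps=10**6):
    """Number of ticks until the searcher reaches the target's cell."""
    for t in range(1, max_steps + 1):
        searcher = step(searcher, n, v_searcher)
        target = step(target, n, v_target)
        if searcher == target:
            return t
    return max_steps

def average_duration(n, searcher, target, runs=1000, **kw):
    return sum(search_duration(n, searcher, target, **kw) for _ in range(runs)) / runs

# Example: maximally separated objects, immobile target, on 4x4 and 5x5 fields.
print(average_duration(4, (0, 0), (3, 3)))
print(average_duration(5, (0, 0), (4, 4)))
```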
- Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248
The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. An analysis of the Russian and international literature showed that the most effective algorithms for CT noise reduction use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. We therefore developed an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was implemented in C++11 in Microsoft Visual Studio 2015. The main distinctive feature of the developed algorithm is the use of an improved mathematical model of CT noise, based on Poisson and Gaussian distributions of the logarithmic value, developed earlier by our team. This allows a more accurate determination of the noise level and, thus, of the data-processing threshold. Applying the algorithm yields CT data with a lower noise level. Visual evaluation showed increased information content of the processed data compared with the original data, clearer rendering of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical evaluation showed a more than 6-fold decrease of the standard deviation (SD) in the processed areas, while high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. The newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. Its main advantages are simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values assigned to each pixel of the CT image. Its operation is based on threshold criteria, which fit well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm is successfully operating as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
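For readers unfamiliar with the edge-preserving smoothing the abstract refers to, here is a minimal sketch of a classical 2D bilateral filter in Python. The authors' method is a simplified bilateral filter with three-dimensional accumulation and a Poisson-Gaussian noise model; the generic Gaussian kernels and parameters below are assumptions made only for illustration.

```python
# Minimal sketch of a classical 2D bilateral filter (generic, not the paper's
# simplified 3D variant): each output pixel is a weighted average of its window,
# where weights combine spatial closeness and intensity similarity.
import numpy as np

def bilateral_filter(img, radius=2, sigma_spatial=2.0, sigma_range=20.0):
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    # Precompute the spatial (domain) Gaussian weights for the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    padded = np.pad(img.astype(np.float64), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range (photometric) weights: pixels with similar values count more.
            range_w = np.exp(-((window - padded[i + radius, j + radius])**2)
                             / (2 * sigma_range**2))
            weights = spatial_w * range_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Example on synthetic Poisson-like noisy data:
noisy = np.random.poisson(100, size=(64, 64)).astype(np.float64)
denoised = bilateral_filter(noisy)
```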
- Mirror descent for constrained optimization problems with large subgradient values of functional constraints
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317
The paper is devoted to the problem of minimizing a non-smooth functional $f$ under a non-positive, non-smooth, Lipschitz-continuous functional constraint $g$. We consider the formulation of the problem in the case of quasi-convex functionals. New step-size strategies and adaptive stopping rules for Mirror Descent are proposed for the considered class of problems. It is shown that the methods are applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, an optimal method is obtained for optimization problems with strongly convex objective functionals. Convergence-rate estimates for the considered methods are derived depending on the level of smoothness of the objective functional. These estimates indicate the optimality of the considered methods from the point of view of the theory of lower oracle bounds; in particular, the optimality of our approach is proved for Hölder-continuous quasi-convex (sub)differentiable objective functionals. The cases of quasi-convex and strongly (quasi-)convex objective functionals and functional constraints are considered separately. The paper presents numerical experiments demonstrating the advantages of the considered methods.
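To give a concrete picture of the kind of scheme discussed above, the following Python sketch implements a standard switching (sub)gradient Mirror Descent for $\min f(x)$ subject to $g(x) \le 0$ in the simplest Euclidean prox setup. The normalized step sizes and the averaging of "productive" iterates are textbook choices; the paper's adaptive step-size strategies, stopping rules and restart technique are not reproduced here.

```python
# Illustrative switching subgradient / Mirror Descent sketch (Euclidean prox):
# take an objective step when the constraint is (almost) satisfied, otherwise
# take a constraint step to restore feasibility.
import numpy as np

def mirror_descent_constrained(f, g, grad_f, grad_g, x0, eps, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    productive = []          # iterates where the constraint is (almost) satisfied
    for _ in range(max_iter):
        if g(x) <= eps:      # "productive" step: move along the objective
            h = grad_f(x)
            productive.append(x.copy())
        else:                # "non-productive" step: move along the constraint
            h = grad_g(x)
        x = x - (eps / (np.linalg.norm(h)**2 + 1e-12)) * h
    # A common output for such schemes: average of the productive iterates.
    return np.mean(productive, axis=0) if productive else x

# Toy example: minimize ||x - a||_1 subject to <c, x> - 1 <= 0.
a, c = np.array([2.0, -1.0]), np.array([1.0, 1.0])
f = lambda x: np.abs(x - a).sum()
g = lambda x: c @ x - 1.0
grad_f = lambda x: np.sign(x - a)
grad_g = lambda x: c
print(mirror_descent_constrained(f, g, grad_f, grad_g, np.zeros(2), eps=1e-2))
```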
- Parametric identification of dynamic systems based on external interval estimates of phase variables
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 299-314
An important role in the construction of mathematical models of dynamic systems is played by inverse problems, which include, in particular, the problem of parametric identification. Unlike classical models that operate with point values, interval models give upper and lower bounds on the quantities under study. The paper considers an interpolation approach to solving interval problems of parametric identification of dynamic systems for the case when the experimental data are represented by external interval estimates. The goal of the proposed approach is to find an interval estimate of the model parameters for which the external interval estimate of the solution of the direct modeling problem contains the experimental data, or minimizes the deviation from them. The approach is based on the adaptive interpolation algorithm for modeling dynamic systems with interval uncertainties, which makes it possible to obtain the dependence of the phase variables on the system parameters in explicit form. The problem of minimizing the distance between the experimental data and the model solution in the space of the interval bounds of the model parameters is formulated, and an expression for the gradient of the objective function is obtained. The effectiveness of the proposed approach is demonstrated on a representative set of problems.
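A rough sketch of the identification objective described above is given below: choose parameter bounds so that the interval enclosure of the model trajectory matches the experimental interval data. The enclosure here is approximated crudely by evaluating the model at the corners of the parameter box (a monotonicity assumption), the model itself is a hypothetical exponential decay, and a generic derivative-free optimizer replaces the analytic gradient; none of this reproduces the adaptive interpolation algorithm used in the paper.

```python
# Crude illustration of interval parametric identification: minimize the squared
# deviation between the (approximate) interval bounds of the model output and the
# experimental interval data. Corner sampling and the model are assumptions.
import numpy as np
from itertools import product
from scipy.optimize import minimize

def model(p, t):                     # hypothetical scalar model x(t; p) = p0 * exp(-p1 * t)
    return p[0] * np.exp(-p[1] * t)

def enclosure(p_lo, p_hi, t):
    """Approximate lower/upper bounds of the model output over the parameter box."""
    corners = np.array([model(np.array(c), t) for c in product(*zip(p_lo, p_hi))])
    return corners.min(axis=0), corners.max(axis=0)

def objective(z, t, y_lo, y_hi):
    p_lo, p_hi = z[:2], z[2:]        # note: the ordering p_lo <= p_hi is not enforced here
    m_lo, m_hi = enclosure(p_lo, p_hi, t)
    return np.sum((m_lo - y_lo) ** 2 + (m_hi - y_hi) ** 2)

t = np.linspace(0.0, 2.0, 11)
y_lo, y_hi = 0.9 * np.exp(-1.1 * t), 1.1 * np.exp(-0.9 * t)   # synthetic interval data
z0 = np.array([0.5, 0.5, 1.5, 1.5])                           # initial [p_lo, p_hi]
res = minimize(objective, z0, args=(t, y_lo, y_hi), method="Nelder-Mead")
print(res.x)
```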
-
The history of the development of CUDA technology and its fundamental limitations are described. The article is intended for readers who are not familiar with graphics adapter programming features but want to evaluate the possibilities of applying GPU computing.
- Correlation and realization of quasi-Newton methods of unconstrained optimization
Computer Research and Modeling, 2016, v. 8, no. 1, pp. 55-78
Newton and quasi-Newton methods for unconstrained optimization based on Cholesky factorization with an adaptive step and finite-difference approximation of the first and second derivatives are considered. To increase the effectiveness of the quasi-Newton methods, a modified version of the Cholesky decomposition of the quasi-Newton matrix is suggested. It solves the problem of step scaling during descent, allows approximation of non-quadratic functions, and can be integrated with a trust-region method. An approach to increasing the effectiveness of Newton methods with finite-difference approximation of the first and second derivatives is offered. Results of a numerical study of the effectiveness of the algorithms are presented.
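The ingredients named in the abstract (Cholesky factorization of the Newton system, an adaptive step, finite-difference derivatives) can be illustrated with the minimal Python sketch below. It uses a simple diagonal-shift Cholesky and plain backtracking; the paper's modified Cholesky decomposition of the quasi-Newton matrix and its trust-region integration are not reproduced.

```python
# Minimal damped Newton method with finite-difference derivatives and a
# Cholesky-based solve of the Newton system (illustrative, not the paper's code).
import numpy as np

def fd_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def fd_hess(f, x, h=1e-4):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        H[:, i] = (fd_grad(f, x + e, h) - fd_grad(f, x - e, h)) / (2 * h)
    return 0.5 * (H + H.T)                      # symmetrize

def newton_fd(f, x0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = fd_grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        H = fd_hess(f, x)
        # Add a diagonal shift until the Cholesky factorization succeeds,
        # keeping the step a descent direction.
        tau = 0.0
        while True:
            try:
                L = np.linalg.cholesky(H + tau * np.eye(x.size))
                break
            except np.linalg.LinAlgError:
                tau = max(2 * tau, 1e-6)
        p = -np.linalg.solve(L.T, np.linalg.solve(L, g))
        alpha = 1.0                              # adaptive step: simple backtracking
        while f(x + alpha * p) > f(x) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * p
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(newton_fd(rosen, np.array([-1.2, 1.0])))   # expected to approach [1, 1]
```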
- Overset grids approach for topography modeling in elastic-wave modeling using the grid-characteristic method
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1049-1059
When modeling seismic wave propagation, it is important to take nontrivial topography into account, since topography causes multiple complex phenomena, such as diffraction at rough surfaces, complex propagation of Rayleigh waves, and side effects caused by wave interference. The primary goal of this research is to construct a method that implements the free-surface condition on topography using an overset curvilinear grid for the grid-characteristic method, while keeping the main grid structured and rectangular. The workability of the grid-characteristic method on overset grids (also known as the Chimera grid approach) is analyzed for a combination of a regular and a curvilinear grid. One benefit of this approach is a reduction of computational complexity, since simulation in a regular, homogeneous physical region on a sparse regular rectangular grid is simpler. A simplification of the mesh-building procedure (one grid is regular, and the other can be built automatically from surface data) is a side benefit. Despite its simplicity, the proposed method makes it possible to refine the discretization of fractured regions and to minimize the Courant number. The paper contains various comparisons of modeling results produced by a solver based on the proposed method with results produced by the well-known solver specfem2d, as well as with previous modeling results for the same problems. A drawback of the method is that the interpolation error can worsen the overall accuracy and reduce the order of the computational scheme; some countermeasures against this are described. Only two-dimensional models are analyzed in this paper, but the proposed method can be applied to three-dimensional problems with minimal adaptation.
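The core coupling step of any overset (Chimera) grid scheme is the transfer of the solution between the overlapping grids at every time step. The Python sketch below shows a generic bilinear transfer from a background rectangular grid to ghost nodes of a curvilinear surface-fitted grid; the grids, field, function names and interpolation order are illustrative assumptions, not the solver from the paper.

```python
# Generic Chimera-grid coupling kernel: bilinearly interpolate background-grid
# values to ghost nodes of the overset (curvilinear) grid.
import numpy as np

def bilinear_sample(field, x, y, dx, dy):
    """Interpolate a field stored on a uniform grid (origin at 0,0) at point (x, y)."""
    i, j = int(x // dx), int(y // dy)
    i = np.clip(i, 0, field.shape[0] - 2)
    j = np.clip(j, 0, field.shape[1] - 2)
    tx, ty = x / dx - i, y / dy - j
    return ((1 - tx) * (1 - ty) * field[i, j] + tx * (1 - ty) * field[i + 1, j]
            + (1 - tx) * ty * field[i, j + 1] + tx * ty * field[i + 1, j + 1])

def transfer_to_overset(field, ghost_xy, dx, dy):
    """Fill ghost nodes of the overset grid from the background-grid solution."""
    return np.array([bilinear_sample(field, x, y, dx, dy) for x, y in ghost_xy])

# Example: a smooth background field and a few ghost nodes along a curved boundary.
dx = dy = 0.1
X, Y = np.meshgrid(np.arange(0, 2, dx), np.arange(0, 1, dy), indexing="ij")
field = np.sin(np.pi * X) * np.cos(np.pi * Y)
ghost_nodes = [(0.55, 0.23), (0.72, 0.31), (0.90, 0.42)]
print(transfer_to_overset(field, ghost_nodes, dx, dy))
```

The interpolation error mentioned in the abstract enters exactly at this step, which is why its effect on the overall scheme order has to be controlled.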
- Computational design of closed-chain linkages: synthesis of ergonomic spine support module of exosuit
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1269-1280
The article focuses on the problem of mechanism co-design for robotic systems that perform adaptive physical interaction with an unstructured environment, including physical human-robot interaction. Co-design means simultaneous optimization of the mechanics and the control system, ensuring optimal behavior and performance of the system. Mechanics optimization refers to the search for the optimal structure, geometric parameters, mass distribution among the links and their compliance; control optimization refers to the search for motion trajectories of the mechanism's joints. The paper presents a generalized method of structural-parametric synthesis of underactuated mechanisms with closed kinematics for robotic systems of various purposes; for example, it was previously used for the co-design of finger mechanisms for an anthropomorphic gripper and leg mechanisms for galloping robots. The method implements the concept of morphological computation of control laws through the features of the mechanical design, minimizing the control effort required from the algorithmic part of the control system, which lowers the requirements on the hardware and reduces energy consumption. In this paper, the proposed method is used to optimize the structure and geometric parameters of the passive mechanism of the back support module of an industrial exosuit. Human movements are diverse and non-deterministic compared with the movements of autonomous robots, which complicates the design of wearable robotic devices. To reduce injuries and fatigue and to increase workers' productivity, the synthesized industrial exosuit should not only compensate for loads but also not interfere with natural human motions. To test the developed exosuit, kinematic datasets from motion capture of an entire human body during industrial operations were used. The proposed method of structural-parametric synthesis was used to improve the ergonomics of the wearable robotic device. Verification of the synthesized mechanism was carried out in simulation: the passive back module is attached to two geometric primitives that move the chest and pelvis of the exosuit operator in accordance with the motion-capture data. The ergonomics of the back module is quantified by the distance between the joints connecting the upper and lower parts of the exosuit; minimizing the deviation from its average value corresponds to a lesser limitation of the operator's movement, i.e., greater ergonomics. The article provides a detailed description of the method of structural-parametric synthesis, an example of the synthesis of an exosuit module, and the results of simulation.
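The ergonomics criterion described in the abstract can be evaluated for a candidate design roughly as follows: replay the motion-capture frames, record the distance between the joints connecting the upper and lower parts of the exosuit, and score the candidate by the deviation of that distance from its mean. The Python sketch below illustrates only this scoring step; the placeholder forward kinematics, the motion-capture format and the optimizer are assumptions and do not reproduce the module kinematics or the synthesis procedure of the paper.

```python
# Sketch of the ergonomics score: standard deviation of the inter-joint distance
# over a replayed motion (lower = less restriction of the operator's movement).
import numpy as np

def joint_positions(params, chest_pose, pelvis_pose):
    """Hypothetical forward kinematics: 3D positions of the two connecting joints."""
    upper = chest_pose[:3] + params[:3]     # joint offset on the chest attachment
    lower = pelvis_pose[:3] + params[3:6]   # joint offset on the pelvis attachment
    return upper, lower

def ergonomics_score(params, mocap_frames):
    dists = []
    for chest_pose, pelvis_pose in mocap_frames:
        upper, lower = joint_positions(params, chest_pose, pelvis_pose)
        dists.append(np.linalg.norm(upper - lower))
    return float(np.std(np.asarray(dists)))

# Example with synthetic "motion capture" data (small random chest/pelvis motions).
rng = np.random.default_rng(0)
frames = [(rng.normal([0, 0, 1.4], 0.02), rng.normal([0, 0, 1.0], 0.01)) for _ in range(200)]
candidate = np.array([0.0, 0.05, 0.10, 0.0, 0.05, -0.10])
print(ergonomics_score(candidate, frames))
```

In a full co-design loop such a score would be one term of the objective minimized over the structure and geometric parameters of the module.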