Search results for 'interpretability':
Articles found: 51
  1. Different versions of the shifting mode of reproduction models describe a set of interacting macroeconomic production subsystems, each of which has a corresponding household. These subsystems differ in the age of the fixed capital they use, since they take turns halting production to renew that capital by their own means (to repair equipment and to introduce innovations that increase production efficiency). This essentially distinguishes this type of model from models of the joint mode of reproduction, in which renewal of fixed capital and production of output occur simultaneously. Models of the shifting mode of reproduction make it possible to describe the mechanisms behind such phenomena as money circulation and amortization, to describe different types of monetary policy, and to interpret the mechanisms of economic growth in a new way. Unlike many other macroeconomic models, models of this class, in which the competing subsystems take turns gaining an advantage over the others through renewal, are essentially non-equilibrium. They were originally described as systems of ordinary differential equations with abruptly varying coefficients. Numerical calculations carried out for these systems revealed both regular and irregular dynamics, depending on parameter values and initial conditions. This paper shows that the simplest versions of this model can be represented, without additional approximations, in discrete form (as non-linear mappings) with different variants (continuous and discrete) of financial flows between the subsystems (interpreted as wages and subsidies). This representation is more convenient both for obtaining analytical results and for more economical and accurate numerical calculations. In particular, its use made it possible to determine the initial conditions that correspond to coordinated and sustained economic growth in which no subsystem systematically lags behind the others in production of output.

    Views (last year): 1. Citations: 4 (RSCI).
  2. Chernavskaya O.D.
    Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447

    The main statements and inferences of the Dynamic Theory of Information (DTI) are considered. It is shown that DTI makes it possible to reveal two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the Natural-Constructivist Approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the Explanatory Gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One subsystem is responsible for processing new information, learning, and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images. It is shown that symbols represent subjective (conventional) information created by the system itself, providing its individuality. The highest hierarchy levels, containing the symbols of abstract concepts, make it possible to interpret the concepts of “consciousness”, “sub-consciousness”, and “intuition”, referring to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge on the basis of the “Brain”.

    Views (last year): 6.
  3. Aleshin I.M., Malygin I.V.
    Machine learning interpretation of inter-well radiowave survey data
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684

    Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Because of the high cost of drilling, the role of inter-well survey methods has grown: they make it possible to increase the mean well spacing without significantly raising the probability of missing a kimberlite or ore body. The method of inter-well radio wave survey is effective for locating objects with high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. Since the distance between the source and receiver is known, the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases. Rocks with low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow an estimate of the effective electrical resistance (or conductivity) of the rock. Typically, the source and receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver site allows an estimate of the average attenuation coefficient along the line connecting the source and receiver. The measurements are taken during stops, approximately every 5 m. The distance between stops is much less than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution makes it hard to use the standard geostatistical approach. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors. In this method, the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements, where the number $k$ must be determined from additional considerations. The effect of the spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is yet another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$ we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we apply the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
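
    As a minimal sketch of the approach just described, assuming scikit-learn (the function names and data layout are illustrative, not from the paper), the anisotropy can be offset by rescaling the horizontal coordinates by $\lambda$ before the neighbor search:

        # Sketch: k-nearest-neighbors interpolation of the attenuation coefficient,
        # with horizontal coordinates rescaled by lam to offset the anisotropy of
        # the measurement distribution (dense along wells, sparse between them).
        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.metrics import r2_score

        def fit_attenuation_model(xyz, alpha, k, lam):
            """xyz: (n, 3) float measurement points; alpha: (n,) attenuation values."""
            scaled = xyz.copy()
            scaled[:, :2] *= lam          # stretch/shrink the horizontal axes only
            return KNeighborsRegressor(n_neighbors=k).fit(scaled, alpha)

        def score(model, xyz_test, alpha_test, lam):
            # k and lam are chosen by the coefficient of determination on held-out data
            scaled = xyz_test.copy()
            scaled[:, :2] *= lam
            return r2_score(alpha_test, model.predict(scaled))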

    Views (last year): 3.
  4. Kurzhanskiy A.A., Kurzhanski A.B.
    Intersection in a smart city
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 347-358

    Intersections present a very demanding environment for all the parties involved. Challenges arise from complex vehicle trajectories; occasional absence of lane markings to guide vehicles; split phases that prevent determining who has the right of way; invisible vehicle approaches; illegal movements; simultaneous interactions among pedestrians, bicycles and vehicles. Unsurprisingly, most demonstrations of AVs are on freeways; but the full potential of automated vehicles — personalized transit, driverless taxis, delivery vehicles — can only be realized when AVs can sense the intersection environment to efficiently and safely maneuver through intersections.

    AVs are equipped with an array of on-board sensors to interpret and suitably engage with their surroundings. Advanced algorithms utilize data streams from such sensors to support the movement of autonomous vehicles through a wide range of traffic and climatic conditions. However, there exist situations, in which additional information about the upcoming traffic environment would be beneficial to better inform the vehicles’ in-built tracking and navigation algorithms. A potential source for such information is from in-pavement sensors at an intersection that can be used to differentiate between motorized and non-motorized modes and track road user movements and interactions. This type of information, in addition to signal phasing, can be provided to the AV as it approaches an intersection, and incorporated into an improved prior for the probabilistic algorithms used to classify and track movement in the AV’s field of vision.

    This paper is concerned with the situation in which there are objects that are not visible to the AV. The driving context is that of an intersection, and the lack of visibility is due to other vehicles that obstruct the AV’s view, leading to the creation of blind zones. Such obstruction is commonplace in intersections.

    Our objectives are:

    1) inform a vehicle crossing the intersection about its potential blind zones;

    2) inform the vehicle about the presence of agents (other vehicles, bicyclists or pedestrians) in those blind zones.

    Views (last year): 29.
  5. Podlipnova I.V., Dorn Y.V., Sklonin I.A.
    Cloud interpretation of the entropy model for calculating the trip matrix
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103

    As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute. For this purpose, transport modeling packages are created. These packages usually contain a set of convex optimization problems whose iterative solution leads to the desired equilibrium distribution of flows along the paths. One direction in the development of transport modeling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, and the specifics of the personal and public modes of transport that agents can use. Another important direction is improving the efficiency of the calculations performed, since, due to the large dimension of modern transport networks, the numerical search for an equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process makes it even more so. One approach that reduces the number of calculations is the construction of consistent models that make it possible to combine the blocks of a four-stage model into a single optimization problem. This eliminates the iterative running of the blocks, moving from a separate optimization problem at each stage to a single general problem. Earlier work has shown that such approaches provide equivalent solutions; however, the validity and interpretability of these methods deserve attention. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice, in the generalized case when there are different layers of demand, types of agents, and classes of vehicles in the transport network. The article provides possible interpretations of the gauge parameters used in the problem, as well as of the dual factors associated with the balance constraints. The authors also show the possibility of combining the considered problem with a block for determining network load into a single optimization problem.
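
    The abstract does not give the problem statement itself, but a typical entropy model for the trip matrix fixes the row sums (departures) and column sums (arrivals) and is solved by iterative proportional fitting (Sinkhorn balancing). A minimal illustrative sketch, not the authors' exact formulation:

        # Illustrative entropy trip-matrix model: find d_ij maximizing entropy
        # relative to a cost-based prior c_ij = exp(-beta * t_ij), subject to
        # fixed departures L_i (row sums) and arrivals W_j (column sums).
        # Solved by Sinkhorn balancing (iterative proportional fitting).
        import numpy as np

        def entropy_trip_matrix(L, W, cost, beta=0.1, iters=500):
            c = np.exp(-beta * cost)              # prior attractiveness of each pair
            u, v = np.ones(len(L)), np.ones(len(W))
            for _ in range(iters):
                u = L / (c @ v)                   # enforce row sums (departures)
                v = W / (c.T @ u)                 # enforce column sums (arrivals)
            return u[:, None] * c * v[None, :]    # d_ij = u_i * c_ij * v_j

        L = np.array([100.0, 200.0]); W = np.array([150.0, 150.0])
        t = np.array([[5.0, 10.0], [10.0, 5.0]])  # placeholder travel times
        print(entropy_trip_matrix(L, W, t).round(1))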

  6. Ainbinder R.M., Rassadin A.E.
    On population migration in an ecological niche with a spatially heterogeneous local capacity
    Computer Research and Modeling, 2025, v. 17, no. 3, pp. 483-500

    The article describes the migration process of a population, taking into account the spatial heterogeneity of the local capacity of its ecological niche. It is assumed that this spatial heterogeneity is caused by various natural or artificial factors. The mathematical model of the migration process under consideration is a Cauchy problem on the line for a quasi-linear first-order partial differential equation satisfied by the linear population density. In this paper, a general solution of this Cauchy problem is found for an arbitrary dependence of the local capacity of the ecological niche on the spatial coordinate. This general solution is applied to describe the migration of the population in two different cases: when the local capacity of the ecological niche depends on the spatial coordinate as a smooth step, and when this dependence is hill-like. In both cases, the solution of the Cauchy problem is expressed in terms of higher transcendental functions. By imposing special relations on the model parameters, these higher transcendental functions reduce to elementary functions, which makes it possible to obtain exact model solutions expressed explicitly in terms of elementary functions. With the help of these exact solutions, an extensive program of computational experiments has been carried out, showing how an initial population density of Gaussian form is dispersed by the two considered types of spatial heterogeneity of the local capacity of the ecological niche. These computational experiments have shown that when the Gaussian width of the initial density is narrow compared to the characteristic spatial scale of the step-like or hill-like inhomogeneities, the system forgets its initial state after passing through them. In particular, if we interpret the system under study as a population living in an extended calm rectilinear river along its bed, then it can be argued that under this initial condition, after the current of the river carries the population through the region of spatial heterogeneity of the local capacity of the ecological niche, the population density becomes a quasi-rectangular function.
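
    The abstract does not state the equation explicitly; a generic Cauchy problem of the type described, with $\rho(x, t)$ the linear population density and $a$, $f$ placeholder coefficient functions (through which the local niche capacity $K(x)$ would enter; these are not the authors' specific choices), can be sketched together with its reduction by the method of characteristics as

    $$\rho_t + a(\rho, x)\,\rho_x = f(\rho, x), \qquad \rho(x, 0) = \rho_0(x),$$

    which along the characteristic curve starting at $x(0) = \xi$ reduces to a pair of ordinary differential equations:

    $$\frac{dx}{dt} = a(\rho, x), \qquad \frac{d\rho}{dt} = f(\rho, x), \qquad \rho(0) = \rho_0(\xi).$$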

  7. Petrov I.B., Konov D.S., Vasyukov A.V., Muratov M.V.
    Detecting large fractures in geological media using convolutional neural networks
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 889-901

    This paper considers the inverse problem of seismic exploration: determining the structure of the medium from the recorded wave response. Large cracks, whose size and position are to be determined, are considered as the target objects.

    The direct problem is solved using the grid-characteristic method. The method allows the use of physically based algorithms for treating the outer boundaries of the region and the contact boundaries inside it. The crack is assumed to be thin, and a special condition on the crack borders is used to describe it.

    The inverse problem is solved using convolutional neural networks. The input data of the neural network are seismograms interpreted as images. The output data are masks describing the medium on a structured grid. Each element of such a grid belongs to one of two classes — either an element of a continuous geological massif, or an element through which a crack passes. This approach allows us to consider a medium with an unknown number of cracks.
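
    A minimal sketch of such a segmentation setup, assuming PyTorch (the architecture below is illustrative, not the authors' network): a small convolutional encoder-decoder maps a single-channel seismogram image to two-class per-cell logits, from which the mask is taken by argmax.

        # Illustrative sketch (not the authors' architecture): a tiny convolutional
        # encoder-decoder mapping a 1-channel seismogram image to a 2-class mask.
        # Training against reference masks would use a per-pixel cross-entropy loss.
        import torch
        import torch.nn as nn

        class CrackSegNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                         # downsample by 2
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # upsample
                    nn.Conv2d(16, 2, 1),                     # 2 classes per grid cell
                )

            def forward(self, x):                 # x: (batch, 1, H, W) seismogram
                return self.decoder(self.encoder(x))         # (batch, 2, H, W) logits

        net = CrackSegNet()
        logits = net(torch.randn(4, 1, 64, 64))
        mask = logits.argmax(dim=1)               # per cell: 0 = massif, 1 = crack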

    The neural network is trained using only samples with one crack. The final testing of the trained network is performed on additional samples with several cracks, which are not involved in the training process. The purpose of testing under these conditions is to verify that the trained network generalizes, recognizes the signature of a crack in the signal, and does not overfit to samples with a single crack in the medium.

    The paper shows that a convolutional network trained on samples with a single crack can be used to process data with multiple cracks. The network detects fairly small cracks at great depths if they are sufficiently separated in space; in this case their wave responses are clearly distinguishable on the seismogram and can be interpreted by the neural network. If the cracks are close to each other, artifacts and interpretation errors may occur because the wave responses of nearby cracks merge on the seismogram, causing the network to interpret several cracks located close together as one. It should be noted that a human would most likely make a similar error during manual interpretation of the data. The paper provides examples of such artifacts, distortions, and recognition errors.

  8. Kapitan V.U., Peretyat'ko A.A., Ivanov U.P., Nefedev K.V., Belokon V.I.
    Superscale simulation of the magnetic states and reconstruction of the ordering types for nanodots arrays
    Computer Research and Modeling, 2011, v. 3, no. 3, pp. 309-318

    We consider two possible computational methods for interpreting experimental data obtained by magnetic force microscopy. These methods of macrospin distribution simulation and reconstruction can be used to study magnetization reversal processes of nanodots in ordered 2D arrays. New approaches are proposed to the development of high-performance superscale algorithms, for parallel execution on supercomputer clusters, for solving the direct and inverse problems of modeling the magnetic states, types of ordering, and reversal processes of nanosystems with collective behavior. The simulation results are consistent with experimental results.

    Views (last year): 2.
  9. Kosykh N.E., Sviridov N.M., Savin S.Z., Potapova T.P.
    Computer aided analysis of medical image recognition for example of scintigraphy
    Computer Research and Modeling, 2016, v. 8, no. 3, pp. 541-548

    The practical application of nuclear medicine demonstrates a continued shortage of algorithms and programs for the visualization and analysis of medical images. The aim of the study was to determine the principles of optimizing the processing of planar osteoscintigraphy on the basis of computer-aided diagnosis (CAD), for the analysis of texture descriptions of images of metastatic zones on planar skeletal scintigrams. A computer-aided diagnosis system for the analysis of skeletal metastases based on planar scintigraphy data has been developed. This system includes skeleton image segmentation, calculation of textural, histogram, and morphometric parameters, and the creation of a training set. To study the textural characteristics of metastatic images on planar skeletal scintigrams, a computer program for the automatic analysis of skeletal metastases from planar scintigraphy data was developed. Expert evaluation was also used to distinguish ‘pathological’ (metastatic) from ‘physiological’ (non-metastatic) zones of radiopharmaceutical hyperfixation, in which Haralick’s textural features were determined: autocorrelation, contrast, ‘fourth moment’, and heterogeneity. The program was built on the principles of computer-aided diagnosis; on planar skeletal scintigrams of patients with metastatic breast cancer, foci of radiopharmaceutical hyperfixation were identified. Histogram parameters were calculated, such as brightness, smoothness, the third moment of brightness, brightness uniformity, and brightness entropy. It has been established that in most areas of the skeleton the histogram parameter values in zones of pathological radiopharmaceutical hyperfixation predominate over the same values in physiological zones. Most often, pathological hyperfixation on both anterior and posterior scintigrams shows a prevalence of brightness and smoothness of image brightness in comparison with physiological hyperfixation. Individual histogram analysis figures can be used to refine the diagnosis of metastases in the mathematical modeling and interpretation of bone scintigraphy.
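
    For illustration, Haralick-style texture features of the kind listed above (contrast, homogeneity, correlation, energy) can be computed from a gray-level co-occurrence matrix; a sketch assuming scikit-image, with the extraction of the hyperfixation zone itself left out:

        # Sketch: Haralick-style texture features of an image patch computed from
        # a gray-level co-occurrence matrix (GLCM), assuming scikit-image >= 0.19.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(roi):
            """roi: 2D uint8 array, a patch around a hyperfixation zone."""
            glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "correlation", "energy")}

        roi = (np.random.rand(32, 32) * 255).astype(np.uint8)  # placeholder patch
        print(texture_features(roi))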

    Views (last year): 3. Citations: 3 (RSCI).
  10. Tinkov O.V., Polishchuk P.G., Khachatryan D.S., Kolotaev A.V., Balaev A.N., Osipov V.N., Grigorev B.Y.
    Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930

    Inhibitors of histone deacetylases (HDACi) have been considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation, and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. The vascular endothelial growth factor (VEGF), secreted by malignant tumors, is a key angiogenic agent that induces the proliferation and migration of vascular endothelial cells. Currently, the most promising strategy in the fight against cancer is the creation of hybrid drugs that act simultaneously on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties were studied as dual VEGFR-2/HDAC inhibitors using a simplex representation of the molecular structure and Support Vector Machine (SVM). The total sample of 42 compounds was divided into training and test sets. Five-fold cross-validation was used for internal validation. Satisfactory quantitative structure-activity relationship (QSAR) models were constructed ($R^2_{test}$ = 0.64-0.87) for inhibitors of HDAC, VEGFR-2, and the human breast cancer cell line MCF-7. The obtained QSAR models were interpreted, and the coordinated effect of different molecular fragments on the increase of the antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine for all three types of activity can be singled out. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For a comparative QSAR study, we used physicochemical descriptors calculated by the HYBOT program, the Random Forest (RF) method, and the on-line version of the expert system OCHEM (https://ochem.eu). In the OCHEM modeling, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
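
    A minimal sketch of the modeling protocol described above, assuming scikit-learn (the descriptors and activities below are random placeholders, not the paper's data): an SVM regressor validated by internal five-fold cross-validation and scored on an external test set:

        # Sketch of the QSAR protocol: SVM regression on molecular descriptors,
        # internal 5-fold cross-validation plus an external test set.
        import numpy as np
        from sklearn.model_selection import cross_val_score, train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        X = np.random.rand(42, 50)      # 42 compounds x 50 descriptors (placeholder)
        y = np.random.rand(42)          # activity, e.g. against VEGFR-2 or HDAC

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

        cv_r2 = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2")   # internal
        test_r2 = model.fit(X_tr, y_tr).score(X_te, y_te)                # external
        print(cv_r2.mean(), test_r2)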
