Search results for 'training set':
Articles found: 10
  1. Alkousa M.S., Gasnikov A.V., Dvurechensky P.E., Sadiev A.A., Razouk L.Ya.
    An approach for the nonconvex uniformly concave structured saddle point problem
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 225-237

    Recently, saddle point problems have received much attention due to their powerful modeling capability for problems from diverse domains. Applications arise in many applied areas, such as robust optimization, distributed optimization, game theory, and many applications in machine learning, such as empirical risk minimization and the training of generative adversarial networks. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the nonconvex uniformly concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems, one for each group of variables: the outer minimization problem w.r.t. the primal variables and the inner maximization problem w.r.t. the dual variables. For the outer minimization problem, we use the Adaptive Gradient Method, which is applicable to nonconvex problems and also works with an inexact oracle generated by approximately solving the inner problem. For the inner maximization problem, we use the Restarted Unified Acceleration Framework, which unifies high-order acceleration methods for minimizing a convex function with Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracle for the outer minimization problem and to the higher-order oracle for the inner maximization problem. Moreover, the complexity of the whole proposed approach is estimated.
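
    A minimal Python sketch of the outer/inner decomposition described in this abstract, under simplifying assumptions: plain gradient descent/ascent stands in for the paper's Adaptive Gradient Method and Restarted Unified Acceleration Framework, and the objective is a toy quadratic. It only illustrates how an inexact oracle for the outer problem is produced by approximately solving the inner maximization.

    import numpy as np

    def inner_max(x, grad_y, y0, steps=100, lr=0.1):
        """Approximately solve max_y f(x, y) for fixed x (the inexact oracle)."""
        y = y0.copy()
        for _ in range(steps):
            y += lr * grad_y(x, y)          # gradient ascent in the dual variables
        return y

    def outer_min(grad_x, grad_y, x0, y0, steps=100, lr=0.1):
        """Gradient descent in x, using gradients at the approximate inner maximizer."""
        x, y = x0.copy(), y0.copy()
        for _ in range(steps):
            y = inner_max(x, grad_y, y)     # warm-started approximate inner solve
            x -= lr * grad_x(x, y)          # inexact gradient step in the primal variables
        return x, y

    # Toy saddle problem f(x, y) = 0.5*||x||^2 + <x, y> - 0.5*||y||^2
    grad_x = lambda x, y: x + y
    grad_y = lambda x, y: x - y
    x_star, y_star = outer_min(grad_x, grad_y, np.ones(3), np.zeros(3))
    print(x_star, y_star)                   # both approach the saddle point at zero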

  2. Shumixin A.G., Boyarshinova A.S.
    Algorithm of artificial neural network architecture and training set size configuration within approximation of dynamic object behavior
    Computer Research and Modeling, 2015, v. 7, no. 2, pp. 243-251

    The article presents an approach to configuring an artificial neural network architecture and a training set size. The configuration is based on parameter minimization with constraints specifying quality criteria for the neural network model. The algorithm for configuring the artificial neural network architecture and the training set size is applied to the neural network approximation of a dynamic object.
    A series of computational experiments was performed. The method is applicable to the construction of dynamic object models based on nonlinear autocorrelation neural networks.

    Views (last year): 2. Citations: 8 (RSCI).
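
    The abstract does not state the concrete minimization procedure, so the following is only a hypothetical Python sketch of the general idea: search over candidate architectures and training-set sizes and accept the smallest configuration whose validation error satisfies an assumed quality constraint (all thresholds, sizes, and the toy target below are placeholders, not taken from the paper).

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]       # toy stand-in for a dynamic object

    X_pool, X_val, y_pool, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
    QUALITY = 0.01                                       # assumed MSE threshold (quality criterion)

    best = None
    for n_train in (100, 300, 600, 1200):                # candidate training-set sizes
        for n_hidden in (5, 10, 20, 40):                 # candidate architectures
            model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                                 random_state=0).fit(X_pool[:n_train], y_pool[:n_train])
            err = mean_squared_error(y_val, model.predict(X_val))
            if err <= QUALITY:                           # constraint on model quality
                best = (n_train, n_hidden, err)
                break                                    # smallest configuration that suffices
        if best:
            break

    print(best if best else "no configuration met the quality constraint")
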
  3. Bakhvalov Y.N., Kopylov I.V.
    Training and assessment of the generalization ability of interpolation methods
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1023-1031

    We investigate machine learning methods with a certain kind of decision rule: the inverse-distance interpolation method, interpolation by radial basis functions, the method of multidimensional interpolation and approximation based on the theory of random functions, and, as the last of these interpolation methods, kriging. The paper presents a method for rapidly retraining the “model” when new data are added to the existing ones; the term “model” means the interpolating or approximating function constructed from the training data. This approach reduces the computational complexity of constructing an updated “model” from $O(n^3)$ to $O(n^2)$. We also investigate the possibility of a rapid assessment of the generalization ability of the “model” on the training set using leave-one-out cross-validation, eliminating the major drawback of this approach: the need to build a new “model” for each element removed from the training set.

    Views (last year): 7. Citations: 5 (RSCI).
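
    A hedged illustration of the $O(n^3) \to O(n^2)$ idea mentioned in this abstract, not the authors' exact procedure: when one data point is added to an RBF/kriging-style interpolant, the inverse of the bordered kernel matrix can be updated from the previous inverse by block inversion instead of being recomputed from scratch.

    import numpy as np

    def kernel(a, b, eps=1.0):
        """Gaussian RBF kernel matrix between point sets a and b."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    def add_point(K_inv, X, x_new, nugget=1e-3):
        """Update inv(K + nugget*I) after appending x_new to X, in O(n^2)."""
        k = kernel(X, x_new[None, :])                       # n x 1 border column
        kappa = kernel(x_new[None, :], x_new[None, :]) + nugget
        u = K_inv @ k
        s = 1.0 / (kappa - k.T @ u)                         # Schur complement (1 x 1)
        top_left = K_inv + s * (u @ u.T)
        return np.block([[top_left, -s * u],
                         [-s * u.T,  s]])

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 2))
    nugget = 1e-3
    K_inv = np.linalg.inv(kernel(X, X) + nugget * np.eye(50))
    x_new = rng.normal(size=2)
    fast = add_point(K_inv, X, x_new, nugget)
    X_all = np.vstack([X, x_new])
    full = np.linalg.inv(kernel(X_all, X_all) + nugget * np.eye(51))
    print(np.abs(fast - full).max())                        # agreement up to round-off
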
  4. Zatserkovnyy A.V., Nurminski E.A.
    Neural network analysis of transportation flows of urban agglomeration using the data from public video cameras
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318

    Correct modeling of the complex dynamics of urban transportation flows requires the collection of large volumes of empirical data to specify the types of flow modes and to identify them. At the same time, setting up a large number of observation posts is expensive and not always technically feasible. All this results in insufficient factual support both for traffic control systems and for urban planners, with obvious consequences for the quality of their decisions. As one of the means to provide large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where they are analyzed by human operators responsible for observation and control. Some video cameras provide their streams for public access, which makes them a valuable resource for transportation studies. However, there are significant problems with obtaining high-quality data from such cameras, related to the theory and practice of image processing. This study is devoted to the practical application of certain mainstream neural network technologies for the estimation of essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed, and solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning neural network YOLOv4, which is then used to estimate the speed and density of automobile flows.
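
    A hypothetical Python sketch of the post-detection step only: given vehicle detections already associated into tracks (the detection and tracking with YOLOv4 are assumed to have been done upstream), estimate mean speed and density. The frame rate, road length, and track format below are illustrative assumptions, not taken from the paper.

    from collections import defaultdict

    FPS = 25.0             # assumed camera frame rate
    ROAD_LENGTH_M = 120.0  # assumed length of the observed road segment

    def flow_statistics(tracks):
        """tracks: track_id -> list of (frame, position along the road axis in metres)."""
        speeds = []
        vehicles_per_frame = defaultdict(int)
        for positions in tracks.values():
            positions = sorted(positions)
            for frame, _ in positions:
                vehicles_per_frame[frame] += 1
            (f0, x0), (f1, x1) = positions[0], positions[-1]
            if f1 > f0:
                speeds.append(abs(x1 - x0) / ((f1 - f0) / FPS))   # m/s for this track
        mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
        mean_density = (sum(vehicles_per_frame.values()) / len(vehicles_per_frame)
                        / ROAD_LENGTH_M if vehicles_per_frame else 0.0)  # vehicles per metre
        return mean_speed, mean_density

    tracks = {1: [(0, 0.0), (50, 40.0)], 2: [(10, 5.0), (60, 38.0)]}
    print(flow_statistics(tracks))   # (about 18 m/s, about 0.008 vehicles per metre)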

  5. Kosykh N.E., Sviridov N.M., Savin S.Z., Potapova T.P.
    Computer-aided analysis of medical image recognition by the example of scintigraphy
    Computer Research and Modeling, 2016, v. 8, no. 3, pp. 541-548

    The practical application of nuclear medicine demonstrates a continuing information deficiency in the algorithms and programs that provide visualization and analysis of medical images. The aim of the study was to determine the principles of optimizing the processing of planar osteoscintigraphy on the basis of computer-aided diagnosis (CAD) for the analysis of textural descriptions of metastatic zones on planar skeletal scintigrams. A computer-aided diagnosis system for the analysis of skeletal metastases based on planar scintigraphy data has been developed. The system includes skeleton image segmentation, calculation of textural, histogram and morphometric parameters, and the creation of a training set. To study the textural characteristics of metastatic images on planar skeletal scintigrams, a computer program for the automatic analysis of skeletal metastases from planar scintigraphy data was developed. Expert evaluation was also used to distinguish ‘pathological’ (metastatic) from ‘physiological’ (non-metastatic) radiopharmaceutical hyperfixation zones, in which Haralick’s textural features were determined: autocorrelation, contrast, ‘fourth moment’ and heterogeneity. With this program, built on the principles of computer-aided diagnosis, planar scintigrams of patients with breast cancer metastases were studied and foci of radiopharmaceutical hyperfixation were identified. Histogram parameters such as brightness, smoothness, the third moment of brightness, brightness uniformity and brightness entropy were calculated. It has been established that in most areas of the skeleton the histogram parameter values in zones of pathological radiopharmaceutical hyperfixation predominate over the corresponding values in physiological zones. Most often, pathological radiopharmaceutical hyperfixation on both anterior and posterior scintigrams shows a prevalence of brightness and brightness smoothness in comparison with physiological hyperfixation. Individual histogram analysis indicators can be used to refine the diagnosis of metastases in the mathematical modeling and interpretation of bone scintigraphy.

    Views (last year): 3. Citations: 3 (RSCI).
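
    For illustration, texture descriptors of the Haralick type mentioned in this abstract can be computed from a gray-level co-occurrence matrix (GLCM). The sketch below uses scikit-image on a synthetic region of interest and standard GLCM properties (contrast, correlation, homogeneity, energy) rather than the exact feature set of the study.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

    rng = np.random.default_rng(0)
    roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)   # stand-in ROI with 64 gray levels

    # Co-occurrence matrix for horizontal and vertical pixel pairs at distance 1
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)

    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "correlation", "homogeneity", "energy")}
    print(features)
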
  6. Tinkov O.V., Polishchuk P.G., Khachatryan D.S., Kolotaev A.V., Balaev A.N., Osipov V.N., Grigorev B.Y.
    Quantitative analysis of “structure – anticancer activity” and rational molecular design of bi-functional VEGFR-2/HDAC-inhibitors
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 911-930

    Inhibitors of histone deacetylases (HDACi) have been considered a promising class of drugs for the treatment of cancers because of their effects on cell growth, differentiation, and apoptosis. Angiogenesis plays an important role in the growth of most solid tumors and the progression of metastasis. The vascular endothelial growth factor (VEGF), a key angiogenic agent secreted by malignant tumors, induces the proliferation and migration of vascular endothelial cells. Currently, the most promising strategy in the fight against cancer is the creation of hybrid drugs that simultaneously act on several physiological targets. In this work, a series of hybrids bearing N-phenylquinazolin-4-amine and hydroxamic acid moieties were studied as dual VEGFR-2/HDAC inhibitors using the simplex representation of the molecular structure and Support Vector Machine (SVM). The total sample of 42 compounds was divided into training and test sets. Five-fold cross-validation was used for internal validation. Satisfactory quantitative structure-activity relationship (QSAR) models were constructed (R2test = 0.64–0.87) for inhibitors of HDAC, VEGFR-2, and the human breast cancer cell line MCF-7. The obtained QSAR models were interpreted, and the coordinated effect of different molecular fragments on the increase of antitumor activity of the studied compounds was estimated. Among the substituents of the N-phenyl fragment, the positive contribution of para-bromine to all three types of activity can be distinguished. The results of the interpretation were used for the molecular design of potential dual VEGFR-2/HDAC inhibitors. For comparative QSAR research we used physicochemical descriptors calculated by the HYBOT program, the Random Forest (RF) method, and the online version of the expert system OCHEM (https://ochem.eu). In the OCHEM modeling, PyDescriptor descriptors and extreme gradient boosting were chosen. In addition, the models obtained with the help of the expert system OCHEM were used for virtual screening of 300 compounds to select promising VEGFR-2/HDAC inhibitors for further synthesis and testing.
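
    A minimal sketch of the modeling workflow named in this abstract (SVM regression with 5-fold cross-validation), assuming scikit-learn and a random placeholder descriptor matrix instead of the simplex descriptors and measured activities of the 42 compounds.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score, KFold

    rng = np.random.default_rng(42)
    X = rng.normal(size=(42, 120))          # 42 compounds x 120 descriptors (placeholder values)
    y = rng.normal(size=42)                 # placeholder activity values

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    scores = cross_val_score(model, X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0),
                             scoring="r2")
    print("5-fold CV R^2:", scores.round(2), "mean:", scores.mean().round(2))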

  7. Minnikhanov R.N., Anikin I.V., Dagaeva M.V., Asliamov T.I., Bolshakov T.E.
    Approaches for image processing in the decision support system of the center for automated recording of administrative offenses of the road traffic
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 405-415

    We suggest approaches for solving image processing tasks in the decision support system (DSS) of the Center for Automated Recording of Administrative Offenses of the Road Traffic (CARAO). The main task of this system is to assist the operator in obtaining accurate information about the vehicle registration plate and the vehicle brand/model based on images obtained from photo and video recording systems. We suggest an approach for vehicle registration plate recognition and brand/model classification on images based on modern neural network models. The LPRNet neural network model supplemented by a Spatial Transformer Layer was used to recognize the vehicle registration plate. The ResNeXt-101-32x8d neural network model was used to classify the vehicle brand/model. We suggest an approach to constructing the training set for the neural network of vehicle registration plate recognition. The approach is based on computer vision methods and machine learning algorithms. The SIFT algorithm was used to detect and describe local features on images with the vehicle registration plate. DBSCAN clustering was used to detect and delete outliers in such local features. The accuracy of vehicle registration plate recognition was 96% on the testing set. We suggest an approach to improve the efficiency of using the ResNeXt-101-32x8d model at the additional training and classification stages. The approach is based on a new architecture of convolutional neural networks with “freezing” of the weight coefficients of convolutional layers, an additional convolutional layer for parallelizing the classification process, and a set of binary classifiers at the output. This approach significantly reduced the time of additional training of the neural network when classification of a new vehicle brand/model was needed. The final accuracy of vehicle brand/model classification was 99% on the testing set. The proposed approaches were tested and implemented in the DSS of the CARAO of the Republic of Tatarstan.
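
    A hedged sketch of the SIFT-plus-DBSCAN outlier-removal step described in this abstract; the image file name, the DBSCAN parameters, and the rule of discarding noise points (label -1) are illustrative assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    img = cv2.imread("plate_sample.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical sample image
    sift = cv2.SIFT_create()                                     # OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img, None)

    coords = np.array([kp.pt for kp in keypoints])               # (x, y) of each keypoint
    labels = DBSCAN(eps=15.0, min_samples=5).fit_predict(coords) # spatial clustering

    kept = [kp for kp, lbl in zip(keypoints, labels) if lbl != -1]   # drop isolated keypoints
    print(f"{len(keypoints)} keypoints, {len(kept)} kept after outlier removal")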

  8. Krasnov F.V., Smaznevich I.S., Baskakova E.N.
    Bibliographic link prediction using contrast resampling technique
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1317-1336

    The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique, whose novelty lies in taking the context of the link into account together with the boundaries of the fragment, which most strongly affects the probability of a bibliographic link being present in it. The training set was formed from automatically labeled samples, namely fragments of three sentences with class labels «without link» and «with link» that satisfy the contrast requirement: samples of different classes are distanced from each other in the source text. The feature space was built automatically from term occurrence statistics and was expanded with additional features: entities (names, numbers, quotes and abbreviations) recognized in the text.

    A series of experiments was carried out on the archives of the scientific journals «Law enforcement review» (273 articles) and «Journal Infectology» (684 articles). The classification was carried out with the Nearest Neighbors, RBF SVM, Random Forest, and Multilayer Perceptron models, with the selection of optimal hyperparameters for each classifier.

    The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%), which is, however, not as fast as the linear one, which also showed high accuracy with contrast resampling (91–94%). These values are superior to those reported for NER and Sentiment Analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online.
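
    A minimal sketch of the underlying classification setup, assuming scikit-learn: term-occurrence features and a linear classifier over short text fragments labeled «with link» / «without link». The fragments, labels, and feature set below are placeholders and do not reproduce the contrast resampling procedure itself.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    fragments = [
        "As shown in the cited study, the incidence rate decreased significantly.",
        "The patients were observed for a period of six months.",
        "Similar results were reported earlier for the adult cohort.",
        "Table 2 summarizes the demographic characteristics of the sample.",
    ] * 10
    labels = [1, 0, 1, 0] * 10          # 1 = "with link", 0 = "without link" (toy labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    scores = cross_val_score(clf, fragments, labels, cv=5)
    print("CV accuracy:", scores.mean().round(2))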

  9. Voronina M.Y., Orlov Y.N.
    Identification of the author of the text by segmentation method
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1199-1210

    The paper describes a method for recognizing the authors of literary texts by the proximity of the fragments into which a text is divided to the author’s standard. The standard is the empirical frequency distribution of letter combinations, built on a training sample that includes expertly selected, reliably attributed works of the author. A set of standards of different authors forms a library, within which the problem of identifying the author of an unknown text is solved. The proximity between texts is understood in the sense of the L1 norm for the frequency vector of letter combinations, which is constructed for each fragment and for the text as a whole. The author of an unknown text is taken to be the one whose standard is most often chosen as the closest for the set of fragments into which the text is divided. The fragment length is optimized based on the principle of the maximum difference in distances from fragments to standards in the «friend–foe» recognition problem. The method was tested on a corpus of domestic and foreign (translated) authors: 1783 texts of 100 authors with a total volume of about 700 million characters were collected. In order to exclude bias in the selection of authors, authors whose surnames begin with the same letter were considered; in particular, for the letter L the identification error was 12%. Along with fairly high accuracy, the method has another important property: it allows one to estimate the probability that the standard of the author of the text in question is missing from the library. This probability can be estimated from the statistics of the nearest standards for small fragments of text. The paper also examines statistical digital portraits of writers: joint empirical distributions of the probability that a certain proportion of the text is identified at a given confidence level. The practical importance of these statistics is that the supports of the corresponding distributions practically do not overlap for the author’s own and other authors’ standards, which makes it possible to recognize the reference distribution of letter combinations at a high confidence level.
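
    A minimal Python sketch of the scheme described in this abstract: letter-bigram frequency vectors serve as author standards, each text fragment votes for the closest standard in the L1 norm, and the majority vote names the author. The corpus files and fragment length are hypothetical placeholders.

    from collections import Counter

    def bigram_freq(text):
        """Empirical frequency distribution of letter bigrams."""
        text = "".join(ch.lower() for ch in text if ch.isalpha() or ch == " ")
        pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
        total = sum(pairs.values())
        return {k: v / total for k, v in pairs.items()}

    def l1_distance(p, q):
        keys = set(p) | set(q)
        return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

    def identify(text, standards, fragment_len=500):
        """Each fragment votes for the nearest standard; the most frequent vote wins."""
        votes = Counter()
        for start in range(0, max(len(text) - fragment_len, 1), fragment_len):
            freq = bigram_freq(text[start:start + fragment_len])
            votes[min(standards, key=lambda a: l1_distance(freq, standards[a]))] += 1
        return votes.most_common(1)[0][0]

    standards = {"author_A": bigram_freq(open("corpus_A.txt").read()),   # hypothetical corpora
                 "author_B": bigram_freq(open("corpus_B.txt").read())}
    print(identify(open("unknown.txt").read(), standards))               # hypothetical unknown text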

  10. Chuvilin K.V.
    The use of syntax trees in order to automate the correction of LaTeX documents
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 871-883

    The problem is to automate the correction of LaTeX documents. Each document is represented as a parse tree. The modified Zhang-Shasha algorithm is used to construct a mapping of the tree vertices of the original document to the tree vertices of the edited document that corresponds to the minimum editing distance. The vertex-to-vertex maps form the training set, which is used to generate rules for automatic correction. Statistics on the applicability of each rule to the edited documents are collected and used for quality assessment and improvement of the rules.

    Citations: 5 (RSCI).
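
    As an illustration of the core step in the last entry, the third-party zss package implements the Zhang-Shasha tree edit distance; the toy trees below stand in for real LaTeX parse trees, and deriving correction rules from the resulting vertex mapping is beyond this sketch.

    from zss import simple_distance, Node   # pip install zss

    # Two toy parse trees that differ only in one text node (a spacing fix).
    original = (Node("document")
                .addkid(Node("section")
                        .addkid(Node("text: An example , with bad spacing")))
                .addkid(Node("itemize")
                        .addkid(Node("item"))))

    edited = (Node("document")
              .addkid(Node("section")
                      .addkid(Node("text: An example, with bad spacing")))
              .addkid(Node("itemize")
                      .addkid(Node("item"))))

    print(simple_distance(original, edited))   # minimum number of node edits, here 1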
