All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Enhancing DevSecOps with continuous security requirements analysis and testing
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1687-1702
The fast-paced environment of DevSecOps requires integrating security at every stage of software development to ensure secure, compliant applications. Traditional methods of security testing, often performed late in the development cycle, are insufficient to address the unique challenges of continuous integration and continuous deployment (CI/CD) pipelines, particularly in complex, high-stakes sectors such as industrial automation. In this paper, we propose an approach that automates the analysis and testing of security requirements by embedding requirements verification into the CI/CD pipeline. Our method employs the ARQAN tool to map high-level security requirements to Security Technical Implementation Guides (STIGs) using semantic search, and RQCODE to formalize these requirements as code, providing testable and enforceable security guidelines. We implemented ARQAN and RQCODE within a CI/CD framework, integrating them with GitHub Actions for real-time security checks and automated compliance verification. Our approach supports established security standards like IEC 62443 and automates security assessment starting from the planning phase, enhancing the traceability and consistency of security practices throughout the pipeline. Evaluation of this approach in collaboration with an industrial automation company shows that it effectively covers critical security requirements, achieving automated compliance for 66.15% of STIG guidelines relevant to the Windows 10 platform. Feedback from industry practitioners further underscores its practicality: 85% of security requirements were mapped to concrete STIG recommendations, and 62% of these requirements have matching testable implementations in RQCODE. This evaluation highlights the approach’s potential to shift security validation earlier in the development process, contributing to a more resilient and secure DevSecOps lifecycle.
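As an illustration of the semantic-search step described above, the following minimal sketch matches a free-text requirement to the closest STIG rule using the sentence-transformers library; the rule texts and model choice are illustrative assumptions, not ARQAN's actual implementation.

```python
# Hypothetical sketch: map a security requirement to the most similar STIG
# rule by embedding both and comparing cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Invented rule texts standing in for a real STIG catalog.
stig_rules = [
    "The account lockout duration must be configured to 15 minutes or greater.",
    "Anonymous enumeration of shares must be restricted.",
    "Data Execution Prevention (DEP) must be configured to at least OptOut.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # compact general-purpose encoder
rule_vecs = model.encode(stig_rules, convert_to_tensor=True)

requirement = "Lock user accounts for a fixed period after repeated failed logins."
req_vec = model.encode(requirement, convert_to_tensor=True)

scores = util.cos_sim(req_vec, rule_vecs)[0]      # similarity to every rule
best = int(scores.argmax())
print(f"Best match ({float(scores[best]):.2f}): {stig_rules[best]}")
```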
-
Evolutionary effects of non-selective sustainable harvesting in a genetically heterogeneous population
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 717-735
The problem of harvest optimization remains a central challenge in mathematical biology. The concept of Maximum Sustainable Yield (MSY), widely used in optimal exploitation theory, proposes maintaining target populations at levels ensuring maximum reproduction, theoretically balancing economic benefits with resource conservation. While MSY-based management promotes population stability and system resilience, it faces significant limitations due to complex intrapopulation structures and nonlinear dynamics in exploited species. Of particular concern are the evolutionary consequences of harvesting, as artificial selection may drive changes divergent from natural selection pressures. Empirical evidence confirms that selective harvesting alters behavioral traits, reduces offspring quality, and modifies population gene pools. In contrast, the genetic impacts of non-selective harvesting remain poorly understood and require further investigation.
This study examines how non-selective harvesting with constant removal rates affects evolution in genetically heterogeneous populations. We model genetic diversity controlled by a single diallelic locus, where different genotypes dominate at high/low densities: r-strategists (high fecundity) versus K-strategists (resource-limited resilience). We consider the classical ecological-genetic model with discrete time, which assumes that the fitness of each genotype depends linearly on the population size. By including the harvesting withdrawal coefficient, the model links the problem of optimizing harvest with that of predicting genotype selection.
Analytical results demonstrate that under MSY harvesting the equilibrium genetic composition remains unchanged while population size halves. The type of genetic equilibrium may shift, as optimal harvest rates differ between equilibria. Natural K-strategist dominance may reverse toward r-strategists, whose high reproduction compensates for harvest losses. Critical harvesting thresholds triggering strategy shifts were identified.
These findings explain why exploited populations show slow recovery after harvesting ceases: exploitation reinforces adaptations that are beneficial under removal pressure but maladaptive in natural conditions. For instance, breeding in captive arctic foxes selects for high-productivity genotypes, whereas wild populations favor lower-fecundity, higher-survival phenotypes. This underscores the necessity of incorporating genetic dynamics into sustainable harvesting management strategies, as MSY policies may inadvertently alter evolutionary trajectories through density-dependent selection. Recovery periods in management frameworks must account for genetic adaptation timescales.
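The abstract does not reproduce the model's equations, but a discrete-time one-locus model of the kind described can be sketched as follows; the fitness coefficients, density-dependence slopes, and harvest rate below are invented for illustration and are not the paper's parameters.

```python
# Illustrative eco-genetic model: one diallelic locus (alleles A, a), genotype
# fitness linear in population size, and proportional (non-selective) harvest.

def fitness(base, slope, n):
    """Genotype fitness declining linearly with population size n."""
    return max(base - slope * n, 0.0)

def step(q, n, h):
    """One generation: selection and reproduction, then harvest fraction h."""
    w_AA = fitness(2.5, 0.010, n)   # r-strategist homozygote: high fecundity
    w_Aa = fitness(1.8, 0.006, n)
    w_aa = fitness(1.2, 0.002, n)   # K-strategist homozygote: density-tolerant
    w_A = q * w_AA + (1 - q) * w_Aa                 # marginal fitness of allele A
    w_a = q * w_Aa + (1 - q) * w_aa
    w_bar = q * w_A + (1 - q) * w_a                 # mean population fitness
    return q * w_A / w_bar, w_bar * n * (1 - h)     # classical selection recursion

q, n = 0.5, 50.0
for _ in range(200):
    q, n = step(q, n, h=0.3)
print(f"equilibrium: allele-A frequency {q:.3f}, population size {n:.1f}")
```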
-
Approaches to cloud infrastructures integration
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 583-590
Views (last year): 6. Citations: 11 (RSCI).
One of the important directions of cloud technology development nowadays is the creation of methods for integrating various cloud infrastructures. The relevance of this direction in the academic field stems from a frequent lack of own computing resources and the need to attract additional ones. This article is dedicated to existing approaches to integrating cloud infrastructures with each other: federations and so-called ‘cloud bursting’. A ‘federation’ in terms of the OpenNebula cloud platform is built on a ‘one master zone and several slave zones’ schema, where a ‘zone’ is a separate cloud infrastructure within the federation. All zones in such an integration share a common database of users, and the whole federation is managed via the master zone only. This approach is most suitable when the cloud infrastructures of geographically distributed branches of a single organization need to be integrated, but due to its high centralization it is not appropriate for joining the cloud infrastructures of different organizations, and it is not applicable at all to clouds based on different software platforms. The model of federative integration implemented in the EGI Federated Cloud makes it possible to connect clouds based on different software platforms, but it requires deploying a significant number of additional services specific to the EGI Federated Cloud only, which makes the approach single-purpose and uncommon. The ‘cloud bursting’ model has none of the limitations listed above, but in the case of the OpenNebula platform, on which the cloud infrastructure of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) is based, this model was implemented only for integration with a certain set of commercial cloud resource providers. Drawing on the authors’ experience in joining the clouds of the organizations they represent, as well as the EGI Federated Cloud, the LIT JINR cloud team developed a ‘cloud bursting’ driver for integrating OpenNebula-based clouds with each other as well as with OpenStack-based ones. The article describes the driver’s architecture, the technologies and protocols it relies on, and the experience of its usage.
-
Modeling the dynamics of the population employed in the economic sectors: an agent-oriented approach
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 919-937
Views (last year): 34.
The article deals with modeling the number of people employed in branches of the economy at the national and regional levels. The lack of targeted distribution of workers in a market economy requires studying the systemic processes in the labor market that lead to different dynamics of the number of employed in the sectors of the economy. Here, the personal strategies by which economic agents choose their labor activity become important. The presence of different strategies leads to the emergence of strata in the labor market with a dynamically changing number of employees, unevenly distributed among the sectors of the economy. As a result, nonlinear fluctuations in the number of employed can be observed, and the toolkit of agent-based modeling is relevant for studying these fluctuations. In the article, we examined in-phase and anti-phase fluctuations in the number of employees by economic activity using the example of the Jewish Autonomous Region in Russia. The fluctuations were found in time series of statistical data for 2008–2016. We show that such fluctuations appear across age groups of workers. In view of this, we put forward the hypothesis that an agent in the labor market chooses a place of work by a strategy related to his age group. This directly affects the distribution of the number of employed across cohorts and the total number of employed in the sectors of the economy. The agent determines the strategy taking into account the socio-economic characteristics of the branches of the economy (different levels of wages, working conditions, prestige of the profession). We construct a basic agent-oriented model of a three-branch economy to test the hypothesis. The model takes into account various strategies of economic agents, including the choice of the highest wages, the highest prestige of the profession and the best working conditions. As a result of numerical experiments, we show that the availability of various industry selection strategies and the age preferences of employers within an industry lead to periodic and complex dynamics of the number of employees of different ages. Age preferences may result, for example, from employers’ requirements for work experience and education. Significant changes in the age structure of the employed population may also result from migration.
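As a toy illustration of the sector-choice mechanism just described, the sketch below lets agents weigh wage, prestige, and working conditions with age-dependent weights; all sector attributes and preference weights are invented, not estimated from the Jewish Autonomous Region data.

```python
# Toy agent-based sketch: workers pick one of three sectors by an
# age-group-specific weighting of wage, prestige and working conditions.
import random

SECTORS = {            # (wage, prestige, conditions), each on a 0-1 scale
    "industry":  (0.9, 0.4, 0.3),
    "services":  (0.6, 0.6, 0.7),
    "education": (0.4, 0.8, 0.8),
}
AGE_WEIGHTS = {        # preference weights by age group (invented)
    "young":  (0.3, 0.5, 0.2),   # prestige-driven
    "middle": (0.6, 0.2, 0.2),   # wage-driven
    "senior": (0.2, 0.2, 0.6),   # conditions-driven
}

def choose_sector(age_group):
    w = AGE_WEIGHTS[age_group]
    score = {s: sum(wi * ai for wi, ai in zip(w, attrs)) + random.gauss(0, 0.05)
             for s, attrs in SECTORS.items()}     # small noise breaks ties
    return max(score, key=score.get)

random.seed(1)
counts = {s: 0 for s in SECTORS}
for _ in range(1000):
    counts[choose_sector(random.choice(list(AGE_WEIGHTS)))] += 1
print(counts)          # resulting employment distribution across sectors
```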
-
The application of genetic algorithms for organizational systems’ management in case of emergency
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 533-556
Views (last year): 31.
Optimal management of a fuel supply system boils down to choosing an energy development strategy which provides consumers with the most efficient and reliable fuel and energy supply. As part of the program on switching the distributed heat supply management system of the Udmurt Republic to renewable energy sources, an “Information-analytical system of regional alternative fuel supply management” was developed. The paper presents a mathematical model of optimal management of a fuel supply logistic system consisting of three interconnected levels: raw material accumulation points, fuel preparation points and fuel consumption points, which are heat sources. In order to increase the effectiveness of the regional fuel supply system, a modification of the information-analytical system and an extension of its set of functions with methods for quick response when an emergency occurs are required. Emergencies occurring at any one of these levels demand that the management of the whole system be reconfigured. The paper demonstrates models and algorithms of optimal management in case of emergencies involving the breakdown of such production links of the logistic system as raw material accumulation points and fuel preparation points. In the mathematical models, the target criterion is minimization of the costs associated with the functioning of the logistic system in case of emergency. The implementation of the developed algorithms is based on genetic optimization algorithms, which made it possible to obtain a more accurate solution in less time. The developed models and algorithms are integrated into the information-analytical system, enabling effective management of the alternative fuel supply of the Udmurt Republic in case of emergency.
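For a sense of how a genetic algorithm applies to such reconfiguration, the sketch below reassigns consumers to the surviving fuel preparation points after one point fails, minimizing total delivery cost; the cost matrix and GA parameters are invented and far simpler than the paper's model.

```python
# Hypothetical GA sketch: after fuel preparation point 3 fails, reassign
# consumers among surviving points 0-2 to minimize total delivery cost.
import random

random.seed(0)
N_CONSUMERS, POINTS = 12, [0, 1, 2]
COST = [[random.randint(1, 9) for _ in POINTS] for _ in range(N_CONSUMERS)]

def total_cost(assign):
    return sum(COST[c][p] for c, p in enumerate(assign))

def crossover(a, b):
    cut = random.randrange(1, N_CONSUMERS)        # one-point crossover
    return a[:cut] + b[cut:]

def mutate(assign):
    a = list(assign)
    a[random.randrange(N_CONSUMERS)] = random.choice(POINTS)
    return a

pop = [[random.choice(POINTS) for _ in range(N_CONSUMERS)] for _ in range(30)]
for _ in range(100):                              # generations
    pop.sort(key=total_cost)
    elite = pop[:10]                              # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=total_cost)
print("best assignment:", best, "cost:", total_cost(best))
```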
-
Analysis of the effectiveness of machine learning methods in the problem of gesture recognition based on the data of electromyographic signals
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 175-194
Gesture recognition is an urgent challenge in developing human-machine interface systems. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals to identify the most effective one. Methods such as the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree) were considered. Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the field of view of a camera and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device was developed for recording the EMG signal, which includes three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, squeezing an index finger and waving a hand from right to left. Accuracy, precision, recall and execution time were used to evaluate the effectiveness of the classifiers. These parameters were calculated for three options for the placement of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble across the three electrode positions was 81.55%. The electrode position at which the machine learning methods achieve maximum accuracy was also determined. In this position, one of the differential electrodes is located at the intersection of the flexor digitorum profundus and flexor pollicis longus, and the second above the flexor digitorum superficialis.
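The classifier comparison can be reproduced in outline with scikit-learn; the sketch below uses synthetic stand-in features rather than real EMG recordings, and the NBC-plus-gradient-boosting ensemble is realized as soft voting, which is one plausible reading of the ensemble described.

```python
# Illustrative comparison of three of the classifiers on synthetic features
# (replace make_classification with real EMG feature vectors).
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=24, n_informative=12,
                           n_classes=5, random_state=0)   # five gestures
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(random_state=0),
    "NBC + gradient boosting": VotingClassifier(
        [("nbc", GaussianNB()),
         ("gb", GradientBoostingClassifier(random_state=0))],
        voting="soft"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy {accuracy_score(y_te, model.predict(X_te)):.3f}")
```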
-
Development of a computational environment for mathematical modeling of superconducting nanostructures with a magnet
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1349-1358
Nowadays the main research activity in the field of nanotechnology is aimed at the creation, study and application of new materials and new structures. Recently, much attention has been attracted by the possibility of controlling magnetic properties using a superconducting current, as well as by the influence of magnetic dynamics on the current–voltage characteristics of hybrid superconductor/ferromagnet (S/F) nanostructures. In particular, such structures include the S/F/S Josephson junction or molecular nanomagnets coupled to Josephson junctions. Theoretical studies of the dynamics of such structures require solving a large number of coupled nonlinear equations. Numerical modeling of hybrid superconductor/magnet nanostructures implies calculating both the magnetic dynamics and the dynamics of the superconducting phase, which strongly increases the complexity and scale of the computations, so it is advisable to use heterogeneous computing systems.
In the course of studying the physical properties of these objects, it becomes necessary to numerically solve complex systems of nonlinear differential equations, which requires significant time and computational resources.
The currently existing micromagnetic algorithms and frameworks are based on the finite difference or finite element method and are extremely useful for modeling the dynamics of magnetization on a wide time scale. However, the functionality of the existing packages does not make it possible to fully implement the desired computation scheme.
The aim of the research is to develop a unified environment for modeling hybrid superconductor/magnet nanostructures that provides access to solvers and developed algorithms and is based on a heterogeneous computing paradigm, allowing the study of superconducting elements in nanoscale structures with magnets and hybrid quantum materials. In this paper, we investigate resonant phenomena in a system of a nanomagnet coupled to a Josephson junction; such a system has rich resonant physics. To study the possibility of magnetization reversal depending on the model parameters, it is necessary to numerically solve the Cauchy problem for a system of nonlinear equations. For the numerical simulation of hybrid superconductor/magnet nanostructures, a computing environment based on the heterogeneous HybriLIT computing platform was implemented. All computation times reported were averaged over three runs. The results obtained are of great practical importance and provide the necessary information for evaluating the physical parameters of superconductor/magnet hybrid nanostructures.
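The coupled junction-magnet equations themselves are not given in the abstract; as a stand-in, the sketch below sets up a Cauchy problem for a single resistively and capacitively shunted Josephson junction and solves it with SciPy, with invented parameters, to show the kind of computation such an environment automates.

```python
# Illustrative Cauchy problem: an RCSJ-model Josephson junction under dc bias.
# The paper's coupled superconductor/nanomagnet system is more involved.
import numpy as np
from scipy.integrate import solve_ivp

beta_c, i_bias = 0.7, 1.2            # McCumber damping parameter, bias current

def rhs(t, y):
    phi, v = y                       # Josephson phase and voltage v = dphi/dt
    return [v, (i_bias - np.sin(phi) - v) / beta_c]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.05)
v_mean = np.mean(sol.y[1][sol.t > 100.0])   # time-averaged voltage (plateau)
print(f"time-averaged voltage: {v_mean:.3f}")
```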
-
Assessing the impact of deposit benchmark interest rate on banking loan dynamics
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 1023-1032
Deposit benchmark interest rates are a policy instrument used by banking regulators to calculate the interest rates offered to depositors, maintaining equitable and competitive rates within the financial industry. The benchmark functions as a reference for pricing various banking products, expenses, and financial choices. It has a direct impact on the amount of money deposited, which in turn determines the amount of money available for lending. This motivates us to analyze the influence of deposit benchmark interest rates on the dynamics of banking loans. This study examines the issue using a difference equation for banking loans, in which the decision on the loan volume in the next period is influenced by both the present loan volume and information on its marginal profit. We analyze the loan equilibrium point and its stability, as well as the bifurcations that arise in the model. To ensure a stable banking loan volume, the benchmark rate must be set higher than the flip bifurcation value and lower than the transcritical bifurcation value. This result is supported by the bifurcation diagram and the associated Lyapunov exponent. Insufficient deposit benchmark interest rates might lead to chaotic dynamics in banking lending. A bifurcation diagram in two parameters is also shown. We perform a numerical sensitivity analysis by examining contour plots of the stability requirements, which vary with the deposit benchmark interest rate and other parameters. In addition, we examine a nonstandard difference scheme for the model, assess its stability, and compare it with the standard model. The outcomes of our study can provide valuable insights to banking regulators for making informed decisions about deposit benchmark interest rates, taking several other banking factors into account.
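The paper's exact difference equation is not reproduced in the abstract; the sketch below iterates one common bounded-rationality loan map in which the benchmark rate d enters the marginal profit, and estimates the Lyapunov exponent, merely to illustrate how a low benchmark rate can push such a map into chaos. All parameters are invented.

```python
# Hypothetical loan map L' = L + ALPHA * L * (A - 2*B*L - d): loans adjust
# along the marginal-profit gradient; d is the deposit benchmark rate.
import math

A, B, ALPHA = 2.0, 0.5, 1.5          # demand intercept/slope, adjustment speed

def step(L, d):
    return L + ALPHA * L * (A - 2 * B * L - d)

def lyapunov(d, n=5000, burn=1000):
    """Average log-derivative of the map along a trajectory."""
    L, acc = 0.5, 0.0
    for i in range(n):
        L = step(L, d)
        if i >= burn:
            acc += math.log(abs(1 + ALPHA * (A - d) - 4 * ALPHA * B * L))
    return acc / (n - burn)

for d in (0.2, 0.5, 0.9):
    lam = lyapunov(d)
    print(f"benchmark rate {d:.1f}: Lyapunov exponent {lam:+.3f}",
          "(chaotic)" if lam > 0 else "(stable or periodic)")
```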
-
Generating database schema from requirement specification based on natural language processing and large language model
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713
A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. Such models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating it into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas from textual requirement specifications can be significantly enhanced. The proposed tool utilizes these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs), analyzes the input, and automatically generates a relational database schema in the form of SQL commands. This eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations seeking to optimize their workflow and align technical deliverables with business requirements seamlessly.
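The core requirements-to-schema step could be prototyped in a few lines; the sketch below assumes the OpenAI Python client, and the model name and prompt are illustrative stand-ins for whatever LLM the described tool actually uses.

```python
# Hypothetical sketch: ask an LLM to turn a requirement specification into
# SQL DDL. Requires the `openai` package and an OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

requirements = """
The system stores students and courses. Each student has a name and an
email; each course has a title and credits. A student may enroll in many
courses, and each enrollment records a grade.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You translate requirement specifications into a "
                    "relational database schema. Output only SQL DDL."},
        {"role": "user", "content": requirements},
    ],
)
print(response.choices[0].message.content)   # CREATE TABLE statements
```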
-
Views (last year): 3.
Storage is an essential and expensive part of cloud computation, both from the point of view of network requirements and of data access organization, so the choice of storage architecture can be crucial for any application. In this article we look at the types of cloud architectures for data processing and data storage based on proven enterprise storage technology. The advantage of cloud computing is the ability to virtualize and share resources among different applications for better server utilization. We discuss and evaluate distributed data processing, database architectures for cloud computing, and database queries in the local network and under real-time conditions.