-
The BES-III experiment at IHEP CAS, Beijing, is running at the high-luminosity e+e- collider BEPC-II to study the physics of charm quarks and tau leptons. The world's largest samples of J/psi and psi' events have already been collected, and a number of unique data samples in the energy range 2.5–4.6 GeV have been taken. The data volume is expected to increase by an order of magnitude in the coming years. This requires moving from a centralized computing system to a distributed computing environment, allowing the use of computing resources from remote sites that are members of the BES-III Collaboration. This report presents general information, the latest results and the development plans of the BES-III distributed computing system.
-
Mathematical model of respiratory regulation during hypoxia and hypercapnia
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 297-310
Transport of respiratory gases by the respiratory and circulatory systems is one of the most important processes sustaining the human body. Significant and/or long-term deviations of oxygen and carbon dioxide concentrations in blood from their normal values can cause significant pathological changes with irreversible consequences: lack of oxygen (hypoxia and ischemic events), a shift in the acid-base balance of blood (acidosis or alkalosis), and others. Under changing external environment and internal conditions of the body, its regulatory systems act to maintain homeostasis. One of the major mechanisms for maintaining concentrations (partial pressures) of oxygen and carbon dioxide in the blood at a normal level is the regulation of minute ventilation, respiratory rate and depth of respiration, which is driven by the activity of the central and peripheral regulators.
In this paper we propose a mathematical model of the regulation of pulmonary ventilation. The model is used to calculate the adaptation of minute ventilation during hypoxia and hypercapnia. It is built on a single-component model of the lungs and on the biochemical equilibrium conditions for oxygen and carbon dioxide in the blood and in the alveolar lung volume. A comparison with laboratory data is performed for hypoxia and hypercapnia. Analysis of the results shows that the model reproduces the dynamics of minute ventilation during hypercapnia with sufficient accuracy, whereas a more accurate model of minute ventilation regulation during hypoxia still needs to be developed. The factors preventing satisfactory accuracy are analysed in the final section.
Respiratory function is one of the main limiting factors of the organism during intense physical activity, and thus an important characteristic in high-performance sport and under extreme physical loads. Therefore, the results of this study have significant applied value in the field of mathematical modeling in sport. The considered conditions of hypoxia and hypercapnia partly reproduce training at high altitude and under hypoxic conditions, whose purpose is to increase the hemoglobin level in the blood of highly qualified athletes; these are the only such conditions admitted by sports committees.
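As an illustration of the kind of ventilation-control dependence discussed above, here is a minimal steady-state sketch in the spirit of the classic Lloyd-Cunningham chemoreflex description (not the authors' model; all parameter values and the specific functional form are illustrative assumptions):

```python
# Minimal steady-state chemoreflex sketch: minute ventilation VE as a
# function of arterial CO2 and O2 partial pressures, in the spirit of the
# classic Lloyd-Cunningham form VE = D (1 + A/(PaO2 - C)) (PaCO2 - B).
# Not the paper's model; all parameter values are illustrative.

def minute_ventilation(pa_co2, pa_o2,
                       d=2.0,    # overall gain, L/min per mmHg CO2
                       a=25.0,   # hypoxic sensitivity, mmHg
                       b=35.0,   # CO2 apnea threshold, mmHg
                       c=30.0):  # asymptote of the hypoxic response, mmHg
    hypoxic = 1.0 + a / max(pa_o2 - c, 1.0)    # grows as O2 falls
    return d * hypoxic * max(pa_co2 - b, 0.0)  # linear in CO2 above threshold

# Hypercapnia at normal O2: ventilation rises roughly linearly with CO2.
for pa_co2 in (40.0, 45.0, 50.0):
    print("PaCO2", pa_co2, "->", round(minute_ventilation(pa_co2, 95.0), 1))
# Hypoxia at fixed CO2: ventilation rises as PaO2 drops.
for pa_o2 in (95.0, 60.0, 45.0):
    print("PaO2", pa_o2, "->", round(minute_ventilation(42.0, pa_o2), 1))
```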
-
NLP-based automated compliance checking of data processing agreements against General Data Protection Regulation
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1667-1685
In the contemporary world, compliance with data protection regulations such as the GDPR is central to organizations. Compliance is hampered by the fact that legal documents are often complex and that regulations keep changing. This paper describes the ways in which NLP helps keep GDPR compliance manageable through automated compliance scanning, evaluation of privacy policies, and increased transparency. The work is not limited to exploring the application of NLP to privacy policies and to facilitating a better understanding of third-party data sharing: it also carries out preliminary studies comparing several NLP models. The models are implemented and executed to identify the one that performs best in terms of the efficiency and speed with which it automates compliance verification and privacy policy analysis. The research also discusses applying automatic tools and data analysis to the GDPR, for instance, the generation of machine-readable models that assist in compliance evaluation. Among the evaluated models, SBERT performed best at the policy level with an accuracy of 0.57, precision of 0.78, recall of 0.83, and F1-score of 0.80. BERT showed the highest performance at the sentence level, achieving an accuracy of 0.63, precision of 0.70, recall of 0.50, and F1-score of 0.55. The paper thus emphasizes the importance of NLP in helping organizations overcome the difficulties of GDPR compliance and outlines a roadmap toward a more client-oriented data protection regime. By comparing the preliminary studies and showing the performance of the better model, it helps strengthen compliance measures and fosters the defense of individual rights in cyberspace.
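A minimal sketch of the kind of sentence-level matching such a pipeline might use: embed policy sentences and GDPR provisions with an SBERT model and flag the closest provision by cosine similarity. The checkpoint name and the example texts are assumptions for illustration; the paper's actual pipeline and training setup are not reproduced here.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative GDPR provisions (paraphrased) and policy sentences.
provisions = [
    "Personal data shall be processed lawfully, fairly and transparently.",
    "Personal data shall be kept no longer than necessary for the purposes.",
]
policy_sentences = [
    "We retain your data indefinitely for analytics purposes.",
    "We explain clearly how and why your data is processed.",
]

# Any SBERT checkpoint works here; this is a common lightweight choice.
model = SentenceTransformer("all-MiniLM-L6-v2")
prov_emb = model.encode(provisions, convert_to_tensor=True)
sent_emb = model.encode(policy_sentences, convert_to_tensor=True)

# Cosine similarity between every policy sentence and every provision.
scores = util.cos_sim(sent_emb, prov_emb)
for i, sent in enumerate(policy_sentences):
    j = scores[i].argmax().item()
    print(f"{sent!r} -> provision {j}, similarity {scores[i][j]:.2f}")
```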
-
The introduction of baryon string in the model of spiral galaxies structure
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 597-612
The paper proposes a new alternative approach to explaining the flat velocity spectrum of the orbital motion of stars at the periphery of spiral galaxies, in particular, the significant excess of that velocity over the value calculated from the virial theorem. The concept rests on the assumption that the gravitational field of the central body of the galaxy has cylindrical rather than spherical symmetry. Such a field configuration can be explained by the presence, along the galaxy axis, of a cosmic string whose length spans the diameter of the galactic disk. This model is compared with the more traditional concept of a spherical dark matter halo around the spiral galaxy. For the latter approach a kinematic model is also offered, together with a hypothesis about the nature of dark matter. The paper also examines astronomical observations concerning the presence of cosmic strings in the zones adjacent to galaxies.
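For context, a short worked comparison of the two symmetries (standard Newtonian gravity, not taken from the paper): a spherical central mass gives a falling rotation curve, whereas the field of an infinite line mass gives a flat one.

```latex
% Spherical central mass M: balancing gravity and centripetal acceleration,
%   v^2/r = GM/r^2  =>  v(r) = \sqrt{GM/r} \propto r^{-1/2}.
% Infinite line mass (string) with linear density \mu: the field is
% cylindrical, g(r) = 2G\mu/r, so
%   v^2/r = 2G\mu/r  =>  v = \sqrt{2G\mu} = \mathrm{const},
% i.e. a flat rotation curve independent of the distance r from the axis.
\[
  v_{\text{sphere}}(r) = \sqrt{\frac{GM}{r}}, \qquad
  v_{\text{string}} = \sqrt{2G\mu}.
\]
```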
-
Evolutionary effects of non-selective sustainable harvesting in a genetically heterogeneous population
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 717-735
The problem of harvest optimization remains a central challenge in mathematical biology. The concept of Maximum Sustainable Yield (MSY), widely used in optimal exploitation theory, proposes maintaining target populations at levels ensuring maximum reproduction, theoretically balancing economic benefits with resource conservation. While MSY-based management promotes population stability and system resilience, it faces significant limitations due to complex intrapopulation structures and nonlinear dynamics in exploited species. Of particular concern are the evolutionary consequences of harvesting, as artificial selection may drive changes divergent from natural selection pressures. Empirical evidence confirms that selective harvesting alters behavioral traits, reduces offspring quality, and modifies population gene pools. In contrast, the genetic impacts of non-selective harvesting remain poorly understood and require further investigation.
This study examines how non-selective harvesting with constant removal rates affects evolution in genetically heterogeneous populations. We model genetic diversity controlled by a single diallelic locus, where different genotypes dominate at high and low densities: r-strategists (high fecundity) versus K-strategists (resource-limited resilience). The classical ecological-genetic model with discrete time is considered; it assumes that the fitness of each genotype depends linearly on the population size. By including the harvesting withdrawal coefficient, the model makes it possible to link the problem of optimizing harvest with that of predicting genotype selection (see the sketch below).
Analytical results demonstrate that under MSY harvesting the equilibrium genetic composition remains unchanged while population size halves. The type of genetic equilibrium may shift, as optimal harvest rates differ between equilibria. Natural K-strategist dominance may reverse toward r-strategists, whose high reproduction compensates for harvest losses. Critical harvesting thresholds triggering strategy shifts were identified.
These findings explain why exploited populations show slow recovery after harvesting cessation: exploitation reinforces adaptations beneficial under removal pressure but maladaptive in natural conditions. For instance, captive arctic foxes select for high-productivity genotypes, whereas wild populations favor lower-fecundity/higher-survival phenotypes. This underscores the necessity of incorporating genetic dynamics into sustainable harvesting management strategies, as MSY policies may inadvertently alter evolutionary trajectories through density-dependent selection processes. Recovery periods must account for genetic adaptation timescales in management frameworks.
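A minimal sketch of a discrete-time, one-locus ecological-genetic model of this general type with constant-rate non-selective harvesting (the linear fitness coefficients and the harvest rate below are illustrative assumptions, not the paper's calibration):

```python
# One diallelic locus (alleles A, a), discrete time. Genotype fitness
# declines linearly with population size N, as the model assumes, and a
# constant fraction h of the population is removed each season.
# All numeric coefficients are illustrative.

def step(q, n, h=0.2):
    # Linearly density-dependent genotype fitnesses w_ij = a_ij - b_ij * N.
    w_AA = max(2.0 - 0.010 * n, 0.0)   # r-strategist-like homozygote
    w_Aa = max(1.6 - 0.006 * n, 0.0)
    w_aa = max(1.2 - 0.002 * n, 0.0)   # K-strategist-like homozygote
    # Mean fitness under Hardy-Weinberg proportions.
    w_bar = q * q * w_AA + 2 * q * (1 - q) * w_Aa + (1 - q) ** 2 * w_aa
    # Standard selection recursion for the allele frequency.
    q_next = q * (q * w_AA + (1 - q) * w_Aa) / w_bar
    # Reproduction, then non-selective removal of the fraction h.
    n_next = (1 - h) * w_bar * n
    return q_next, n_next

q, n = 0.5, 50.0
for t in range(200):
    q, n = step(q, n)
print(f"long-run state: q = {q:.3f}, N = {n:.1f}")
```

With these illustrative coefficients the high-fecundity allele fixates under harvesting, mirroring the shift toward r-strategists described above.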
-
Approaches to cloud infrastructures integration
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 583-590
One of the important directions in the development of cloud technologies nowadays is the creation of methods for integrating various cloud infrastructures. The relevance of this direction in the academic field stems from a frequent lack of own computing resources and the need to attract additional ones. This article reviews existing approaches to integrating cloud infrastructures with each other: federations and so-called ‘cloud bursting’. A ‘federation’ in terms of the OpenNebula cloud platform is built on a ‘one master zone and several slave zones’ schema, where a ‘zone’ is a separate cloud infrastructure within the federation. All zones in such an integration share a common database of users, and the whole federation is managed via the master zone only. This approach is most suitable when the cloud infrastructures of geographically distributed branches of a single organization need to be integrated, but due to its high centralization it is not appropriate for joining the cloud infrastructures of different organizations, and it is not applicable at all to clouds based on different software platforms. The model of federative integration implemented in the EGI Federated Cloud allows connecting clouds based on different software platforms, but it requires the deployment of a substantial number of additional services specific to the EGI Federated Cloud only, which makes this approach a single-purpose and uncommon one. The ‘cloud bursting’ model has none of the limitations listed above, but in the OpenNebula platform, on which the cloud infrastructure of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) is based, this model was implemented only for integration with a certain set of commercial cloud resource providers. Drawing on the authors' experience in joining the clouds of the organizations they represent, as well as with the EGI Federated Cloud, a ‘cloud bursting’ driver was developed by the LIT JINR cloud team for integrating OpenNebula-based clouds with each other as well as with OpenStack-based ones. The driver's architecture, the technologies and protocols it relies on, and the experience of its usage are described in the article.
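To illustrate the general idea of ‘cloud bursting’ (not the LIT JINR driver itself; the classes and method names below are hypothetical and do not correspond to any real OpenNebula or OpenStack API), the placement logic reduces to: run a VM locally while capacity allows, otherwise provision it on a linked remote cloud.

```python
# Hypothetical illustration of cloud-bursting placement logic. The Cloud
# class below models capacity only; it is not a real cloud-platform API.

from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    free_cores: int

    def try_allocate(self, cores: int) -> bool:
        # Accept the VM if enough cores remain, and reserve them.
        if cores <= self.free_cores:
            self.free_cores -= cores
            return True
        return False

def place(cores: int, local: Cloud, remotes: list[Cloud]) -> str:
    # Prefer local resources; burst to a remote cloud only when full.
    if local.try_allocate(cores):
        return local.name
    for cloud in remotes:
        if cloud.try_allocate(cores):
            return cloud.name
    raise RuntimeError("no capacity in local or remote clouds")

local = Cloud("local-opennebula", free_cores=8)
remotes = [Cloud("partner-openstack", free_cores=16)]
for cores in (4, 4, 4):   # the third request overflows to the remote cloud
    print(cores, "cores ->", place(cores, local, remotes))
```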
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks can be represented as a min-max optimization problem, and the training of regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent communicates only with its immediate neighbors.
We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e. it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
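A minimal sketch of the two-point zero-order estimator mentioned above, applied to a single node's local objective (the test function, step sizes and smoothing parameter are illustrative; this is the standard randomized-smoothing estimator, not the paper's full distributed algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, z, tau=1e-3):
    """Random two-point gradient approximation of f at z.

    Its expectation is the gradient of the smoothed function
    f_tau(z) = E_e f(z + tau * e), with e uniform on the unit sphere.
    """
    e = rng.standard_normal(z.shape)
    e /= np.linalg.norm(e)          # uniform direction on the unit sphere
    return z.size * (f(z + tau * e) - f(z - tau * e)) / (2 * tau) * e

# Illustrative nonsmooth strongly-convex-strongly-concave saddle function
# f(x, y); its saddle point is at the origin.
A = np.array([[1.0, 2.0], [0.5, -1.0]])
def f(z):
    x, y = z[:2], z[2:]
    return 0.5 * x @ x + x @ A @ y - 0.5 * y @ y + np.abs(x).sum()

# Zero-order gradient descent-ascent using only function values.
z = np.ones(4)
for t in range(5000):
    g = two_point_grad(f, z)
    step = 0.05 / np.sqrt(t + 1.0)
    z[:2] -= step * g[:2]           # descend in x
    z[2:] += step * g[2:]           # ascend in y
print("approximate saddle point:", np.round(z, 2))
```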
Keywords: convex optimization, distributed optimization.
-
Methods of evaluating the effectiveness of systems for computing resources monitoring
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 661-668
This article discusses the contribution of a computing resources monitoring system to the work of a distributed computing system. A method for evaluating this contribution and the performance of the monitoring system, based on measures of certainty about the state of the controlled system, is proposed. The application of this methodology in the design and development of the local monitoring system of the Central Information and Computing Complex of the Joint Institute for Nuclear Research is described.
-
Connection between discrete financial models and continuous models with Wiener and Poisson processes
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 781-795
The paper is devoted to the study of relationships between discrete and continuous models of financial processes and their probabilistic characteristics. First, a connection is established between the price processes of stocks, hedging portfolios and options in models driven by binomial perturbations and in their limits driven by perturbations of Brownian-motion type. Second, analogies are established between the coefficients of stochastic equations with various random processes, continuous and jump-type, and the coefficients of the corresponding deterministic equations for their probabilistic characteristics. Presenting these connections and analogies required an adequate exposition of preliminary information and results from financial mathematics, as well as descriptions of the related objects of stochastic analysis. The paper therefore presents, in a form accessible to readers who are not specialists in financial mathematics and stochastic analysis but for whom these results matter from the point of view of applications, partially new as well as known results. Specifically, the following sections are presented.
• In one- and n-period binomial models, a unified approach is proposed for defining, on the probability space, a risk-neutral measure under which the discounted option price becomes a martingale. The resulting martingale formula for the option price is suitable for numerical simulation (see the sketch after this list). In the following sections, the risk-neutral measure approach is applied to study financial processes in continuous-time models.
• In continuous time, models of stock prices, hedging portfolios and options are considered in the form of stochastic equations with the Ito integral with respect to Brownian motion and to a compensated Poisson process. The study of the properties of these processes in this section is based on one of the central objects of stochastic analysis, the Ito formula; special attention is given to the methods of its application.
• The famous Black – Scholes formula is presented, which gives a solution to the partial differential equation for the function $v(t, x)$ that, upon the substitution $x = S(t)$, where $S(t)$ is the stock price at moment $t$, gives the price of the option in the model with continuous perturbation by Brownian motion (the formula is reproduced in the sketch after this list).
• An analogue of the Black – Scholes formula is suggested for the case of a model with jump-type perturbation by a Poisson process. The derivation of this formula is based on the technique of risk-neutral measures and on the independence lemma.
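As referenced in the list above, here is a minimal sketch of the two textbook-standard pricing formulas involved: the one-period binomial risk-neutral price and the Black – Scholes formula for a European call (the numeric parameters are illustrative):

```python
from math import erf, exp, log, sqrt

def binomial_call_1period(s0, k, u, d, r):
    """One-period binomial price of a call via the risk-neutral measure.

    Under the risk-neutral probability q = (1 + r - d) / (u - d) the
    discounted stock price is a martingale, and the option price is the
    discounted risk-neutral expectation of the payoff.
    """
    q = (1 + r - d) / (u - d)
    v_up, v_down = max(s0 * u - k, 0.0), max(s0 * d - k, 0.0)
    return (q * v_up + (1 - q) * v_down) / (1 + r)

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call with time to maturity t."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

print(binomial_call_1period(s0=100, k=100, u=1.2, d=0.8, r=0.05))
print(black_scholes_call(s=100, k=100, r=0.05, sigma=0.2, t=1.0))
```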
-
Spatio-temporal models of ICT diffusion
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1695-1712
The article proposes a space-time approach to modeling the diffusion of information and communication technologies based on the Fisher – Kolmogorov – Petrovsky – Piskunov equation, in which the kinetic term is described by the Bass model widely used to model the diffusion of innovations in the market. The equilibrium positions of this equation are studied, and, based on singular perturbation theory, an approximate solution is obtained in the form of a traveling wave, i.e. a solution that propagates at a constant speed while maintaining its shape in space. The wave speed shows how much the "spatial" characteristic that determines a given level of technology dissemination changes per unit time; this speed is significantly higher than the speed of propagation due to diffusion alone. Constructing such an autowave solution makes it possible to estimate the time required for the subject of research to reach the current indicator of the leader.
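For reference, a plausible form of the equation described above (a reconstruction from the abstract, not copied from the paper): the classic Bass kinetics embedded in a reaction-diffusion equation, where $N(x,t)$ is the number of adopters, $M$ the market potential, $p$ and $q$ the innovation and imitation coefficients, and $D$ the diffusion coefficient.

```latex
% Bass kinetics:            dN/dt = (p + q N / M)(M - N)
% FKPP-type spatial model:  the same kinetics plus spatial diffusion
\[
  \frac{\partial N}{\partial t}
    = D\,\frac{\partial^2 N}{\partial x^2}
    + \Bigl(p + q\,\frac{N}{M}\Bigr)(M - N),
\]
% with traveling-wave solutions N(x, t) = U(x - ct) propagating at a
% constant speed c while preserving their shape.
```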
The obtained approximate solution was then applied to assess the factors affecting the rate of dissemination of information and communication technologies in the federal districts of the Russian Federation. Various socio-economic indicators were considered as "spatial" variables for the diffusion of mobile communications among the population. Growth poles in which innovation occurs are usually characterized by the highest values of the "spatial" variables; for Russia, Moscow is such a growth pole, so the indicators of the federal districts relative to Moscow's indicators were considered as factor indicators. The best approximation to the initial data was obtained for the ratio of the share of R&D costs in GRP to Moscow's indicator, averaged over the period 2000–2009. It was found that at the initial stage of the spread of mobile communications the lag behind the capital was less than one year for the Ural Federal District, 1.4 years for the Central and Northwestern Federal Districts, less than two years for the Volga, Siberian, Southern and Far Eastern Federal Districts, and a little more than 2 years for the North Caucasian Federal District. In addition, estimates were obtained of the lag of the federal districts of the Russian Federation behind Moscow's indicators in the spread of digital technologies (intranet, extranet, etc.) used by organizations.